Add files using upload-large-folder tool
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/C09sAJ960u/Initial_manuscript_md/Initial_manuscript.md +144 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/C09sAJ960u/Initial_manuscript_tex/Initial_manuscript.tex +181 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/GYgu8Yq_96/Initial_manuscript_md/Initial_manuscript.md +313 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/GYgu8Yq_96/Initial_manuscript_tex/Initial_manuscript.tex +267 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/J4QatK02Qc/Initial_manuscript_md/Initial_manuscript.md +406 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/J4QatK02Qc/Initial_manuscript_tex/Initial_manuscript.tex +310 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/LsEd-S3ofyW/Initial_manuscript_md/Initial_manuscript.md +109 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/LsEd-S3ofyW/Initial_manuscript_tex/Initial_manuscript.tex +61 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/SG3ztVYDubA/Initial_manuscript_md/Initial_manuscript.md +547 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/SG3ztVYDubA/Initial_manuscript_tex/Initial_manuscript.tex +99 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/Tmb13sYJwP/Initial_manuscript_md/Initial_manuscript.md +296 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/Tmb13sYJwP/Initial_manuscript_tex/Initial_manuscript.tex +156 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/bHbf5-nE8N/Initial_manuscript_md/Initial_manuscript.md +249 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/bHbf5-nE8N/Initial_manuscript_tex/Initial_manuscript.tex +290 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/flfJ1OwD-FD/Initial_manuscript_md/Initial_manuscript.md +133 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/flfJ1OwD-FD/Initial_manuscript_tex/Initial_manuscript.tex +159 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/rmIJnScwO6/Initial_manuscript_md/Initial_manuscript.md +284 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/rmIJnScwO6/Initial_manuscript_tex/Initial_manuscript.tex +178 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/y7XveyWYzIB/Initial_manuscript_md/Initial_manuscript.md +211 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/y7XveyWYzIB/Initial_manuscript_tex/Initial_manuscript.tex +99 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/zZcCINENgm/Initial_manuscript_md/Initial_manuscript.md +253 -0
- papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/zZcCINENgm/Initial_manuscript_tex/Initial_manuscript.tex +293 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/3IKKBxByalk/Initial_manuscript_md/Initial_manuscript.md +344 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/3IKKBxByalk/Initial_manuscript_tex/Initial_manuscript.tex +223 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/7jxwhNDM0Uv/Initial_manuscript_md/Initial_manuscript.md +301 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/7jxwhNDM0Uv/Initial_manuscript_tex/Initial_manuscript.tex +266 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/CSqjS121nsU/Initial_manuscript_md/Initial_manuscript.md +197 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/CSqjS121nsU/Initial_manuscript_tex/Initial_manuscript.tex +240 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/DxtEfUpf2q7/Initial_manuscript_md/Initial_manuscript.md +195 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/DxtEfUpf2q7/Initial_manuscript_tex/Initial_manuscript.tex +206 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/Jc65VYwYVB/Initial_manuscript_md/Initial_manuscript.md +203 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/Jc65VYwYVB/Initial_manuscript_tex/Initial_manuscript.tex +133 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/MEQ_DSSJam_/Initial_manuscript_md/Initial_manuscript.md +307 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/MEQ_DSSJam_/Initial_manuscript_tex/Initial_manuscript.tex +117 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/OSVxDDc360z/Initial_manuscript_md/Initial_manuscript.md +424 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/OSVxDDc360z/Initial_manuscript_tex/Initial_manuscript.tex +190 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/RlVTYWhsky7/Initial_manuscript_md/Initial_manuscript.md +371 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/RlVTYWhsky7/Initial_manuscript_tex/Initial_manuscript.tex +261 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/SR2L__h9q9p/Initial_manuscript_md/Initial_manuscript.md +235 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/SR2L__h9q9p/Initial_manuscript_tex/Initial_manuscript.tex +271 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/_P9LyJ5pMDb/Initial_manuscript_md/Initial_manuscript.md +147 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/_P9LyJ5pMDb/Initial_manuscript_tex/Initial_manuscript.tex +132 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/aaI4jKANEH4/Initial_manuscript_md/Initial_manuscript.md +383 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/aaI4jKANEH4/Initial_manuscript_tex/Initial_manuscript.tex +364 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/cnLz5ckGs1y/Initial_manuscript_md/Initial_manuscript.md +213 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/cnLz5ckGs1y/Initial_manuscript_tex/Initial_manuscript.tex +299 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/gp8Hkp9y0bw/Initial_manuscript_md/Initial_manuscript.md +182 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/gp8Hkp9y0bw/Initial_manuscript_tex/Initial_manuscript.tex +155 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/gyUMlKhTJZe/Initial_manuscript_md/Initial_manuscript.md +241 -0
- papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/gyUMlKhTJZe/Initial_manuscript_tex/Initial_manuscript.tex +396 -0
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/C09sAJ960u/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,144 @@
# EXBEHRT: EXTENDED TRANSFORMER FOR ELECTRONIC HEALTH RECORDS
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
In this study, we introduce ExBEHRT, an extended version of BEHRT (BERT applied to Electronic Health Record data), and apply various algorithms to interpret its results. While BEHRT solely incorporates diagnoses and age of patients, we extend the feature space to several multi-modal records, namely demographics, clinical characteristics, vital signs, smoking status, diagnoses, procedures, medications and lab tests, using a novel method to unify the frequencies and temporal dimensions of the different features. We show that the additional features yield significant benefits in model performance for various downstream tasks across different diseases. To ensure robustness, we interpret the model predictions using an adaptation of Expected Gradients, which has not been applied to Transformers with EHR data so far and yields more granular interpretations than previous approaches such as feature and token importances. Further, by clustering the model's representations of oncology patients, we show that the model has an implicit understanding of the disease and is able to divide patients suffering from the same cancer type into different risk groups. Given the additional features and interpretability, ExBEHRT can help make informed decisions about disease progressions, diagnoses and risk factors of various diseases.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Over the last decade, Electronic Health Records have become increasingly popular for documenting a patient's treatments, labs, vital signs, etc. Commonly, a sequence of medical events is referred to as a patient journey. Given the immense amount of longitudinal data available, there lies tremendous potential for Machine Learning to generate novel insights about the recognition of disease patterns, progression and subgroups as well as treatment planning. Recent studies adapted Transformers to structured EHR data, demonstrating superiority in various benchmarks compared to other, similar algorithms. The first adaptation of Transformers to EHR data, called BEHRT (Li et al., 2020), incorporated diagnosis concepts and age from EHRs and additionally added embeddings for the separation of individual visits and a positional embedding for the visit number. Other models such as Med-BERT (Rasmy et al., 2021), CEHR-BERT (Pang et al., 2021) and BRLTM (Meng et al., 2021) added more features by concatenating the inputs into one long patient sequence. These approaches are limited in the amount of data of a single patient they can process, and the required computational power increases significantly with each added feature.
|
| 14 |
+
|
| 15 |
+
In this work, we introduce a novel approach of incorporating multi-modal features into Transformer models by adding medical concepts separately and vertically instead of concatenating all concepts horizontally. We show that these features are important in several downstream applications such as mortality prediction, patient subtyping and disease progression prediction.
|
| 16 |
+
|
| 17 |
+
## 2 EXBEHRT FOR EHR REPRESENTATION LEARNING
|
| 18 |
+
|
| 19 |
+
ExBEHRT is an extension of BEHRT where medical concepts are not concatenated into one long vector, but rather grouped into separate, learnable embeddings per type of concept. This way, we avoid exploding input lengths when adding new medical features and give the model the capability to learn which concepts it should focus on. Clinically, it is also consistent to keep diagnoses, procedures, medications, etc. separate, as they offer different clinical value for downstream applications. We take the number of diagnoses in a visit as the indicator of how many "horizontal slots" for other concepts are available at this visit (e.g. two for the first visit in figure 1). Therefore, the maximal patient journey length is defined by the number of diagnosis codes of a patient, independent of the number of other concepts added to the model. Another advantage of this procedure is its ability to deal with varying frequencies and sparseness of additional concepts. As exemplified with procedures in figure 1, but executed in the same manner for labs, there are three possible cases when adding a new concept to a visit (a short code sketch follows the list):
|
| 20 |
+
|
| 21 |
+
1. The number of procedures is equal to the number of horizontal slots available in the visit (visit 1 - two each). The procedures can therefore be represented as a 1D vector.
|
| 22 |
+
|
| 23 |
+
2. The number of procedures exceeds the number of slots available in the visit (visit 2 - one diagnosis, two procedures). Here, the procedures fill the horizontal slots row by row until no procedures are left, resulting in a 2D vector of dimensions $\#\text{slots} \times \lceil \frac{\#\text{procedures}}{\#\text{slots}} \rceil$ .
|
| 24 |
+
|
| 25 |
+
3. The number of procedures is smaller than the number of slots available (visit 3 - one diagnosis, no procedures). Procedures are represented as a 1D vector and then padded to the number of horizontal slots available.
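To make the slot-filling procedure concrete, a minimal sketch is given below. The function name, padding token and exact row-wise layout are illustrative assumptions, not the implementation used for ExBEHRT:

```python
import math

def fill_slots(procedures, n_slots, pad_token="PAD"):
    """Arrange the procedure codes of one visit into the visit's horizontal
    slots, covering the three cases above. Returns a list of rows of length n_slots."""
    if not procedures:
        # Case 3: fewer procedures than slots -> one row, fully padded.
        return [[pad_token] * n_slots]
    n_rows = math.ceil(len(procedures) / n_slots)  # Case 1 yields n_rows == 1
    padded = procedures + [pad_token] * (n_rows * n_slots - len(procedures))
    # Case 2: fill the horizontal slots row by row until no procedures are left.
    return [padded[r * n_slots:(r + 1) * n_slots] for r in range(n_rows)]

# Visit 2 of figure 1: one diagnosis (one slot), two procedures -> two rows.
print(fill_slots(["94640", "99283"], n_slots=1))  # [['94640'], ['99283']]
```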
|
| 26 |
+
|
| 27 |
+

|
| 28 |
+
|
| 29 |
+
Figure 1: An example of how ExBEHRT represents a patient with a constant sentence length $m$ .
|
| 30 |
+
|
| 31 |
+
After reshaping, all procedures and labs of all patients are padded to the same number of rows $n$ to enable batch processing. Before passing the inputs to the model, each token is embedded into a 288-dimensional vector and all tokens are summed vertically. Figure 7 in the appendix shows the final representation of one patient.
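A minimal PyTorch-style sketch of this vertical summation follows; the module name, the per-concept vocabularies and the handling of stacked rows are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ConceptSumEmbedding(nn.Module):
    """Embed each concept stream separately and sum the embeddings element-wise
    over the vertical (concept) axis, yielding one hidden vector per slot."""
    def __init__(self, vocab_sizes: dict, hidden_size: int = 288):
        super().__init__()
        self.embeddings = nn.ModuleDict(
            {name: nn.Embedding(size, hidden_size, padding_idx=0)
             for name, size in vocab_sizes.items()})

    def forward(self, inputs: dict) -> torch.Tensor:
        # inputs[name]: (batch, m) for 1D concepts such as diagnosis or age,
        # or (batch, n, m) for stacked concepts such as procedures and labs.
        out = 0
        for name, ids in inputs.items():
            emb = self.embeddings[name](ids)   # (..., m, 288)
            if emb.dim() == 4:                 # stacked rows: sum over the n axis
                emb = emb.sum(dim=1)
            out = out + emb                    # (batch, m, 288)
        return out
```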
|
| 32 |
+
|
| 33 |
+
### 2.1 DATA
|
| 34 |
+
|
| 35 |
+
In this study, we used one of the largest EHR datasets from the USA. We only selected data points collected at hospitalizations to ensure data quality and consistency. Each patient is required to have at least five visits with valid ICD-9 or ICD-10 diagnosis codes to ensure sufficient temporal context. Given these criteria, our final pre-training cohort consisted of ${5.4}\mathrm{M}$ individual patients split into training (80%), validation (10%) and testing sets (10%).
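For illustration, the cohort selection and the 80/10/10 split could be expressed roughly as below; the file name, column names and the validity flag are hypothetical, as the actual extraction pipeline is not described here:

```python
import numpy as np
import pandas as pd

# One row per (patient_id, visit_id), with a flag marking valid ICD-9/10 codes.
visits = pd.read_parquet("visits.parquet")
valid = visits[visits["has_valid_icd"]]

# Keep patients with at least five visits carrying valid diagnosis codes.
n_visits = valid.groupby("patient_id")["visit_id"].nunique()
patients = n_visits[n_visits >= 5].index.to_numpy()

# Patient-level 80/10/10 split into train, validation and test.
rng = np.random.default_rng(0)
rng.shuffle(patients)
n = len(patients)
train, val, test = np.split(patients, [int(0.8 * n), int(0.9 * n)])
print(len(train), len(val), len(test))
```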
|
| 36 |
+
|
| 37 |
+
### 2.2 MODEL TRAINING
|
| 38 |
+
|
| 39 |
+
ExBEHRT uses the same model architecture as BEHRT. For pre-training, we applied the standard MLM procedure described in the original BERT paper (Devlin et al., 2018). In a second step, we fine-tuned our model on two prediction tasks: death of a patient within six months after the first cancer diagnosis, and readmission to hospital within 30 days or fewer after a heart failure. All tokens after the cancer diagnosis/heart failure are not disclosed to the model. We further used the patient representations of ExBEHRT to identify risk subtypes of cancer patients using unsupervised clustering. For that, we applied a combination of the dimensionality reduction technique UMAP (McInnes et al., 2018) and the clustering algorithm HDBSCAN (Campello et al., 2013).
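The clustering step could look roughly as follows with the umap-learn and hdbscan packages; the pooling of patient representations, the file name and all hyper-parameter values are assumptions rather than the settings used in this work:

```python
import numpy as np
import umap      # umap-learn
import hdbscan

# Pooled ExBEHRT representations of cancer patients, shape (num_patients, 288).
patient_repr = np.load("cancer_patient_representations.npy")

# Reduce to two dimensions, then cluster with HDBSCAN.
reducer = umap.UMAP(n_components=2, n_neighbors=30, min_dist=0.0, random_state=0)
embedding_2d = reducer.fit_transform(patient_repr)

clusterer = hdbscan.HDBSCAN(min_cluster_size=200)
labels = clusterer.fit_predict(embedding_2d)   # -1 marks unassigned patients

n_clusters = labels.max() + 1
assigned = (labels >= 0).mean()
print(f"{n_clusters} clusters, {assigned:.0%} of patients assigned")
```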
|
| 40 |
+
|
| 41 |
+
## 3 RESULTS
|
| 42 |
+
|
| 43 |
+
### 3.1 EVENT PREDICTION
|
| 44 |
+
|
| 45 |
+
In all but one metric in one task, ExBEHRT outperforms BEHRT and other conventional algorithms such as Logistic Regression (LR) and XGBoost when evaluated on the hold-out test set.
|
| 46 |
+
|
| 47 |
+
Table 1: Test set micro-averaged metrics for fine-tuning on event prediction tasks.
|
| 48 |
+
|
| 49 |
+
| Task | Cohort Size | Metric | LR | XGBoost | BEHRT | ExBEHRT |
|---|---|---|---|---|---|---|
| Death in 6M | Train: 350'322 | APS | 0.4280 | 0.4554 | 0.4778 | 0.5362 |
| | Val: 43'790 | AUROC | 0.6345 | 0.6642 | 0.6697 | 0.7255 |
| | Test: 43'790 | Precision | 0.7304 | 0.7431 | 0.7520 | 0.7824 |
| HF readmit | Train: 402'529 | APS | 0.2976 | 0.3132 | 0.1995 | 0.2501 |
| | Val: 50'316 | AUROC | 0.5190 | 0.5359 | 0.5117 | 0.5670 |
| | Test: 50'317 | Precision | 0.7199 | 0.7258 | 0.8102 | 0.8163 |
|
| 50 |
+
|
| 51 |
+
### 3.2 INTERPRETABILITY ON EVENT PREDICTION RESULTS
|
| 52 |
+
|
| 53 |
+
For all interpretability experiments, we used our model fine-tuned on the task Death in 6M, i.e., whether a cancer patient will die within six months after their first cancer diagnosis. We visualize the interpretability for single patients, as both interpretability approaches are example-based and not model-agnostic.
|
| 54 |
+
|
| 55 |
+
#### 3.2.1 SELF-ATTENTION VISUALIZATION
|
| 56 |
+
|
| 57 |
+
Analogously to previous papers (Li et al. (2020), Rasmy et al. (2021), Meng et al. (2021)), we visualized the attention of the last network layer using BertViz (Vig (2019)). However, since all embeddings are summed before being passed through the network in all such models, self-attention cannot attribute single input features to the outcome. Nevertheless, we can deduce insights into how the different slots interact with each other and which connections the model deems important. Figure 2 displays the self-attention of a single patient in the last layer of ExBEHRT. The left figure displays the attentions of all 12 attention heads in this layer, whereas the right figure displays the attention of one head. Generally, the model focuses strongly on the slots within one visit, which was to be expected as these slots are strongly related by definition. Slot 7 corresponds to the slot where the patient was diagnosed with lung cancer. Even though the model was not specifically trained to put emphasis on cancer codes, it puts a great amount of attention on this slot, indicating that it learned some correlation between the cancer diagnosis and the predicted outcome. Interestingly, slot 7 receives a lot of attention from the first and the second visit, but not from the other two previous visits, indicating that the model is able to learn relationships across long time gaps.
|
| 58 |
+
|
| 59 |
+

|
| 60 |
+
|
| 61 |
+
Figure 2: Left: The self-attention of all 12 attention heads of the last layer of ExBEHRT. Higher opacity corresponds to higher attention. Right: The self-attention of one attention head of the last layer. Slot 7 corresponds to the slot where the cancer was diagnosed.
|
| 62 |
+
|
| 63 |
+
#### 3.2.2 EXPECTED GRADIENTS INTERPRETABILITY
|
| 64 |
+
|
| 65 |
+
Due to the limitations of self-attention visualization, we explored the technique Expected Gradients (Erion et al., 2020) for a deeper understanding of the model. Expected Gradients is considered one of the most robust gradient-based feature attribution methods for deep learning models. This way, we can deduce feature and token importances of single predictions, which is not possible with self-attention. Since each single concept (diagnosis code, procedure code, age, etc.) is mapped to a 288-dimensional embedding before being passed to the model, we first calculated the expected gradients for each of the 288 positions and then summed the absolute values to acquire a single gradient value for each input token. This way, each individual input token has an associated gradient related to the model's output, yielding detailed insights into which medical concept has had what impact on the model's prediction. We visualized the findings in three layers of abstraction with increasing detail in figures 3, 4 and 5.
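A simplified sketch of this attribution step is shown below; the single-sample Monte Carlo formulation, the function names and the model interface are illustrative assumptions, not the exact procedure used in this work:

```python
import torch

def token_attributions(model, embed_fn, inputs, background, n_samples=50):
    """Approximate Expected Gradients on the embedded inputs and collapse the
    288 embedding dimensions into one attribution score per input token.

    embed_fn maps token ids to embeddings of shape (1, m, 288); model consumes
    embeddings and returns the scalar risk score."""
    x = embed_fn(inputs).detach()                     # (1, m, 288)
    total = torch.zeros_like(x)
    for _ in range(n_samples):
        # Sample a baseline patient from the background data and an interpolation point.
        ref = embed_fn(background[torch.randint(len(background), (1,))]).detach()
        alpha = torch.rand(1)
        point = (ref + alpha * (x - ref)).requires_grad_(True)
        model(point).sum().backward()
        total += (x - ref) * point.grad               # one Monte Carlo sample of the integrand
    eg = total / n_samples                            # (1, m, 288)
    return eg.abs().sum(dim=-1).squeeze(0)            # one score per input token
```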
|
| 66 |
+
|
| 67 |
+

|
| 68 |
+
|
| 69 |
+
Figure 3: The absolute sums of the expected gradients summed by input feature.
|
| 70 |
+
|
| 71 |
+
For figure 3, we summed the expected gradients for each of the input features. This way, we can evaluate the different feature impacts on the output for a specific patient. For this patient, diagnoses and procedures (treatments & medications) were by far the most important features. With this visualization, we can further evaluate basic biases. For example, gender was not considered an important feature, indicating that performance would be similar for a person of another gender.
|
| 72 |
+
|
| 73 |
+

|
| 74 |
+
|
| 75 |
+
Figure 4: The absolute sums of the expected gradients summed by input feature and time slot. The dotted lines represent slots with SEP tokens and therefore indicate the next visit.
|
| 76 |
+
|
| 77 |
+
For figure 4, we visualized the absolute expected gradients for each of the input features and summed them at each time slot. This way, we can evaluate the different feature importances over time to get a notion of where the model puts its emphasis. Interestingly, the model put more importance on which medications & treatments the patient received in the first two visits, whereas in the last visit (the visit in which the patient was diagnosed with blood cancer), it put more importance on diagnoses and labs. Generally, slot 5, where the cancer was diagnosed, was attributed the highest importance.
|
| 78 |
+
|
| 79 |
+
<table><tr><td colspan="10">Diagnosis</td></tr><tr><td>CLS</td><td>J40</td><td>SEP</td><td>M54</td><td>SEP</td><td>C81</td><td>R55</td><td>R59</td><td>E87</td><td>SEP</td></tr><tr><td colspan="10">Lab</td></tr><tr><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>CHEMISTRY</td><td>URINALYSIS</td><td>HEMATOLOGY</td><td>SPEC. CHEM.</td><td>-</td></tr><tr><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>SPEC. LAB</td><td>BLOOD GAS</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="10">Procedure</td></tr><tr><td>-</td><td>71020</td><td>-</td><td>81003</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>-</td><td>94640</td><td>-</td><td>87077</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr><tr><td>-</td><td>99283</td><td>-</td><td>87086</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td></tr></table>
|
| 80 |
+
|
| 81 |
+
Figure 5: A visualization of the absolute sums of the expected gradients of diagnoses, labs and procedures on a concept level. Darker colours represent higher values and the SEP tokens indicate the separation between two visits.
|
| 82 |
+
|
| 83 |
+
Figure 5 displays the absolute sums of gradients of each individual input token, providing a detailed interpretation of which medical concept has had what impact on the model's prediction. Unsurprisingly, the cancer code C81 has had the biggest impact on the outcome. However, earlier codes like J40 or 71020 also contribute to the model's prediction, indicating that the model is able to include information from the whole patient journey in its results.
|
| 84 |
+
|
| 85 |
+
### 3.3 CANCER PATIENT CLUSTERING
|
| 86 |
+
|
| 87 |
+
HDBSCAN was able to cluster ${90}\%$ of all cancer patients into 24 clusters${}^{1}$. On average, the most frequent cancer diagnosis within a cluster was present for ${84}\%$ of the patients assigned to this cluster, and the mean cluster purity${}^{2}$ was ${85}\%$. Similar concepts (e.g. cancer of female reproductive organs or different types of leukaemia) lie in areas close to each other, indicating a spatial logic between the cancer types.
|
| 88 |
+
|
| 89 |
+
In figure 6, we show that with a second pass of HDBSCAN on a specific cluster, we can identify risk subgroups. In all three identified clusters, more than 90% of the patients actually do have pancreatic cancer, and all clusters share similar general characteristics. However, as displayed in table 2, ExBEHRT identified one subgroup with a significantly higher chance of recovering from cancer and a lower probability of death, even though this information was not provided to the model at any point${}^{3}$.
|
| 90 |
+
|
| 91 |
+

|
| 92 |
+
|
| 93 |
+
Figure 6: The three identified patient subclusters with pancreatic cancer visualized with a kernel density estimate plot for visual clarity.
|
| 94 |
+
|
| 95 |
+
---
|
| 96 |
+
|
| 97 |
+
${}^{1}$ A visualization of all clusters can be found in figure 8 in the appendix.
|
| 98 |
+
|
| 99 |
+
${}^{2}$ Cluster purity indicates the fraction of patients with a given condition that are assigned to the cluster.
|
| 100 |
+
|
| 101 |
+
${}^{3}$ In the table, $\%$ of journey with cancer indicates the ratio of the time between the first and last cancer diagnosis to the duration of the whole recorded patient journey. Cancer-free refers to the percentage of patients within a cluster who have records of at least two visits without a cancer diagnosis after the last visit with a cancer diagnosis. The average death rate is taken directly from the EHR database and unfortunately does not indicate the cause of death.
|
| 102 |
+
|
| 103 |
+
---
|
| 104 |
+
|
| 105 |
+
Table 2: Statistics of the three pancreatic cancer clusters indicating a clear differentiation between higher risk (gray, blue) and lower risk patients (purple).
|
| 106 |
+
|
| 107 |
+
| Metric | Gray | Blue | Purple |
|---|---|---|---|
| Median age | 67 | 68 | 68 |
| Median birth year | 1950 | 1947 | 1944 |
| Median BMI | 25 | 25 | 26 |
| Average death rate | 76.5% | 75.9% | 70.0% |
| % of journey with cancer | 27.0% | 24.0% | **18.3%** |
| Cancer-free | 34.0% | 36.9% | 62.7% |
|
| 108 |
+
|
| 109 |
+
## 4 DISCUSSION
|
| 110 |
+
|
| 111 |
+
In this study, we introduced a novel method of adding patient features to BEHRT, which significantly boosts the predictive power for several downstream tasks in multiple disease areas. The novel method of stacking features vertically yielded improvements in hardware requirements and benchmarks and eases the possible extension to new concepts in the future. Given the vast amount and heterogeneity of patients the model was pre-trained with, we are confident that ExBEHRT would generalize well to novel data, patients and tasks. Combined with interpretability, the model provides more granular insights into disease progressions and subtypes of various patients than previous approaches, which could help clinicians form more granular assessments of the progression and health of their patients. In addition, it is possible to detect unmet needs and improve patient outcomes with a personalized understanding of patient groups.
|
| 112 |
+
|
| 113 |
+
Nevertheless, there are a few limitations: It is extremely difficult to validate the quality, completeness and correctness of EHR datasets, as the data is usually processed anonymously and stems from a variety of heterogeneous, fragmented sources. There is also bias inherent in the nature of EHR data, as practitioners could be incentivised to additionally diagnose less relevant conditions, since medical billing is closely related to the number and type of diagnoses indicated. In a potential next step, we would like to verify the results and interpretations of this work with clinicians to ensure predictions that are as robust and sound as possible given the acquired interpretability.
|
| 114 |
+
|
| 115 |
+
## REFERENCES
|
| 116 |
+
|
| 117 |
+
Ricardo J G B Campello, Davoud Moulavi, and Joerg Sander. Density-based clustering based on hierarchical density estimates. 2013.
|
| 118 |
+
|
| 119 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. 10 2018.
|
| 120 |
+
|
| 121 |
+
Gabriel Erion, Joseph D Janizek, Pascal Sturmfels, Scott M Lundberg, Su-In Lee, and Paul G Allen. Improving performance of deep learning models with axiomatic attribution priors and expected gradients. 2020. URL https://github.com/suinleelab/attributionpriors.
|
| 122 |
+
|
| 123 |
+
Yikuan Li, Shishir Rao, Jose Roberto Ayala Solares, Abdelaali Hassaine, Rema Ramakrishnan, Dexter Canoy, Yajie Zhu, Kazem Rahimi, and Gholamreza Salimi-Khorshidi. Behrt: Transformer for electronic health records. 12 2020. ISSN 20452322. doi: 10.1038/s41598-020-62922-y.
|
| 124 |
+
|
| 125 |
+
Leland McInnes, John Healy, and James Melville. Umap: Uniform manifold approximation and projection for dimension reduction. 2 2018.
|
| 126 |
+
|
| 127 |
+
Yiwen Meng, William Speier, Michael K. Ong, and Corey W. Arnold. Bidirectional representation learning from transformers using multimodal electronic health record data to predict depression. pp. 3121-3129, 8 2021. ISSN 21682208. doi: 10.1109/JBHI.2021.3063721.
|
| 128 |
+
|
| 129 |
+
Chao Pang, Xinzhuo Jiang, Krishna S Kalluri, Matthew Spotnitz, RuiJun Chen, Adler Perotte, and Karthik Natarajan. Cehr-bert: Incorporating temporal information from structured ehr data to improve prediction tasks. 11 2021.
|
| 130 |
+
|
| 131 |
+
Laila Rasmy, Yang Xiang, Ziqian Xie, Cui Tao, and Degui Zhi. Med-bert: pre-trained contextualized embeddings on large-scale structured electronic health records for disease prediction. 2021.
|
| 132 |
+
|
| 133 |
+
Jesse Vig. A multiscale visualization of attention in the transformer model. pp. 37-42, 2019. doi: 10.18653/V1/P19-3007.
|
| 134 |
+
|
| 135 |
+
## A APPENDIX
|
| 136 |
+
|
| 137 |
+

|
| 138 |
+
|
| 139 |
+
Figure 7: A sample input of ExBEHRT. Each of the concepts (diagnosis, procedure, lab, age, BMI, smoking status, gender, segment, visit) has its own embedding, where each of the tokens is mapped to a 288-dimensional vector, which is learned during model training. After embedding, all concepts are summed vertically element-wise to create a single ${288} \times m$ dimensional vector as input for the model. The parameters $n$ and $k$ are set before training and fixed for each patient to ensure coherent batch processing.
|
| 140 |
+
|
| 141 |
+

|
| 142 |
+
|
| 143 |
+
Figure 8: The unsupervised cluster assignments from HDBSCAN, visualized with a 2-dimensional UMAP projection. The grey points are patients not assigned to any cluster $\left( {{10}\% }\right)$ . The labels indicate the most frequent diagnosis code of each cluster. Besides cluster 10, all labels are neoplasms. On average, the most frequent cancer diagnosis within a cluster was present for 84% of the patients assigned to this cluster. The clusters are clearly separated spatially, indicating a distinct separation of the different cancer types and their representations within the model.
|
| 144 |
+
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/C09sAJ960u/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,181 @@
§ EXBEHRT: EXTENDED TRANSFORMER FOR ELECTRONIC HEALTH RECORDS
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
In this study, we introduce ExBEHRT, an extended version of BEHRT (BERT applied to Electronic Health Record data) and applied various algorithms to interpret its results. While BEHRT solely incorporates diagnoses and age of patients, we extend the feature space to several multi-modal records, namely demographics, clinical characteristics, vital signs, smoking status, diagnoses, procedures, medications and lab tests using a novel method to unify the frequencies and temporal dimensions of the different features. We show that additional features yield significant benefit in model performance for various down-stream tasks of different diseases. To ensure robustness, we interpret the model predictions using an adaption of Expected Gradients, which hasn't been applied to Transformers with EHR data so far and yields more granular interpretations than previous approaches such as feature and token importances. Further, by clustering the models' representations of oncology patients, we show that the model has implicit understanding of the disease and is able to divide patients suffering from the same cancer type into different risk groups. Given the additional features and interpretability, ExBEHRT can help making informed decisions about disease progressions, diagnoses and risk factors of various diseases.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Over the last decade, Electronic Health Records have become increasingly popular to document a patients' treatments, labs, vital signs, etc. Commonly, a sequence of medical events is referred to as a patient journey. Given the immense amount of longitudinal data available, there lies tremendous potential for Machine Learning to generate novel insights about the recognition of disease patterns, progression and subgroups as well as treatment planning. Recent studies adapted Transformers to structured EHR data, demonstrating superiority in various benchmarks compared to other, similar algorithms. The first adaptation of Transformers to EHR data, called BEHRT (Li et al., 2020), incorporated diagnosis concepts and age from EHRs and additionally added embeddings for the separation of individual visits and a positional embedding for the visit number. Other models such as Med-BERT (Rasmy et al., 2021), CEHR-BERT (Pang et al., 2021) and BRLTM (Meng et al., 2021) added more features by concatenating the inputs into one long patient sequence. These approaches are limited in the amount of data of a single patient they can process and the required computational power significantly increases with each added feature.
|
| 14 |
+
|
| 15 |
+
In this work, we introduce a novel approach of incorporating multi-modal features into Transformer models by adding medical concepts separately and vertically instead of concatenating all concepts horizontally. We show that these features are important in several downstream applications such as mortality prediction, patient subtyping and disease progression prediction.
|
| 16 |
+
|
| 17 |
+
§ 2 EXBEHRT FOR EHR REPRESENTATION LEARNING
|
| 18 |
+
|
| 19 |
+
ExBEHRT is an extension of BEHRT where medical concepts are not concatenated into one long vector, but rather grouped into separate, learnable embeddings per type of concept. This way, we avoid exploding input lengths when adding new medical features and provide the model the capability to learn which concepts it should focus on. Clinically, it would also be stringent to separate diagnoses, procedures, medications etc. as they offer different clinical value for downstream applications. We take the number of diagnoses in a visit as the indicator of how many "horizontal slots" for other concepts are available at this visit (e.g. two for the first visit in figure 1). Therefore, the maximal patient journey length is defined by the amount of diagnosis codes of a patient, independent of the amount of other concepts added to the model. Another advantage of this procedure is its ability to deal with varying frequencies and sparseness of additional concepts. As exemplified with procedures in figure 1, but executed in the same manner with labs, there are three possible cases of adding a new concept to a visit:
|
| 20 |
+
|
| 21 |
+
1. The number of procedures is equal to the amount of horizontal slots available in the visit (visit 1 - two each). The procedures can therefore be represented as a 1D vector.
|
| 22 |
+
|
| 23 |
+
2. The number of procedures exceeds the amount of slots available in the visit (visit 2 - one diagnosis, two procedures). Here, the procedures fill up the amount of horizontal slots in a row-wise manner until there are no procedures left, resulting in a 2D vector of dimensions $\#$ slots $\times \lceil \frac{\# \text{ procedures }}{\# \text{ slots }}\rceil$ .
|
| 24 |
+
|
| 25 |
+
3. The number of procedures subceeds the amount of slots available (visit 3 - one diagnosis, no procedures). Procedures are represented as a 1D vector and then padded to the amount of horizontal slots available.
|
| 26 |
+
|
| 27 |
+
<graphics>
|
| 28 |
+
|
| 29 |
+
Figure 1: An example of how ExBEHRT represents a patient with a constant sentence length $m$ .
|
| 30 |
+
|
| 31 |
+
After reshaping, all procedures and labs of all patients are padded to the same amount of rows $n$ to enable batch processing. Before passing the inputs to the model, each token is embedded into a 288-dimensional vector and all tokens are summed vertically. Figure 7 in the appendix shows the final representation of one patient.
|
| 32 |
+
|
| 33 |
+
§ 2.1 DATA
|
| 34 |
+
|
| 35 |
+
In this study, we used one of the largest EHR datasets from the USA. We only selected data points collected at hospitalizations to ensure data quality and consistency. Each patient is required to have at least five visits with valid ICD-9 or ICD-10 diagnosis codes to ensure sufficient temporal context. Given these criteria, our final pre-training cohort consisted of ${5.4}\mathrm{M}$ individual patients split into training (80%), validation (10%) and testing sets (10%).
|
| 36 |
+
|
| 37 |
+
§ 2.2 MODEL TRAINING
|
| 38 |
+
|
| 39 |
+
ExBEHRT consists of the same model architecture as BEHRT. For pre-training, we applied the standard MLM procedure described in the original BERT paper (Devlin et al., 2018). In a second step, we fine-tuned our model on two prediction tasks: Death of a patient within six months after the first cancer diagnosis and readmission into hospital within 30 or less days after a heart failure. All tokens after the cancer diagnosis/heart failure are not disclosed to the model. We further used the patient representations of ExBEHRT to identify risk subtypes of cancer patients using unsupervised clustering. For that, we applied a combination of the dimensionality reduction technique UMAP (McInnes et al., 2018) and the clustering algorithm HDBSCAN (Campello et al., 2013).
|
| 40 |
+
|
| 41 |
+
§ 3 RESULTS
|
| 42 |
+
|
| 43 |
+
§ 3.1 EVENT PREDICTION
|
| 44 |
+
|
| 45 |
+
In all but one metric in one task, ExBEHRT outperforms BEHRT and other conventional algorithms such as Logistic Regression (LR) and XGBoost when evaluated on the hold-out test set.
|
| 46 |
+
|
| 47 |
+
Table 1: Test set micro-averaged metrics for fine-tuning on event prediction tasks.
|
| 48 |
+
|
| 49 |
+
\begin{tabular}{llccccc}
\hline
Task & Cohort Size & Metric & LR & XGBoost & BEHRT & ExBEHRT \\
\hline
\multirow{3}{*}{Death in 6M} & Train: 350'322 & APS & 0.4280 & 0.4554 & 0.4778 & 0.5362 \\
 & Val: 43'790 & AUROC & 0.6345 & 0.6642 & 0.6697 & 0.7255 \\
 & Test: 43'790 & Precision & 0.7304 & 0.7431 & 0.7520 & 0.7824 \\
\hline
\multirow{3}{*}{HF readmit} & Train: 402'529 & APS & 0.2976 & 0.3132 & 0.1995 & 0.2501 \\
 & Val: 50'316 & AUROC & 0.5190 & 0.5359 & 0.5117 & 0.5670 \\
 & Test: 50'317 & Precision & 0.7199 & 0.7258 & 0.8102 & 0.8163 \\
\hline
\end{tabular}
|
| 72 |
+
|
| 73 |
+
§ 3.2 INTERPRETABILITY ON EVENT PREDICTION RESULTS
|
| 74 |
+
|
| 75 |
+
For all interpretability experiments, we used our model fine-tuned on the task Death in ${6M}$ , meaning whether a cancer patient will decease within six months after their first cancer diagnosis. We visualize the interpretability for single patients as both interpretability approaches are example-based and not model-agnostic.
|
| 76 |
+
|
| 77 |
+
§ 3.2.1 SELF-ATTENTION VISUALIZATION
|
| 78 |
+
|
| 79 |
+
Analogously to previous papers (Li et al. (2020), Rasmy et al. (2021), Meng et al. (2021)), we visualized the attention of the last network layer using BertViz (Vig (2019)). However, since all em-beddings are summed before being passed through the network for all such models, self-attention has no possibility to attribute single input features to the outcome. Nevertheless, we can deduce insights on how the different slots interact with each other and which connections the model deems to be important. Figure 2 displays the self-attention of a single patient of the last layer of ExBEHRT. The left figure displays the attentions of all 12 attention heads in this layer, whereas the right figure displays the attention of one head. Generally, the model focuses a lot on the slots within one visit, which was to be expected as these slots are strongly related by definition. Slot 7 corresponds to the slot, where the patient was diagnosed with lung cancer. Even though the model was not specifically trained to put emphasis on cancer codes, it puts a great amount of attention on this slot, indicating that it learned some correlation between the cancer diagnosis and the predicted outcome. Interestingly, slot 7 receives a lot attention from the first and the second visit, but not from the other two previous visits, indicating that the model is able to learn causalities across long time gaps.
|
| 80 |
+
|
| 81 |
+
<graphics>
|
| 82 |
+
|
| 83 |
+
Figure 2: Left: The self-attention of all 12 attention heads of the last layer of ExBEHRT. Higher opacity corresponds to higher attention. Right: The self-attention of one attention head of the last layer. Slot 7 corresponds to the slot where the cancer was diagnosed.
|
| 84 |
+
|
| 85 |
+
§ 3.2.2 EXPECTED GRADIENTS INTERPRETABILITY
|
| 86 |
+
|
| 87 |
+
Due to the limitations of Self-Attention visualization, we explored the technique Expected Gradients (Erion et al., 2020) for deeper understanding of the model. Expected Gradients is considered to be one of the most robust gradient-based feature attribution methods for deep learning models. This way, we can deduce feature and token importances of single predictions, which is not possible with self-attention. Since each single concept (diagnosis code, procedure code, age, etc.) is mapped to a 288-dimensional embedding before being passed to the model, we first calculated the expected gradients for each of the 288 positions and then summed the absolute values to acquire a single gradient value for each input token. This way, each individual input token has an associated gradient related to the models output yielding detailed insights into which medical concept has had what impact on the models prediction. We visualized the findings in three layers of abstraction with increasing detail in figures 3.4 and 5.
|
| 88 |
+
|
| 89 |
+
<graphics>
|
| 90 |
+
|
| 91 |
+
Figure 3: The absolute sums of the expected gradients summed by input feature.
|
| 92 |
+
|
| 93 |
+
For figure 3, we summed the expected gradients for each of the input features. This way, we can evaluate the different feature impacts on the output for a specific patient. For this patient, diagnoses and procedures (treatments & medications) were by far the most importance features. With this visualization, we can further evaluate basic biases. For example, gender was not considered to be an important feature, indicating that performance would be similar for a person with another gender.
|
| 94 |
+
|
| 95 |
+
<graphics>
|
| 96 |
+
|
| 97 |
+
Figure 4: The absolute sums of the expected gradients summed by input feature and time slot. The dotted lines represent slots with SEP tokens and therefore indicate the next visit.
|
| 98 |
+
|
| 99 |
+
For figure 4, we visualized the absolute expected gradients for each of the input features and summed them at each time slot. This way, we can evaluate the different feature importances over time to get a notion of where the model puts emphasis on. Interestingly, the model put more importance on what kind of medications & treatments that patient received in the first two visits, where as in the last visit (the visit in which the patient was diagnosed with blood cancer), it put more importance on diagnoses and labs. Generally, slot 5, where the cancer was diagnosed, was attributed with the highest importance.
|
| 100 |
+
|
| 101 |
+
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{10}{|c|}{Diagnosis} \\
\hline
CLS & J40 & SEP & M54 & SEP & C81 & R55 & R59 & E87 & SEP \\
\hline
\multicolumn{10}{|c|}{Lab} \\
\hline
- & - & - & - & - & CHEMISTRY & URINALYSIS & HEMATOLOGY & SPEC. CHEM. & - \\
\hline
- & - & - & - & - & SPEC. LAB & BLOOD GAS & - & - & - \\
\hline
\multicolumn{10}{|c|}{Procedure} \\
\hline
- & 71020 & - & 81003 & - & - & - & - & - & - \\
\hline
- & 94640 & - & 87077 & - & - & - & - & - & - \\
\hline
- & 99283 & - & 87086 & - & - & - & - & - & - \\
\hline
\end{tabular}
|
| 130 |
+
|
| 131 |
+
Figure 5: A visualization of the absolute sums of the expected gradients of diagnoses, labs and procedures on a concept level. Darker colours represent higher values and the SEP tokens indicate the separation between two visits.
|
| 132 |
+
|
| 133 |
+
Figure 5 displays the absolute sums of gradients of each individual input token, providing a detailed interpretation of which medical concept has had what impact on the models prediction. Unsurprisingly, the cancer code C81 has had the biggest impact on the outcome. However, earlier codes like J40 or 71020 also contribute to the models prediction, indicating that the model is able to include information from the whole patient journey into its results.
|
| 134 |
+
|
| 135 |
+
§ 3.3 CANCER PATIENT CLUSTERING
|
| 136 |
+
|
| 137 |
+
HDBSCAN was able to cluster ${90}\%$ of all cancer patients into 24 clusters1. On average, the most occurring cancer diagnosis within a cluster was present for ${84}\%$ of the patients assigned to this cluster and the mean cluster purity ${}^{2}$ was ${85}\%$ . Similar concepts (e.g. cancer of female reproductive organs or different types of leukaemia) lay in areas close to each other, indicating a spatial logic between the cancer types.
|
| 138 |
+
|
| 139 |
+
In figure 6, we show that with a second pass of HDBSCAN on a specific cluster, we can identify risk subgroups. In all three identified clusters, more than 90% of the patients actually do have pancreatic cancer and all clusters share similar general characteristics. However, as displayed in table 3.3, ExBEHRT identified one subgroup with significantly higher chance of recovering from cancer and a lower probability of death, even though this information was not provided to the model at any ${\text{ point }}^{3}$ .
|
| 140 |
+
|
| 141 |
+
<graphics>
|
| 142 |
+
|
| 143 |
+
Figure 6: The three identified patient subclusters with pancreatic cancer visualized with a kernel density estimate plot for visual clarity.
|
| 144 |
+
|
| 145 |
+
${}^{1}$ A visualization of all clusters can be found in figure 8 in the appendix.
|
| 146 |
+
|
| 147 |
+
${}^{2}$ Cluster purity indicates the fraction of patients with a condition which are assigned to the cluster.
|
| 148 |
+
|
| 149 |
+
${}^{3}$ In the table, $\%$ of journey with cancer indicates the ratio of the time between the first and last cancer diagnosis compared to the duration of the whole recorded patient journey. Cancer-free refers to the percentage of patients within a cluster, which have records of at least two visits without cancer diagnosis after the last visit with a cancer diagnosis. The average death rate is directly taken from the EHR database and unfortunately does not indicate the cause of death.
|
| 150 |
+
|
| 151 |
+
Table 2: Statistics of the three pancreatic cancer clusters indicating a clear differentiation between higher risk (gray, blue) and lower risk patients (purple).
|
| 152 |
+
|
| 153 |
+
\begin{tabular}{lccc}
\hline
\textbf{Metric} & Gray & Blue & Purple \\
\hline
Median age & 67 & 68 & 68 \\
Median birth year & 1950 & 1947 & 1944 \\
Median BMI & 25 & 25 & 26 \\
Average death rate & 76.5\% & 75.9\% & 70.0\% \\
\% of journey with cancer & 27.0\% & 24.0\% & \textbf{18.3\%} \\
Cancer-free & 34.0\% & 36.9\% & 62.7\% \\
\hline
\end{tabular}
|
| 176 |
+
|
| 177 |
+
§ 4 DISCUSSION
|
| 178 |
+
|
| 179 |
+
In this study, we introduced a novel method of adding patient features to BEHRT, which significantly boosts the predictive power for several downstream tasks in multiple disease areas. The novel method of stacking features vertically yielded improvements in hardware requirements and benchmarks and eases the possible extension to new concepts in the future. Given the vast amount and heterogeneity of patients the model was pre-trained with, we are confident that ExBEHRT would generalize well to novel data, patients and tasks. Combined with interpretability, the model provides more granular insights into disease progressions and subtypes of various patients than previous approaches, which could help clinicians forming more granular assessments of the progression and health of their patients. In addition, it is possible to detect unmet needs and improve patient outcomes with a personalized understanding of patient groups.
|
| 180 |
+
|
| 181 |
+
Nevertheless, there are a few limitations: It is extremely difficult to validate the quality, completeness and correctness of EHR datasets, as the data is usually processed anonymously and stems from a variety of heterogeneous, fragmented sources. There is also bias in the sheer nature of EHR data, as practitioners could be incentivised to additionally diagnose less relevant conditions, since medical billing is closely related to the amount and type of diagnoses indicated. In a potential next step, we would like to verify the results and interpretations of this work with clinicians to ensure robust and sound predictions as possible given the acquired interpretability.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/GYgu8Yq_96/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,313 @@
# LEARN2AGREE: FITTING WITH MULTIPLE ANNOTATORS WITHOUT OBJECTIVE GROUND TRUTH
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
The annotation of domain experts is important for some medical applications where the objective ground truth is ambiguous to define, e.g., the rehabilitation for some chronic diseases, and the prescreening of some musculoskeletal abnormalities without further medical examinations. However, improper use of the annotations may hinder developing reliable models. On one hand, forcing the use of a single ground truth generated from multiple annotations is less informative for the modeling. On the other hand, feeding the model with all the annotations without proper regularization is noisy given existing disagreements. For such issues, we propose a novel Learn to Agree (Learn2Agree) framework to tackle the challenge of learning from multiple annotators without objective ground truth. The framework has two streams, with one stream fitting with the multiple annotators and the other stream learning agreement information between annotators. In particular, the agreement learning stream produces regularization information for the classifier stream, tuning its decision to be better in line with the agreement between annotators. The proposed method can be easily added to existing backbones, and experiments on two medical datasets showed better agreement levels with annotators.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
There exist difficulties for model development in applications where the objective ground truth is difficult to establish or ambiguous given the input data itself. That is, the decision-making, i.e. the detection, classification, and segmentation process, is based not only on the presented data but also on the expertise or experience of the annotator. However, the disagreements that exist in the annotations hinder the definition of a good single ground truth. Therefore, an important part of supervised learning for such applications is to achieve a better fit with annotators. In this learning scenario, the input normally comprises pairs of $\left( {{\mathbf{X}}_{i},{r}_{i}^{j}}\right)$ , where ${\mathbf{X}}_{i}$ and ${r}_{i}^{j}$ are, respectively, the data of the $i$ -th sample and the label provided by the $j$ -th annotator. Given such input, naïve methods aim to provide a single set of ground truth labels for model development. Therein, a common practice is to aggregate these multiple annotations with majority voting (Surowiecki, 2005). However, majority voting could misrepresent the data instances where the disagreement between different annotators is high. This is particularly harmful for applications where differences in expertise or experience exist between annotators.
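For reference, the majority-voting baseline discussed here amounts to something like the following sketch (a generic illustration, not code from this paper):

```python
from collections import Counter

def majority_vote(annotations):
    """Collapse the labels r_i^j from multiple annotators into a single
    label per sample by majority voting (ties broken arbitrarily here)."""
    return [Counter(labels).most_common(1)[0][0] for labels in annotations]

# Three annotators, four samples: the disagreement on the last two samples
# is hidden once the votes are collapsed into a single label.
print(majority_vote([[1, 1, 1], [0, 0, 0], [1, 0, 1], [0, 1, 0]]))  # [1, 0, 1, 0]
```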
|
| 14 |
+
|
| 15 |
+
Apart from majority voting, some have tried to estimate the ground truth label using STAPLE (Warfield et al., 2004) based on Expectation-Maximization (EM) algorithms. Nevertheless, such methods are sensitive to the variance in annotations and the data size (Lampert et al., 2016; Karimi et al., 2020). When the number of annotations per ${\mathbf{X}}_{i}$ is modest, efforts have been put into creating models that utilize all the annotations with multi-score learning (Meng et al., 2011) or soft labels (Hu et al., 2016). Recent approaches have instead focused on leveraging or learning the expertise of annotators while training the model (Long et al., 2013; Long & Hua, 2015; Healey, 2011; Guan et al., 2018; Ji et al., 2021; Yan et al., 2014; 2010; Tanno et al., 2019; Zhang et al., 2020). A basic idea is to refine the classification or segmentation toward the underlying ground truth by modeling annotators.
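A soft-label alternative that keeps the disagreement between annotators visible could be sketched as follows (illustrative only, not the construction used in the cited works):

```python
import numpy as np

def soft_labels(annotations, n_classes):
    """Turn each sample's annotator votes into a soft target distribution
    instead of a single hard label."""
    out = np.zeros((len(annotations), n_classes))
    for i, labels in enumerate(annotations):
        for y in labels:
            out[i, y] += 1.0 / len(labels)
    return out

# Two samples, three annotators each: the first sample keeps a 1/3 vs 2/3 split.
print(soft_labels([[1, 0, 1], [0, 0, 0]], n_classes=2))
```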
|
| 16 |
+
|
| 17 |
+
In this paper, we focus on a hard situation where the ground truth is ambiguous to define. On one hand, this could be due to the absence of objective ground truth in a specific scenario. For instance, in the analysis of bodily movement behavior for chronic-pain (CP) rehabilitation, the self-awareness of people with CP about their exhibited pain or fear-related behaviors is low, thus physiotherapists play a key role in judging it (Felipe et al., 2015; Singh et al., 2016). However, since the physiotherapists are assessing the behavior on the basis of visual observations, they may disagree on the judgment or ground truth. Additionally, the ground truth could be temporarily missing at a specific stage of the task. For example, in abnormality prescreening for bone X-rays, except for abnormalities like fractures and hardware implantation that are obvious and deterministic, other types like degenerative diseases and miscellaneous abnormalities are mainly diagnosed with further medical examinations (Rajpurkar et al., 2017). That is, at the prescreening stage, the opinion of the doctor makes the decision, which could nevertheless disagree with other doctors or the final medical examination.
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
|
| 21 |
+
Figure 1: The proposed Learn2Agree framework regularizes the classifier that fits with all annotators with the estimated agreement information between annotators.
Thus, unlike the traditional modeling goal that usually requires a set of ground truth labels to evaluate performance, the objective of modeling in this paper is to improve the overall agreement between the model and the annotators. Our contributions are four-fold: (i) We propose a novel Learn2Agree framework to directly leverage the agreement information stored in annotations from multiple annotators to regularize the behavior of the classifier that learns from them; (ii) To improve robustness, we propose a general agreement distribution and an agreement regression loss to model the uncertainty in annotations; (iii) To regularize the classifier, we propose a regularization function that tunes the classifier to better agree with all annotators; (iv) Our method noticeably improves existing backbones toward better agreement levels with all annotators on classification tasks in two medical datasets, involving data of body movement sequences and bone X-rays.

## 2 RELATED WORK
### 2.1 MODELING ANNOTATORS
The leveraging or learning of annotators' expertise for better modeling is usually implemented in a two-step or multiphase manner, or integrated to run simultaneously. For the first category, one way to acquire the expertise is to refer to prior knowledge about the annotation, e.g., the years of experience of each annotator, or discussions held on the disagreed annotations. With such prior knowledge, studies in Long et al. (2013); Long & Hua (2015); Healey (2011) propose to distill the annotations, deciding which annotator to trust for disagreed samples. Without access to such prior knowledge, the expertise or behavior of an annotator can also be modeled given the annotation and the data, which could be used to weight each annotator in the training of a classification model Guan et al. (2018), or adopted to refine the segmentation learned from multiple annotators Ji et al. (2021). Closer to ours are methods that simultaneously model the expertise of annotators while training the classifier. Previous efforts include probabilistic models Yan et al. (2014; 2010) driven by EM algorithms, and multi-head models that directly model annotators as confusion matrices estimated in comparison with the underlying ground truth Tanno et al. (2019); Zhang et al. (2020). While the idea behind these works may indeed work for applications where the distance between each annotator and the underlying ground truth exists and can be estimated in some way to refine the decision-making of a model, we argue that in some cases it is (at least temporarily) difficult to assume the existence of the underlying ground truth.

### 2.2 MODELING UNCERTAINTY
Modeling uncertainty is a popular topic in the computer vision domain, especially for tasks of semantic segmentation and object detection. Methods proposed therein can be categorized into two groups: i) Bayesian methods, where parameters of the posterior distribution (e.g., mean and variance) of the uncertainty are estimated with Monte Carlo dropout Leibig et al. (2017); Kendall et al. (2017); Ma et al. (2017) or parametric learning Hu et al. (2020); Charpentier et al. (2020), etc.; and ii) 'non-Bayesian' alternatives, where the distribution of uncertainty is learned with ensemble methods Lakshminarayanan et al. (2016), variance propagation Postels et al. (2019), or knowledge distillation Shen et al. (2021), etc. Besides their complex and time-consuming training or inference strategies, another characteristic of these methods is the dependence on Gaussian or Dirac delta distributions as the prior assumption.


|
| 36 |
+
|
| 37 |
+
Figure 2: An overview of our Learn2Agree framework, comprising i) (above) the classifier stream with original prediction $\widehat{p}_{\theta }\left( x_{i}\right)$ that fits with available annotations $\left\{ r_{i}^{j}\right\}^{j = 1,\ldots ,J}$; and ii) (below) the agreement learning stream that learns an estimate $\widehat{y}_{i}$ of the agreement level $\alpha_{i}$ between annotators, and leverages such information to compute the regularized prediction $\widetilde{p}_{\theta }\left( x_{i}\right)$.

### 2.3 EVALUATION WITHOUT GROUND TRUTH
In the context of modeling multiple annotations without ground truth, typical evaluation measures rely on metrics of agreement. For example, Kleinsmith et al. (2011) use metrics of agreement, e.g., Cohen's kappa Cohen (1960) and Fleiss' kappa Fleiss (1971), to compare the agreement level between a system and an annotator against the agreement level between other unseen annotators, in a cross-validation manner. However, this method does not consider how to directly learn from all the annotators, nor how to evaluate the performance of the model in this case. To this end, Lovchinsky et al. (2019) propose a metric named the discrepancy ratio. In short, the metric compares model-annotator performance vs. annotator-annotator performance, where the performance can be computed as a discrepancy, e.g., with absolute error, or as an agreement, e.g., with Cohen's kappa. In this paper, we use Cohen's kappa as the agreement calculator together with such a metric to evaluate the performance of our method. We refer to this metric as the agreement ratio.

## 3 METHOD
An overview of our proposed Learn2Agree framework is shown in Fig. 2. The core of our proposed method is to learn to estimate the agreement between different annotators based on their raw annotations, and to simultaneously utilize the agreement-level estimation to regularize the training of the classification task. The components of the proposed method thus concern two aspects: learning the agreement levels between annotators, and regularizing the classifier with such information. At testing or inference time, the model estimates the annotators' agreement level based on the current data input, which is then used to aid the classification.

In this paper, we consider a dataset comprising $N$ samples $\mathbf{X} = \left\{ x_{i}\right\}_{i = 1,\ldots ,N}$, with each sample $x_{i}$ being an image or a timestep in a body movement data sequence. For each sample $x_{i}$, $r_{i}^{j}$ denotes the annotation provided by the $j$-th annotator, with $\alpha_{i} \in \left\lbrack 0,1\right\rbrack$ being the agreement computed between annotators. For a binary task, $r_{i}^{j} \in \{ 0,1\}$. With such a dataset $\mathcal{D} = \left\{ x_{i},r_{i}^{j}\right\}_{i = 1,\ldots ,N}^{j = 1,\ldots ,J}$, the proposed method aims to improve the agreement level with all annotators. It should be noted that, for each sample $x_{i}$, the method does not expect annotations to be available from all $J$ annotators.

### 3.1 MODELING UNCERTAINTY IN AGREEMENT LEARNING
To enable a robust learning of the agreement between annotators, we consider modeling the uncertainty that could exist in the annotations. In our scenarios, the uncertainty comes from annotators' varying expertise exhibited in their annotations across different data samples, which may not follow specific prior distributions. Inspired by the study of Li et al. (2020), which proposed a general distribution for uncertainty modeling in the bounding box regression of object detection without relying on any prior distributions, we further propose a general agreement distribution $G\left( y_{i}\right)$ for agreement learning (see the upper part of Fig. 3). Therein, the distribution values are the possible agreement levels $y_{i}$ between annotators with a range of $\left\lbrack y_{i}^{0},y_{i}^{n}\right\rbrack$, which is further discretized into $\left\{ y_{i}^{0},y_{i}^{1},\ldots ,y_{i}^{n - 1},y_{i}^{n}\right\}$ with a uniform interval of 1. The general agreement distribution has the property $\sum_{k = 0}^{n}G\left( y_{i}^{k}\right) = 1$, which can be implemented with a softmax layer with $n + 1$ nodes. The predicted agreement $\widehat{y}_{i}$ for regression can be computed as the weighted sum of all the distribution values

$$
\widehat{y}_{i} = \sum_{k = 0}^{n} G\left( y_{i}^{k}\right) y_{i}^{k}. \tag{1}
$$
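
As a concrete illustration, here is a minimal NumPy sketch of the distribution head and the expected-value readout of Eq. 1; the function name and the rescaling of the discretized support to $[0, 1]$ are assumptions of this sketch rather than details fixed in the paper.

```python
import numpy as np

def predicted_agreement(logits):
    """Sketch of Eq. 1: expected agreement under the general distribution.

    logits: (..., n + 1) raw outputs feeding the softmax layer.
    """
    # Softmax over the n + 1 nodes gives G(y_i^k), which sums to 1.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    # Discretized agreement levels; assumed here to be rescaled to [0, 1].
    support = np.linspace(0.0, 1.0, logits.shape[-1])
    # Weighted sum of all distribution values (Eq. 1).
    return (probs * support).sum(axis=-1)
```
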
Figure 3: The learning of the agreement ${\alpha }_{i}$ between annotators is modeled with a general agreement distribution $G\left( {y}_{i}\right)$ using agreement regression loss ${\mathcal{L}}_{AR}$ (above), with the $\mathrm{X}$ axis of the distribution being the possible agreement levels ${y}_{i}$ and the $\mathrm{Y}$ axis being the respective probabilities. This learning can also be implemented as a linear regression task that learns to approach the exact agreement level ${\alpha }_{i}$ using RMSE loss (below).
To train the predicted agreement value $\widehat{y}_{i}$ toward the target agreement $\alpha_{i}$, inspired by the effectiveness of quantile regression in understanding the properties of a conditional distribution Koenker & Hallock (2001); Hao et al. (2007); Fan et al. (2019), we propose a novel Agreement Regression (AR) loss defined by

$$
\mathcal{L}_{AR}\left( \widehat{y}_{i},\alpha_{i}\right) = \max \left\lbrack \alpha_{i}\left( \widehat{y}_{i} - \alpha_{i}\right) ,\left( \alpha_{i} - 1\right) \left( \widehat{y}_{i} - \alpha_{i}\right) \right\rbrack . \tag{2}
$$
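
Concretely, Eq. 2 is a pinball (quantile) loss whose fixed quantile is swapped for the per-sample agreement; a minimal sketch (the function name is ours):

```python
import numpy as np

def ar_loss(y_hat, alpha):
    """Sketch of Eq. 2: quantile loss with the quantile set to alpha_i."""
    diff = y_hat - alpha
    return np.maximum(alpha * diff, (alpha - 1.0) * diff)
```
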
Compared with the original quantile regression loss, the quantile $q$ is replaced with the agreement $\alpha_{i}$ computed at the current input sample $x_{i}$. The quantile $q$ is usually fixed for a dataset, so as to understand the underlying distribution of the model's output at a given quantile. By replacing $q$ with $\alpha_{i}$, we optimize the general agreement distribution to focus on the given agreement level dynamically across samples.

In Li et al. (2021), the authors proposed to use the top $k$ values of the distribution and their mean to indicate the shape (flatness) of the distribution, which provides the level of uncertainty in object classification. In our case, all probabilities of the distribution are used to regularize the classifier. While this also informs the shape of the distribution from the perspective of uncertainty modeling, it additionally reveals the skewness reflecting the high or low agreement level learned at the current data sample. Thereon, two fully-connected layers with ReLU and Sigmoid activations, respectively, are used to process such information and produce the agreement indicator $\widetilde{y}_{i}$ for regularization.

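Since the implementation uses TensorFlow (see Appendix A.2), the indicator head could look like the Keras sketch below; the hidden width of 16 is our assumption, as the paper does not state it.

```python
import tensorflow as tf

# Maps the full distribution G(y_i) (its n + 1 probabilities) to the
# agreement indicator used for regularization.
indicator_head = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),    # reads shape/skewness
    tf.keras.layers.Dense(1, activation="sigmoid"),  # indicator in (0, 1)
])
```
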
#### 3.1.1 LEARNING AGREEMENT WITH LINEAR REGRESSION.
Alternatively, we can formulate the agreement learning as a plain linear regression task, modeled by a fully-connected layer with a Sigmoid activation function (see the lower part of Fig. 3). Then, the predicted agreement $\widehat{y}_{i}$ is directly taken as the agreement indicator $\widetilde{y}_{i}$ for regularization. Given the predicted agreement $\widehat{y}_{i}$ and target agreement $\alpha_{i}$ at each input sample $x_{i}$, by using the Root Mean Squared Error (RMSE), the linear regression loss is computed as

$$
\mathcal{L}_{RMSE}\left( \widehat{y},\alpha \right) = \left\lbrack \frac{1}{N}\sum_{i = 1}^{N}\left( \widehat{y}_{i} - \alpha_{i}\right)^{2}\right\rbrack^{\frac{1}{2}}. \tag{3}
$$
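
For completeness, a one-line sketch of Eq. 3:

```python
import numpy as np

def rmse_loss(y_hat, alpha):
    """Sketch of Eq. 3 over a batch of predicted and target agreements."""
    return np.sqrt(np.mean((np.asarray(y_hat) - np.asarray(alpha)) ** 2))
```
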
It should be noted that the proposed AR loss can also be used for this linear regression variant, which may help optimize the underlying distribution toward the given agreement level. In the experiments, an empirical comparison between the different variants for agreement learning is conducted.

### 3.2 REGULARIZING THE CLASSIFIER WITH AGREEMENT
Since the high-level information implied by the agreement between annotators could provide extra hints for classification tasks, we utilize the agreement indicator $\widetilde{y}_{i}$ to regularize the classifier training toward providing outcomes that are more in agreement with annotators. Given a binary classification task (a multi-class task can be decomposed into several binary ones), at input sample $x_{i}$, we denote the classifier's original predicted probability for the positive class by $\widehat{p}_{\theta }\left( x_{i}\right)$. The general idea is that, when the learned agreement indicator is i) at chance level, i.e., $\widetilde{y}_{i} = 0.5$, then $\widehat{p}_{\theta }\left( x_{i}\right)$ shall stay unchanged; ii) biased toward the positive/negative class, then the value of $\widehat{p}_{\theta }\left( x_{i}\right)$ shall be regularized toward the respective class. For these, we propose a novel regularization function written as

$$
\widetilde{p}_{\theta }\left( x_{i}\right) = \frac{\widehat{p}_{\theta }\left( x_{i}\right) e^{\lambda \left( \widetilde{y}_{i} - 0.5\right) }}{\widehat{p}_{\theta }\left( x_{i}\right) e^{\lambda \left( \widetilde{y}_{i} - 0.5\right) } + \left( 1 - \widehat{p}_{\theta }\left( x_{i}\right) \right) e^{\lambda \left( 0.5 - \widetilde{y}_{i}\right) }}, \tag{4}
$$
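
In code, Eq. 4 amounts to a two-class softmax over agreement-shifted evidence; a minimal sketch (the default $\lambda = 3.0$ follows Appendix A.2):

```python
import numpy as np

def regularize(p_hat, y_ind, lam=3.0):
    """Sketch of Eq. 4: shift the positive-class probability by agreement.

    p_hat: original predicted probability for the positive class.
    y_ind: agreement indicator in [0, 1]; 0.5 leaves p_hat unchanged.
    lam:   scale hyperparameter (3.0 in the paper's experiments).
    """
    pos = p_hat * np.exp(lam * (y_ind - 0.5))
    neg = (1.0 - p_hat) * np.exp(lam * (0.5 - y_ind))
    return pos / (pos + neg)
```
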

|
| 92 |
+
|
| 93 |
+
Figure 4: The property of the regularization function. The X and Y axes are the agreement indicator $\widetilde{y}_{i}$ and the regularized probability $\widetilde{p}_{\theta }\left( x_{i}\right)$, respectively. $\widetilde{p}_{\theta }\left( x_{i}\right)$ is regularized toward the class for which $\widetilde{y}_{i}$ is high, with $\lambda$ controlling the scale.

where $\widetilde{p}_{\theta }\left( x_{i}\right)$ is the regularized probability toward the positive class of the current binary task, and $\lambda$ is a hyperparameter controlling the scale at which the original predicted probability $\widehat{p}_{\theta }\left( x_{i}\right)$ changes toward $\widetilde{p}_{\theta }\left( x_{i}\right)$ as the agreement indicator increases/decreases. Fig. 4 shows the property of the function: for an original predicted probability $\widehat{p}_{\theta }\left( x_{i}\right) = 0.5$, a larger $\lambda$ augments the effect of the learned agreement indicator $\widetilde{y}_{i}$, so the output $\widetilde{p}_{\theta }\left( x_{i}\right)$ is regularized toward the more agreed (or disagreed) class; when $\widetilde{y}_{i}$ is at 0.5, where annotators are unable to reach an above-chance opinion about the task, the regularized probability stays unchanged with $\widetilde{p}_{\theta }\left( x_{i}\right) = \widehat{p}_{\theta }\left( x_{i}\right)$.

### 3.3 COMBATING IMBALANCES IN LOGARITHMIC LOSS
In this subsection, we first alleviate the influence of class imbalances present in the annotation of each annotator by refining the vanilla cross-entropy loss. We then explore the use of an agreement-oriented loss that may naturally avoid such imbalances during training.

#### 3.3.1 ANNOTATION BALANCING FOR EACH ANNOTATOR.
For the classifier stream, given the regularized probability $\widetilde{p}_{\theta }\left( x_{i}\right)$ at the current input sample $x_{i}$, the classifier is updated using the sum of the losses computed against the available annotation $r_{i}^{j}$ from each annotator. Due to the nature of the task (i.e., positive samples are sparse), the annotation from each annotator could be noticeably imbalanced. To address this problem, we use the Focal Loss (FL) Lin et al. (2017), written as follows.

$$
\mathcal{L}_{\mathrm{FL}}\left( p, g\right) = - \left| g - p\right|^{\gamma }\left( g\log \left( p\right) + \left( 1 - g\right) \log \left( 1 - p\right) \right) , \tag{5}
$$
where $p$ is the predicted probability of the model toward the positive class at the current data sample, $g \in \{ 0,1\}$ is the binary ground truth, and $\gamma \geq 0$ is the focusing parameter used to control the threshold for judging well-classified samples. A larger $\gamma$ leads to a lower threshold, so that more samples are treated as well-classified and down-weighted. In our scenario, the FL function is integrated into the following loss function to compute the average loss over all annotators.

$$
\mathcal{L}_{\theta }\left( \widetilde{\mathbf{P}}_{\theta },\mathbf{R}\right) = \frac{1}{J}\sum_{j = 1}^{J}\frac{1}{\dot{N}^{j}}\sum_{i = 1}^{\dot{N}^{j}}\mathcal{L}_{FL}\left( \widetilde{p}_{\theta }\left( x_{i}\right) ,r_{i}^{j}\right) , \tag{6}
$$
where $\dot{N}^{j} \leq N$ is the number of samples that have been labelled by the $j$-th annotator, $\widetilde{\mathbf{P}}_{\theta } = \left\{ \widetilde{p}_{\theta }\left( x_{i}\right) \right\}_{i = 1,\ldots ,N}$, and $\mathbf{R} = \left\{ r_{i}^{j}\right\}_{i = 1,\ldots ,\dot{N}^{j}}^{j = 1,\ldots ,J}$; $r_{i}^{j} =$ null if the $j$-th annotator did not annotate the $i$-th sample, in which case the loss is not computed for it.

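A minimal sketch of Eqs. 5 and 6 follows; encoding missing annotations as NaN and the function names are assumptions of this sketch.

```python
import numpy as np

def focal_loss(p, g, gamma):
    """Sketch of Eq. 5 for binary labels g in {0, 1}."""
    p = np.clip(p, 1e-7, 1.0 - 1e-7)
    return -np.abs(g - p) ** gamma * (g * np.log(p) + (1 - g) * np.log(1 - p))

def multi_annotator_loss(p_tilde, R, gammas):
    """Sketch of Eq. 6: average of the per-annotator mean focal losses.

    p_tilde: (N,) regularized positive-class probabilities.
    R:       (N, J) annotations, with NaN marking missing labels.
    gammas:  (J,) per-annotator focusing parameters (Eq. 7).
    """
    per_annotator = []
    for j in range(R.shape[1]):
        labelled = ~np.isnan(R[:, j])  # skip samples annotator j did not label
        per_annotator.append(
            focal_loss(p_tilde[labelled], R[labelled, j], gammas[j]).mean())
    return float(np.mean(per_annotator))
```
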
Additionally, searching for $\gamma$ manually for each annotator could be cumbersome, especially for a dataset labeled by numerous annotators. In this paper, we compute $\gamma$ given the number of samples annotated by each annotator per class of each binary task. The hypothesis is that, for annotations biased more toward one class, $\gamma$ shall be set larger, since a larger number of samples tends to be well-classified. We leverage the effective number of samples Cui et al. (2019) to compute each $\gamma_{j}$ as follows.

$$
\gamma_{j} = \frac{1 - \beta^{n_{k}^{j}}}{1 - \beta^{\dot{N}^{j} - n_{k}^{j}}}, \tag{7}
$$
where $n_{k}^{j}$ is the number of samples of the majority class $k$ in the current binary task annotated by annotator $j$, and $\beta = \frac{\dot{N}^{j} - 1}{\dot{N}^{j}}$.

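For example, Eq. 7 in code (a sketch; the function name is ours, and both classes are assumed to be present):

```python
def focusing_parameter(n_majority, n_total):
    """Sketch of Eq. 7: focusing parameter from effective numbers of samples.

    n_majority: count of the annotator's majority-class samples (n_k^j).
    n_total:    count of all samples labelled by the annotator (N^j).
    """
    beta = (n_total - 1) / n_total
    return (1 - beta ** n_majority) / (1 - beta ** (n_total - n_majority))
```

Since the majority class never has fewer samples than the minority class, this ratio is at least 1 and grows with the imbalance, matching the hypothesis above.
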
#### 3.3.2 AGREEMENT-ORIENTED LOSS.
In de La Torre et al. (2018), a Weighted Kappa Loss (WKL) was used to compute an agreement-oriented loss between the output of a model and the annotation of an annotator. As it is developed from Cohen's kappa, this loss may guide the model to pay attention to the overall agreement level instead of local mistakes. Thus, we may be able to avoid the cumbersome work of alleviating the class imbalances as above. This loss function can be written as follows.

$$
\mathcal{L}_{\mathrm{WKL}} = \log \left( 1 - \kappa \right) . \tag{8}
$$
The linear weighted kappa $\kappa$ Cohen (1968) is used in this equation, where the penalization weight is proportional to the distance between the predicted and the true class. We replace the FL loss written in Equation 5 with this loss to compute the weighted kappa loss across samples and annotators using Equation 6. The value range of this loss is $( - \infty ,\log 2\rbrack$, thus a Sigmoid function is applied before we sum the losses from each annotator. We compare this WKL loss function to the logarithmic one in our experiments.

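A minimal NumPy sketch of a 'soft' linear weighted kappa loss in the spirit of de La Torre et al. (2018) is given below; the exact confusion-matrix formulation is our assumption of how such a loss can be computed.

```python
import numpy as np

def weighted_kappa_loss(probs, labels, num_classes=2):
    """Sketch of Eq. 8: log(1 - kappa) with a soft linear weighted kappa.

    probs:  (N, C) predicted class probabilities.
    labels: (N,)   integer annotations from one annotator.
    """
    n = probs.shape[0]
    onehot = np.eye(num_classes)[labels]                  # (N, C)
    conf = onehot.T @ probs                               # soft confusion matrix O
    expected = np.outer(onehot.sum(0), probs.sum(0)) / n  # chance agreement E
    # Linear penalization weights, proportional to the class distance.
    w = np.abs(np.arange(num_classes)[:, None] - np.arange(num_classes)[None, :])
    # kappa = 1 - sum(w*O)/sum(w*E), so log(1 - kappa) = log(sum(w*O)/sum(w*E)).
    return np.log((w * conf).sum() / (w * expected).sum())
```
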
## 4 EXPERIMENTS
In this section, we evaluate our proposed method with data annotated by multiple human experts, where the objective ground truth is ambiguous to define. Please refer to the Appendix for dataset descriptions, implementation details, and the computation of agreement ground truth.
### 4.1 METRIC
Following Lovchinsky et al. (2019), we evaluate the performance of a model by using the agreement ratio as follows.
$$
\Delta = \frac{\mathrm{C}_{J}^{2}}{J}\,\frac{\sum_{j = 1}^{J}\operatorname{Sigmoid}\left( \kappa \left( \widetilde{\mathbf{P}}_{\theta },\mathbf{R}^{j}\right) \right) }{\sum_{j,{j}^{\prime } = 1,\, j \neq {j}^{\prime }}^{J}\operatorname{Sigmoid}\left( \kappa \left( \mathbf{R}^{j},\mathbf{R}^{j^{\prime }}\right) \right) }, \tag{9}
$$
where the numerator computes the average agreement over pairs of model predictions and each annotator's annotations, the denominator computes the average agreement between annotators, $\mathrm{C}_{J}^{2}$ denotes the number of distinct annotator pairs, and $\kappa$ is Cohen's kappa. The agreement ratio satisfies $\Delta > 0$, and is larger than 1 when the model performs better than the average annotator Lovchinsky et al. (2019).

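A minimal sketch of Eq. 9 using scikit-learn's Cohen's kappa, interpreting the denominator sum as running over distinct annotator pairs:

```python
import numpy as np
from itertools import combinations
from scipy.special import expit  # the logistic sigmoid
from sklearn.metrics import cohen_kappa_score

def agreement_ratio(preds, annotations):
    """Sketch of Eq. 9 (Lovchinsky et al., 2019).

    preds:       (N,) hard model predictions.
    annotations: list of J arrays, each (N,) labels from one annotator.
    """
    model_term = np.mean([expit(cohen_kappa_score(preds, r)) for r in annotations])
    rater_term = np.mean([expit(cohen_kappa_score(r1, r2))
                          for r1, r2 in combinations(annotations, 2)])
    return model_term / rater_term
```
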
### 4.2 RESULTS
#### 4.2.1 AGREEMENT-ORIENTED LOSS VS. LOGARITHMIC LOSS.
As shown in the first section of Table 1, models trained with the majority-voted ground truth produce agreement ratios of 1.0417 and 0.7616 with the logarithmic loss and annotation balancing (which in this case amounts to class balancing for the single majority-voted ground truth) on the EmoPain and MURA datasets, respectively. However, as shown in the second section of Table 1, directly exposing the model to all the annotations is harmful: using the logarithmic loss alone yields 0.9733 and 0.7564 on the two datasets, below the majority-voting results. Adding the balancing method during training improves the EmoPain performance to 1.0189, still below the majority-voting result, while on the MURA dataset it reaches 0.7665, better than the majority-voting result.

Table 1: The ablation experiment on the EmoPain and MURA datasets. Majority-voting refers to the method using the majority-voted ground truth for training. CE and WKL refer to the logarithmic and weighted kappa loss functions used in the classifier stream, respectively. Linear and Distributional refer to the agreement learning stream with linear regression and general agreement distribution, respectively. The best performance in each section is marked in bold per dataset.
| Framework/Annotator | CE | WKL | Annotation Balance | Linear | Distributional | Δ↑ EmoPain | Δ↑ MURA |
|---|---|---|---|---|---|---|---|
| Majority voting | ✓ | | ✓ | | | 1.0417 | 0.7616 |
| Majority voting | | ✓ | | | | **1.0452** | **0.7638** |
| Learn-from-all | ✓ | | | | | 0.9733 | 0.7564 |
| Learn-from-all | ✓ | | ✓ | | | 1.0189 | 0.7665 |
| Learn-from-all | | ✓ | | | | **1.0407** | **0.7751** |
| Learn2Agree (Ours) | ✓ | | ✓ | ✓ | | 1.0477 | 0.7727 |
| Learn2Agree (Ours) | ✓ | | ✓ | | ✓ | 1.0508 | 0.7796 |
| Learn2Agree (Ours) | | ✓ | | ✓ | | 1.0471 | 0.7768 |
| Learn2Agree (Ours) | | ✓ | | | ✓ | **1.0547** | **0.7801** |
| Annotator 1 | | | | | | 0.9613 | **1.0679** |
| Annotator 2 | | | | | | 1.0231 | 0.9984 |
| Annotator 3 | | | | | | **1.0447** | 0.9743 |
| Annotator 4 | | | | | | 0.9732 | 0.9627 |

These results show the importance of balancing when modeling with the logarithmic loss in a learn-from-all paradigm. With the WKL loss, the performances of the model in the majority-voting (1.0452/0.7638) and learn-from-all (1.0407/0.7751) paradigms are further improved. This shows the advantage of the WKL loss for improving the fit with multiple annotators, which also alleviates the need for class balancing strategies.

#### 4.2.2 THE IMPACT OF OUR LEARN2AGREE METHOD.
For both datasets, as shown in the third section of Table 1, our proposed Learn2Agree method using the general agreement distribution achieves the best overall performances of 1.0547 and 0.7801 on the two datasets, respectively. For the agreement learning stream, the combination of the general agreement distribution and the AR loss shows better performance than its variant using linear regression and RMSE on both datasets (1.0477 with the logarithmic loss and 0.7768 with the WKL loss). Such results could be due to the fact that the agreement indicator $\widetilde{y}_{i}$ produced by the linear regression is directly the estimated agreement value $\widehat{y}_{i}$, which could be largely affected by errors made during agreement learning. In contrast, with the general agreement distribution, the information passed to the classifier is first the shape and skewness of the distribution $G\left( y_{i}\right)$. Thus, it is more tolerant to errors (if any) made by the weighted sum used for regression in agreement learning.

#### 4.2.3 COMPARING WITH ANNOTATORS.
In the last section of Table 1, the annotation of each annotator is used to compute the agreement ratio against the other annotators (Equation 9).
For the EmoPain dataset, the best methods in the majority-voting (1.0452) and learn-from-all (1.0407) paradigms show very competitive if not better performances than annotator 3 (1.0447), who has the best agreement level with all the other annotators. Thereon, the proposed Learn2Agree method improves the performance to an even higher agreement ratio of 1.0547 against all the annotators. This performance suggests that, when adopted in real life, the model is able to analyze the protective behavior of people with CP at a level that is highly in agreement with the human experts.

However, for the MURA dataset, the best performance so far achieved by the Learn2Agree method, 0.7801, is still lower than that of annotator 1. This suggests that, in the current task setting, the model may make around 22% more errors than the human experts. One reason could be the challenge of the task itself: as shown in Rajpurkar et al. (2017), the same backbone achieved a similar if not better performance than the other radiologists for only one (wrist) of the seven upper extremity types. In this paper, the testing set comprises all the extremity types, which makes the experiment even more challenging. Future works may explore better backbones to tackle this.

Table 2: The experiment on the EmoPain dataset for analyzing the impact of Agreement Regression (AR) loss on agreement learning.
| Classifier Loss | Agreement Learning Type | Agreement Learning Loss | Δ↑ |
|---|---|---|---|
| CE | Linear | RMSE | 1.0477 |
| CE | Linear | AR | 0.9976 |
| CE | Distributional | RMSE | 1.0289 |
| CE | Distributional | AR | 1.0508 |
| WKL | Linear | RMSE | 1.0454 |
| WKL | Linear | AR | 1.035 |
| WKL | Distributional | RMSE | 1.0454 |
| WKL | Distributional | AR | 1.0482 |

Table 3: The experiment on the MURA dataset for analyzing the impact of Agreement Regression (AR) loss on agreement learning.
| Classifier Loss | Agreement Learning Type | Agreement Learning Loss | Δ↑ |
|---|---|---|---|
| CE | Linear | RMSE | 0.7727 |
| CE | Linear | AR | 0.7698 |
| CE | Distributional | RMSE | 0.7729 |
| CE | Distributional | AR | 0.7796 |
| WKL | Linear | RMSE | 0.7707 |
| WKL | Linear | AR | 0.7674 |
| WKL | Distributional | RMSE | 0.7724 |
| WKL | Distributional | AR | 0.7773 |

#### 4.2.4 THE IMPACT OF AGREEMENT REGRESSION LOSS.
The proposed AR loss can be used for both the distributional and the linear agreement learning stream. However, as seen in Table 2 and Table 3, the performance of linear agreement learning is better with the RMSE loss than with the AR loss. The design of the AR loss assumes that the loss computed for a given quantile is in accord with the corresponding agreement level. Thus, such results may be due to the gap between the quantile of the underlying distribution of the linear regression and the targeted agreement level: when the AR loss is used, the estimated agreement indicator passed to the classifier may not reflect the actual agreement level. Instead, for linear regression, a vanilla loss like RMSE ensures that the regressed value fits toward the actual agreement level.

By contrast, the proposed general agreement distribution directly adopts the range of agreement levels as the distribution values, which helps to narrow such a gap when the AR loss is used. Therein, the agreement indicator is extracted from the shape and skewness of the distribution (the probabilities of all distribution values), which could better reflect the agreement level when updated with the AR loss. As shown, the combination of distributional agreement learning and the AR loss achieves the best performance on each dataset.

## 5 CONCLUSION
In this paper, we targeted the scenario of learning from multiple annotators where the ground truth is ambiguous to define. Two medical datasets in this scenario were adopted for the evaluation. We showed that backbones developed with majority-voted ground truth or multiple annotations can be easily enhanced to achieve better agreement levels with annotators, by leveraging the underlying agreement information stored in the annotations. For agreement learning, our experiments demonstrate the advantage of learning with the proposed general agreement distribution and agreement regression loss, in comparison with other possible variants. Future works may extend this paper to prove its efficacy on datasets with multiple classes, as only binary tasks were considered here. Additionally, the learning of annotators' expertise seen in Tanno et al. (2019); Zhang et al. (2020); Ji et al. (2021) could be leveraged to weight the agreement computation and learning proposed in our method for cases where annotators are treated differently.

## REFERENCES
Min SH Aung, Sebastian Kaltwang, Bernardino Romera-Paredes, Brais Martinez, Aneesha Singh, Matteo Cella, Michel Valstar, Hongying Meng, Andrew Kemp, Moshen Shafizadeh, et al. The automatic detection of chronic pain-related expression: requirements, challenges and the multimodal emopain dataset. IEEE Transactions on Affective Computing, 7(4):435-451, 2015.

Bertrand Charpentier, Daniel Zügner, and Stephan Günnemann. Posterior network: Uncertainty estimation without ood samples via density-based pseudo-counts. arXiv preprint arXiv:2006.09239, 2020.

Jacob Cohen. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46, 1960.

Jacob Cohen. Weighted kappa: nominal scale agreement provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213, 1968.

Yin Cui, Menglin Jia, Tsung-Yi Lin, Yang Song, and Serge Belongie. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9268-9277, 2019.

Jordi de La Torre, Domenec Puig, and Aida Valls. Weighted kappa loss function for multi-class classification of ordinal data in deep learning. Pattern Recognition Letters, 105:144-154, 2018.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255. IEEE, 2009.

Chenyou Fan, Yuze Zhang, Yi Pan, Xiaoyue Li, Chi Zhang, Rong Yuan, Di Wu, Wensheng Wang, Jian Pei, and Heng Huang. Multi-horizon time series forecasting with temporal attention learning. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 2527-2535, 2019.

Sergio Felipe, Aneesha Singh, Caroline Bradley, Amanda CdeC Williams, and Nadia Bianchi-Berthouze. Roles for personal informatics in chronic pain. In 2015 9th International Conference on Pervasive Computing Technologies for Healthcare, pp. 161-168. IEEE, 2015.

Joseph L Fleiss. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378, 1971.

Melody Guan, Varun Gulshan, Andrew Dai, and Geoffrey Hinton. Who said what: Modeling individual labelers improves classification. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

Lingxin Hao and Daniel Q Naiman. Quantile regression. Sage, 2007.

Jennifer Healey. Recording affect in the field: Towards methods and metrics for improving ground truth labels. In International Conference on Affective Computing and Intelligent Interaction, pp. 107-116. Springer, 2011.

Ninghang Hu, Gwenn Englebienne, Zhongyu Lou, and Ben Kröse. Learning to recognize human activities using soft labels. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(10):1973-1984, 2016.

Ping Hu, Stan Sclaroff, and Kate Saenko. Uncertainty-aware learning for zero-shot semantic segmentation. Advances in Neural Information Processing Systems, 33, 2020.

Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4700-4708, 2017.

Wei Ji, Shuang Yu, Junde Wu, Kai Ma, Cheng Bian, Qi Bi, Jingjing Li, Hanruo Liu, Li Cheng, and Yefeng Zheng. Learning calibrated medical image segmentation via multi-rater agreement modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12341-12351, 2021.

Davood Karimi, Haoran Dou, Simon K Warfield, and Ali Gholipour. Deep learning with noisy labels: Exploring techniques and remedies in medical image analysis. Medical Image Analysis, 65:101759, 2020.

Francis J Keefe and Andrew R Block. Development of an observation method for assessing pain behavior in chronic low back pain patients. Behavior Therapy, 1982.

A Kendall, V Badrinarayanan, and R Cipolla. Bayesian segnet: Model uncertainty in deep convolutional encoder-decoder architectures for scene understanding. In British Machine Vision Conference, 2017.

Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Andrea Kleinsmith, Nadia Bianchi-Berthouze, and Anthony Steed. Automatic recognition of non-acted affective postures. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 41(4):1027-1038, 2011.

Roger Koenker and Kevin F Hallock. Quantile regression. Journal of Economic Perspectives, 15(4):143-156, 2001.

Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474, 2016.

Thomas A Lampert, André Stumpf, and Pierre Gançarski. An empirical study into annotator agreement, ground truth estimation, and algorithm evaluation. IEEE Transactions on Image Processing, 25(6):2557-2572, 2016.

Christian Leibig, Vaneeda Allken, Murat Seçkin Ayhan, Philipp Berens, and Siegfried Wahl. Leveraging uncertainty information from deep neural networks for disease detection. Scientific Reports, 7(1):1-14, 2017.

Xiang Li, Wenhai Wang, Lijun Wu, Shuo Chen, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. Generalized focal loss: Learning qualified and distributed bounding boxes for dense object detection. arXiv preprint arXiv:2006.04388, 2020.

Xiang Li, Wenhai Wang, Xiaolin Hu, Jun Li, Jinhui Tang, and Jian Yang. Generalized focal loss v2: Learning reliable localization quality estimation for dense object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11632-11641, 2021.

Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2980-2988, 2017.

Chengjiang Long and Gang Hua. Multi-class multi-annotator active learning with robust gaussian process for visual recognition. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2839-2847, 2015.

Chengjiang Long, Gang Hua, and Ashish Kapoor. Active visual recognition with expertise estimation in crowdsourcing. In Proceedings of the IEEE International Conference on Computer Vision, pp. 3000-3007, 2013.

Igor Lovchinsky, Alon Daks, Israel Malkin, Pouya Samangouei, Ardavan Saeedi, Yang Liu, Swami Sankaranarayanan, Tomer Gafner, Ben Sternlieb, Patrick Maher, et al. Discrepancy ratio: Evaluating model performance when even experts disagree on the truth. In International Conference on Learning Representations, 2019.

Lingni Ma, Jörg Stückler, Christian Kerl, and Daniel Cremers. Multi-view deep learning for consistent semantic mapping with rgb-d cameras. In 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 598-605. IEEE, 2017.

Hongying Meng, Andrea Kleinsmith, and Nadia Bianchi-Berthouze. Multi-score learning for affect recognition: the case of body postures. In International Conference on Affective Computing and Intelligent Interaction, pp. 225-234. Springer, 2011.

Janis Postels, Francesco Ferroni, Huseyin Coskun, Nassir Navab, and Federico Tombari. Sampling-free epistemic uncertainty estimation using approximated variance propagation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2931-2940, 2019.

Pranav Rajpurkar, Jeremy Irvin, Aarti Bagul, Daisy Ding, Tony Duan, Hershel Mehta, Brandon Yang, Kaylie Zhu, Dillon Laird, Robyn L Ball, et al. Mura: Large dataset for abnormality detection in musculoskeletal radiographs. arXiv preprint arXiv:1712.06957, 2017.

Yichen Shen, Zhilu Zhang, Mert R Sabuncu, and Lin Sun. Real-time uncertainty estimation in computer vision via uncertainty-aware distribution distillation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 707-716, 2021.

Aneesha Singh, Stefano Piana, Davide Pollarolo, Gualtiero Volpe, Giovanna Varni, Ana Tajadura-Jimenez, Amanda CdeC Williams, Antonio Camurri, and Nadia Bianchi-Berthouze. Go-with-the-flow: tracking, analysis and sonification of movement and breathing to build confidence in activity despite chronic pain. Human-Computer Interaction, 31(3-4):335-383, 2016.

James Surowiecki. The wisdom of crowds. Anchor, 2005.

Ryutaro Tanno, Ardavan Saeedi, Swami Sankaranarayanan, Daniel C Alexander, and Nathan Silberman. Learning from noisy labels by regularized estimation of annotator confusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11244-11253, 2019.

Chongyang Wang, Yuan Gao, Akhil Mathur, Amanda C. DE C. Williams, Nicholas D Lane, and Nadia Bianchi-Berthouze. Leveraging activity recognition to enable protective behavior detection in continuous data. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(2), 2021a.

Chongyang Wang, Temitayo A Olugbade, Akhil Mathur, Amanda C DE C Williams, Nicholas D Lane, and Nadia Bianchi-Berthouze. Chronic pain protective behavior detection with deep learning. ACM Transactions on Computing for Healthcare, 2(3):1-24, 2021b.

Simon K Warfield, Kelly H Zou, and William M Wells. Simultaneous truth and performance level estimation (staple): an algorithm for the validation of image segmentation. IEEE Transactions on Medical Imaging, 23(7):903-921, 2004.

Yan Yan, Rómer Rosales, Glenn Fung, Mark Schmidt, Gerardo Hermosillo, Luca Bogoni, Linda Moy, and Jennifer Dy. Modeling annotator expertise: Learning when everybody knows a bit of something. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pp. 932-939. JMLR Workshop and Conference Proceedings, 2010.

Yan Yan, Rómer Rosales, Glenn Fung, Ramanathan Subramanian, and Jennifer Dy. Learning from multiple annotators with varying expertise. Machine Learning, 95(3):291-327, 2014.

Le Zhang, Ryutaro Tanno, Mou-Cheng Xu, Chen Jin, Joseph Jacob, Olga Ciccarelli, Frederik Barkhof, and Daniel C Alexander. Disentangling human error from the ground truth in segmentation of medical images. arXiv preprint arXiv:2007.15963, 2020.

## A APPENDIX
### A.1 DATASETS
Two medical datasets are selected, involving data of body movement sequences and bone X-rays.
#### A.1.1 EMOPAIN.
The EmoPain Aung et al. (2015) dataset contains skeleton-like movement data collected from 18 participants with CP and 12 healthy participants while they perform a variety of full-body physical rehabilitation activities (e.g. stretching forward and sitting down). In total, we have 46 activity sequences collected from these 30 participants, with each sequence lasting for about 10 minutes (or 36,000 samples). A binary task is included to detect the presence of protective behavior (e.g. hesitation, guarding) Keefe & Block (1982) exhibited by participants with CP during the performances. The detection of such behavior could be leveraged to generate automatic feedback and inform therapeutic personalized interventions Wang et al. (2021a). Four experts were recruited to provide the binary annotations of the presence or absence of protective behavior per timestep for each CP participant data sequence.
#### A.1.2 MURA.
The MURA dataset Rajpurkar et al. (2017) comprises 40,561 radiographic images of 7 upper extremity types (i.e., shoulder, humerus, elbow, forearm, wrist, hand, and finger), and is used for the binary classification of abnormality. This dataset is officially split into training (36,808 images), validation (3,197 images), and testing (556 images) sets, with no overlap in subjects. The training and validation sets are publicly available, with each image labelled by a radiologist. For the testing set, the authors of MURA recruited six additional radiologists for annotation, and defined the ground truth by majority voting among three randomly-picked radiologists. The remaining three radiologists achieved Cohen's kappas with this ground truth of 0.731, 0.763, and 0.778, respectively. To simulate the opinions of different experts for the data we have access to, three synthetic annotators are created to reach Cohen's kappas with the existing annotator of 0.80, 0.75, and 0.70, respectively.

### A.2 IMPLEMENTATION DETAILS
For experiments on the EmoPain dataset, the state-of-the-art HAR-PBD network Wang et al. (2021a) is adopted as the backbone, and Leave-One-Subject-Out validation is conducted across the participants with CP. The average of the performances achieved over all folds is reported. The training data is augmented by adding Gaussian noise and cropping, as in Wang et al. (2021b). The number of bins used in the general agreement distribution is set to 10, i.e., the respective softmax layer has 11 nodes. The $\lambda$ used in the regularization function is set to 3.0. For experiments on the MURA dataset, the DenseNet-169 network Huang et al. (2017) pretrained on the ImageNet dataset Deng et al. (2009) is used as the backbone. The original validation set is used as the testing set, where the first view (image) from each of the 7 upper extremity types of a subject is used. Images are all resized to $224 \times 224$, while images in the training set are further augmented with random lateral inversions and rotations of up to 30 degrees. The number of bins is set to 5, and $\lambda$ is set to 3.0.

For all the experiments, the classifier stream is implemented with a fully-connected layer using a Softmax activation with two output nodes for the binary classification task. Adam Kingma & Ba (2014) is used as the optimizer with a learning rate of $1\mathrm{e}{-4}$, which is reduced by a factor of 10 if the performance has not improved after 10 epochs. The number of epochs is set to 50. The logarithmic loss is adopted by default, as written in Equations 5 and 6, while the WKL loss (Equation 8) is used for comparison where mentioned. For the agreement learning stream, the AR loss is used for the distributional variant, while the RMSE loss is used for the linear regression variant. We implement our method with the TensorFlow deep learning library on a PC with an RTX 3080 GPU and 32 GB of memory.

### A.3 AGREEMENT COMPUTATION
For a binary task, the agreement level ${\alpha }_{i}$ between annotators is computed as follows.
$$
\alpha_{i} = \frac{1}{\dot{J}}\sum_{j = 1}^{\dot{J}}w_{i}^{j}r_{i}^{j}, \tag{10}
$$
where $\dot{J}$ is the number of annotators that have labelled the sample $x_{i}$. In this way, $\alpha_{i} \in \left\lbrack 0,1\right\rbrack$ stands for the agreement of annotators toward the positive class of the current binary task. In this work, we assume each sample was labelled by at least one annotator. $w_{i}^{j}$ is the weight for the annotation provided by the $j$-th annotator, which could be used to reflect the different levels of expertise of annotators. The weight can be set manually given prior knowledge about the annotator, or used as a learnable parameter for the model to estimate. In this work, we treat annotators equally by setting $w_{i}^{j}$ to 1. We leave the discussion of other situations to future works.
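
For instance, a minimal sketch of Eq. 10 with unit weights:

```python
import numpy as np

def agreement_level(r_i, w=None):
    """Sketch of Eq. 10: agreement toward the positive class at sample x_i.

    r_i: binary labels {0, 1} from the annotators who labelled x_i.
    w:   optional per-annotator weights; defaults to 1 (as in the paper).
    """
    r_i = np.asarray(r_i, dtype=float)
    w = np.ones_like(r_i) if w is None else np.asarray(w, dtype=float)
    return float(np.mean(w * r_i))
```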
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/GYgu8Yq_96/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,267 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
§ LEARN2AGREE: FITTING WITH MULTIPLE ANNOTATORS WITHOUT OBJECTIVE GROUND TRUTH
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
The annotation of domain experts is important for some medical applications where the objective ground truth is ambiguous to define, e.g., the rehabilitation for some chronic diseases, and the prescreening of some musculoskeletal abnormalities without further medical examinations. However, improper uses of the annotations may hinder developing reliable models. On one hand, forcing the use of a single ground truth generated from multiple annotations is less informative for the modeling. On the other hand, feeding the model with all the annotations without proper regularization is noisy given existing disagreements. For such issues, we propose a novel Learning to Agreement (Learn2Agree) framework to tackle the challenge of learning from multiple annotators without objective ground truth. The framework has two streams, with one stream fitting with the multiple annotators and the other stream learning agreement information between annotators. In particular, the agreement learning stream produces regularization information to the classifier stream, tuning its decision to be better in line with the agreement between annotators. The proposed method can be easily added to existing backbones, with experiments on two medical datasets showed better agreement levels with annotators.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
There exist difficulties for model development in applications where the objective ground truth is difficult to establish or ambiguous merely given the input data itself. That is, the decision-making, i.e. the detection, classification, and segmentation process, is based on not only the presented data but also the expertise or experiences of the annotator. However, the disagreements existed in the annotations hinder the definition of a good single ground truth. Therefore, an important part of supervise learning for such applications is to achieve better fitting with annotators. In this learning scenario, the input normally comprises pairs of $\left( {{\mathbf{X}}_{i},{r}_{i}^{j}}\right)$ , where ${\mathbf{X}}_{i}$ and ${r}_{i}^{j}$ are respectively the data of $i$ -th sample and the label provided by $r$ -th annotator. Given such input, naïve methods aim to provide a single set of ground truth label for model development. Therein, a common practice is to aggregate these multiple annotations with majority voting (Surowiecki, 2005). However, majority-voting could misrepresent the data instances where the disagreement between different annotators is high. This is particularly harmful for applications where differences in expertise or experiences exist in annotators.
|
| 14 |
+
|
| 15 |
+
Except for majority-voting, some have tried to estimate the ground truth label using STAPLE (Warfield et al., 2004) based on Expectation-Maximization (EM) algorithms. Nevertheless, such methods are sensitive to the variance in annotations and the data size (Lampert et al., 2016; Karimi et al.,2020). When the number of annotations per ${\mathbf{X}}_{i}$ is modest, efforts are put into creating models that utilize all the annotations with multi-score learning (Meng et al., 2011) or soft labels (Hu et al., 2016). Recent approaches have instead focused on leveraging or learning the expertise of annotators while training the model (Long et al., 2013; Long & Hua, 2015; Healey, 2011; Guan et al., 2018; Ji et al., 2021; Yan et al., 2014; 2010; Tanno et al., 2019; Zhang et al., 2020). A basic idea is to refine the classification or segmentation toward the underlying ground truth by modeling annotators.
|
| 16 |
+
|
| 17 |
+
In this paper, we focus on a hard situation when the ground truth is ambiguous to define. On one hand, this could be due to the missing of objective ground truth in a specific scenario. For instance, in the analysis of bodily movement behavior for chronic-pain (CP) rehabilitation, the self-awareness of people with CP about their exhibited pain or fear-related behaviors is low, thus physiotherapists play a key role in judging it (Felipe et al., 2015; Singh et al., 2016). However, since the physiotherapists are assessing the behavior on the basis of visual observations, they may disagree on the judgment or ground truth. Additionally, the ground truth could be temporarily missing, at a special stage of the task. For example, in abnormality prescreening for bone X-rays, except for abnormalities like fractures and hardware implantation that are obvious and deterministic, other types like degenerative diseases and miscellaneous abnormalities are mainly diagnosed with further medical examinations (Rajpurkar et al., 2017). That is, at prescreening stage, the opinion of the doctor makes the decision, which could disagree with other doctors or the final medical examination though.
|
| 18 |
+
|
| 19 |
+
θ θ θ Learn from all & Regularize with agreement Learn from majority-voted GT Learn from all
|
| 20 |
+
|
| 21 |
+
Figure 1: The proposed Learn2Agree framework regularizes the classifier that fits with all annotators with the estimated agreement information between annotators.
|
| 22 |
+
|
| 23 |
+
Thereon, unlike the traditional modeling goal that usually requires the existence of a set of ground truth labels to evaluate the performance, the objective of modeling in this paper is to improve the overall agreement between the model and annotators. Our contributions are four-fold: (i) We propose a novel Learn2Agree framework to directly leverage the agreement information stored in annotations from multiple annotators to regularize the behavior of the classifier that learns from them; (ii) To improve the robustness, we propose a general agreement distribution and an agreement regression loss to model the uncertainty in annotations; (iii) To regularize the classifier, we propose a regularization function to tune the classifier to better agree with all annotators; (iv) Our method noticeably improves existing backbones for better agreement levels with all annotators on classification tasks in two medical datasets, involving data of body movement sequences and bone X-rays.
|
| 24 |
+
|
| 25 |
+
§ 2 RELATED WORK
|
| 26 |
+
|
| 27 |
+
§ 2.1 MODELING ANNOTATORS
|
| 28 |
+
|
| 29 |
+
The leveraging or learning of annotators' expertise for better modeling is usually implemented in a two-step or multiphase manner, or integrated to run simultaneously. For the first category, one way to acquire the expertise is by referring to the prior knowledge about the annotation, e.g. the year of experience of each annotator, and the discussion held on the disagreed annotations. With such prior knowledge, studies in Long et al. (2013); Long & Hua (2015); Healey (2011) propose to distill the annotations, deciding which annotator to trust for disagreed samples. Without the access to such prior knowledge, the expertise, or behavior of an annotator can also be modeled given the annotation and the data, which could be used as a way to weight each annotator in the training of a classification model Guan et al. (2018), or adopted to refine the segmentation learned from multiple annotators Ji et al. (2021). More close to ours are the ones that simultaneously model the expertise of annotators while training the classifier. Previous efforts are seen on using probabilistic models Yan et al. (2014; 2010) driven by EM algorithms, and multi-head models that directly model annotators as confusion matrices estimated in comparison with the underlying ground truth Tanno et al. (2019); Zhang et al. (2020). While the idea behind these works may indeed work for applications where the distance between each annotator and the underlying ground truth exists and can be estimated in some ways to refine the decision-making of a model, we argue that in some cases it is (at least temporarily) difficult to assume the existence of the underlying ground truth.
|
| 30 |
+
|
| 31 |
+
§ 2.2 MODELING UNCERTAINTY
|
| 32 |
+
|
| 33 |
+
Modeling uncertainty is a popular topic in the computer vision domain, especially for tasks of semantic segmentation and object detection. Therein, existing methods can be categorized into two groups: i) Bayesian methods, where parameters of the posterior distribution (e.g., mean and variance) of the uncertainty are estimated with Monte Carlo dropout Leibig et al. (2017); Kendall et al. (2017); Ma et al. (2017) and parametric learning Hu et al. (2020); Charpentier et al. (2020), etc.; and ii) 'non-Bayesian' alternatives, where the distribution of uncertainty is learned with ensemble methods Lakshminarayanan et al. (2016), variance propagation Postels et al. (2019), and knowledge distillation Shen et al. (2021), etc. Aside from their complex and time-consuming training or inference strategies, another characteristic of these methods is their dependence on Gaussian or Dirac delta distributions as the prior assumption.
|
| 34 |
+
|
| 35 |
+
[Figure 2 diagram: the backbone maps the input to the classifier prediction ${\widehat{P}}_{\theta }(x)$ and, through ReLU/Sigmoid heads, to the agreement distribution $G(y)$ with output $\widehat{y}$ and loss $L(\widehat{y},\alpha)$; the regularized prediction ${\widetilde{P}}_{\theta }(x)$ is trained with per-annotator losses $L({\widetilde{P}}_{\theta }(x), r^{j})$.]
|
| 36 |
+
|
| 37 |
+
Figure 2: An overview of our Learn2Agree framework, comprising i) (above) the classifier stream with original prediction ${\widehat{p}}_{\theta }\left( {x}_{i}\right)$ that fits with the available annotations ${\left\{ {r}_{i}^{j}\right\} }^{j = 1,\ldots ,J}$; and ii) (below) the agreement learning stream that learns to estimate ${\widehat{y}}_{i}$ of the agreement level ${\alpha }_{i}$ between annotators, and leverages such information to compute the regularized prediction ${\widetilde{p}}_{\theta }\left( {x}_{i}\right)$.
|
| 38 |
+
|
| 39 |
+
§ 2.3 EVALUATION WITHOUT GROUND TRUTH
|
| 40 |
+
|
| 41 |
+
In the context of modeling multiple annotations without ground truth, typical evaluation measures rely on metrics of agreement. For example, Kleinsmith et al. (2011) uses agreement metrics, e.g., Cohen's Kappa Cohen (1960) and Fleiss' Kappa Fleiss (1971), to compare the agreement level between a system and an annotator against the agreement level between other unseen annotators, in a cross-validation manner. However, this method does not consider how to directly learn from all the annotators, nor how to evaluate the performance of the model in that case. To this end, Lovchinsky et al. (2019) proposes a metric named the discrepancy ratio. In short, the metric compares model-annotator performance against annotator-annotator performance, where the performance can be computed as a discrepancy, e.g., with absolute error, or as an agreement, e.g., with Cohen's kappa. In this paper, we use Cohen's kappa as the agreement calculator together with such a metric to evaluate the performance of our method. We refer to this metric as the agreement ratio.
|
| 42 |
+
|
| 43 |
+
§ 3 METHOD
|
| 44 |
+
|
| 45 |
+
An overview of our proposed Learn2Agree framework is shown in Fig. 2. The core of our proposed method is to learn to estimate the agreement between different annotators based on their raw annotations, and simultaneously utilize the agreement-level estimation to regularize the training of the classification task. Therein, the components of the proposed method address two aspects: learning the agreement levels between annotators, and regularizing the classifier with such information. At testing or inference time, the model estimates the annotators' agreement level from the current data input, which is then used to aid the classification.
|
| 46 |
+
|
| 47 |
+
In this paper, we consider a dataset comprising $N$ samples $\mathbf{X} = {\left\{ {x}_{i}\right\} }_{i = 1,\ldots ,N}$, with each sample ${x}_{i}$ being an image or a timestep in a body movement data sequence. For each sample ${x}_{i}$, ${r}_{i}^{j}$ denotes the annotation provided by the $j$-th annotator, with ${\alpha }_{i} \in \left\lbrack {0,1}\right\rbrack$ being the agreement computed between annotators. For a binary task, ${r}_{i}^{j} \in \{ 0,1\}$. With such a dataset $\mathcal{D} = {\left\{ {x}_{i},{r}_{i}^{j}\right\} }_{i = 1,\ldots ,N}^{j = 1,\ldots ,J}$, the proposed method aims to improve the agreement level with all annotators. It should be noted that, for each sample ${x}_{i}$, the method does not require annotations to be available from all $J$ annotators.
|
| 48 |
+
|
| 49 |
+
§ 3.1 MODELING UNCERTAINTY IN AGREEMENT LEARNING
|
| 50 |
+
|
| 51 |
+
To enable robust learning of the agreement between annotators, we consider modeling the uncertainty that could exist in the annotations. In our scenarios, the uncertainty comes from annotators' varying expertise exhibited in their annotations across different data samples, which may not follow specific prior distributions. Inspired by the study of Li et al. (2020), which proposed to use a general distribution for uncertainty modeling in bounding box regression for object detection without relying on any prior distributions, we propose a general agreement distribution $G\left( {y}_{i}\right)$ for agreement learning (see the upper part of Fig. 3). Therein, the distribution values are the possible agreement levels ${y}_{i}$ between annotators within a range of $\left\lbrack {{y}_{i}^{0},{y}_{i}^{n}}\right\rbrack$, which is further discretized into $\left\{ {{y}_{i}^{0},{y}_{i}^{1},\ldots ,{y}_{i}^{n - 1},{y}_{i}^{n}}\right\}$ with a uniform interval of 1. The general agreement distribution has the property $\mathop{\sum }\limits_{{k = 0}}^{n}G\left( {y}_{i}^{k}\right) = 1$, which can be implemented with a softmax layer with $n + 1$ nodes. The predicted agreement ${\widehat{y}}_{i}$ for regression can be computed as the weighted sum over all the distribution values
|
| 52 |
+
|
| 53 |
+
$$
|
| 54 |
+
{\widehat{y}}_{i} = \mathop{\sum }\limits_{{k = 0}}^{n}G\left( {y}_{i}^{k}\right) {y}_{i}^{k}. \tag{1}
|
| 55 |
+
$$
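As an illustration of how the distributional agreement learning stream can be implemented, the following PyTorch sketch outputs $G(y_i)$ via a softmax over $n+1$ nodes and computes $\widehat{y}_i$ as the weighted sum of Eq. (1). The layer sizes, the even spacing of the agreement levels over $[0,1]$, and the class and argument names are our own illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class AgreementDistributionHead(nn.Module):
    """Minimal sketch of the distributional agreement-learning head.

    Outputs a general agreement distribution G(y) over n+1 discrete
    agreement levels (assumed evenly spaced in [0, 1]) and the predicted
    agreement as its expectation, following Eq. (1).
    """
    def __init__(self, feat_dim: int, n_bins: int = 10):
        super().__init__()
        self.logits = nn.Linear(feat_dim, n_bins + 1)  # n+1 softmax nodes
        # Discrete agreement levels y^0, ..., y^n.
        self.register_buffer("levels", torch.linspace(0.0, 1.0, n_bins + 1))

    def forward(self, feats: torch.Tensor):
        g = torch.softmax(self.logits(feats), dim=-1)   # G(y), sums to 1
        y_hat = (g * self.levels).sum(dim=-1)           # Eq. (1): weighted sum
        return g, y_hat
```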
|
| 56 |
+
|
| 57 |
+
|
| 58 |
+
|
| 59 |
+
Figure 3: The learning of the agreement ${\alpha }_{i}$ between annotators is modeled with a general agreement distribution $G\left( {y}_{i}\right)$ using agreement regression loss ${\mathcal{L}}_{AR}$ (above), with the $\mathrm{X}$ axis of the distribution being the possible agreement levels ${y}_{i}$ and the $\mathrm{Y}$ axis being the respective probabilities. This learning can also be implemented as a linear regression task that learns to approach the exact agreement level ${\alpha }_{i}$ using RMSE loss (below).
|
| 60 |
+
|
| 61 |
+
For training the predicted agreement value ${\widehat{y}}_{i}$ toward the target agreement ${\alpha }_{i}$, inspired by the effectiveness of quantile regression in understanding the properties of conditional distributions Koenker & Hallock (2001); Hao et al. (2007); Fan et al. (2019), we propose a novel Agreement Regression (AR) loss defined by
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
{\mathcal{L}}_{AR}\left( {{\widehat{y}}_{i},{\alpha }_{i}}\right) = \max \left\lbrack {{\alpha }_{i}\left( {{\widehat{y}}_{i} - {\alpha }_{i}}\right) ,\left( {{\alpha }_{i} - 1}\right) \left( {{\widehat{y}}_{i} - {\alpha }_{i}}\right) }\right\rbrack . \tag{2}
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
Compared with the original quantile regression loss, the quantile $q$ is replaced with the agreement ${\alpha }_{i}$ computed at the current input sample ${x}_{i}$. The quantile $q$ is usually fixed for a dataset, so as to characterize the underlying distribution of the model's output at a given quantile. By replacing $q$ with ${\alpha }_{i}$, we optimize the general agreement distribution to focus on the given agreement level dynamically across samples.
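A minimal sketch of the AR loss of Eq. (2), written as a batch-averaged pinball loss whose quantile is replaced by the per-sample target agreement; the function name and batch-averaging convention are assumptions.

```python
import torch

def agreement_regression_loss(y_hat: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
    """Agreement Regression (AR) loss of Eq. (2), averaged over the batch.

    y_hat: predicted agreement levels, shape (B,).
    alpha: target agreement levels in [0, 1], shape (B,).
    """
    diff = y_hat - alpha
    loss = torch.maximum(alpha * diff, (alpha - 1.0) * diff)
    return loss.mean()
```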
|
| 68 |
+
|
| 69 |
+
In Li et al. (2021), the authors proposed to use the top $k$ values of the distribution and their mean to indicate the shape (flatness) of the distribution, which provides the level of uncertainty in object classification. In our case, all probabilities of the distribution are used to regularize the classifier. While this also conveys the shape of the distribution from the perspective of uncertainty modeling, it additionally reveals the skewness reflecting the high or low agreement level learned at the current data sample. Thereon, two fully-connected layers with ReLU and Sigmoid activations, respectively, are used to process this information and produce the agreement indicator ${\widetilde{y}}_{i}$ for regularization.
|
| 70 |
+
|
| 71 |
+
§ 3.1.1 LEARNING AGREEMENT WITH LINEAR REGRESSION.
|
| 72 |
+
|
| 73 |
+
Straightforwardly, we can also formulate the agreement learning as a plain linear regression task, modelled by a fully-connected layer with a Sigmoid activation function (see the lower part of Fig.3). Then, the predicted agreement ${\widehat{y}}_{i}$ is directly taken as the agreement indicator ${\widetilde{y}}_{i}$ for regularization. Given the predicted agreement ${\widehat{y}}_{i}$ and target agreement ${\alpha }_{i}$ at each input sample ${x}_{i}$ , by using Root Mean Squared Error (RMSE), the linear regression loss is computed as
|
| 74 |
+
|
| 75 |
+
$$
|
| 76 |
+
{\mathcal{L}}_{RMSE}\left( {\widehat{y},\alpha }\right) = {\left\lbrack \frac{1}{N}\mathop{\sum }\limits_{i = 1}^{N}{\left( {\widehat{y}}_{i} - {\alpha }_{i}\right) }^{2}\right\rbrack }^{\frac{1}{2}}. \tag{3}
|
| 77 |
+
$$
|
| 78 |
+
|
| 79 |
+
It should be noted that the proposed AR loss can also be used for this linear regression variant, which may help optimize the underlying distribution toward the given agreement level. In the experiments, we conduct an empirical comparison between the different variants for agreement learning.
|
| 80 |
+
|
| 81 |
+
§ 3.2 REGULARIZING THE CLASSIFIER WITH AGREEMENT
|
| 82 |
+
|
| 83 |
+
Since the high-level information implied by the agreement between annotators could provide extra hints in classification tasks, we utilize the agreement indicator ${\widetilde{y}}_{i}$ to regularize the classifier training toward providing outcomes that are more in agreement with annotators. Given a binary classification task (a multi-class task can be decomposed into several binary ones), at input sample ${x}_{i}$, we denote the classifier's original predicted probability toward the positive class as ${\widehat{p}}_{\theta }\left( {x}_{i}\right)$. The general idea is that, when the learned agreement indicator is i) at chance level, i.e., ${\widetilde{y}}_{i} = {0.5}$, then ${\widehat{p}}_{\theta }\left( {x}_{i}\right)$ shall stay unchanged; or ii) biased toward the positive/negative class, then the value of ${\widehat{p}}_{\theta }\left( {x}_{i}\right)$ shall be regularized toward the respective class. For these, we propose a novel regularization function written as
|
| 84 |
+
|
| 85 |
+
$$
{\widetilde{p}}_{\theta }\left( {x}_{i}\right) = \frac{{\widehat{p}}_{\theta }\left( {x}_{i}\right) {e}^{\lambda \left( {{\widetilde{y}}_{i} - {0.5}}\right) }}{{\widehat{p}}_{\theta }\left( {x}_{i}\right) {e}^{\lambda \left( {{\widetilde{y}}_{i} - {0.5}}\right) } + \left( {1 - {\widehat{p}}_{\theta }\left( {x}_{i}\right) }\right) {e}^{\lambda \left( {{0.5} - {\widetilde{y}}_{i}}\right) }}, \tag{4}
$$
|
| 90 |
+
|
| 91 |
+
[Figure 4 plot: curves of the regularized probability for ${\widehat{p}}_{\theta }\left( {x}_{i}\right) = 0.5$ with $\lambda \in \{0.5, 1.0, 1.5, 2.5, 3.0\}$.]
|
| 92 |
+
|
| 93 |
+
Figure 4: The property of the regularization function. The $\mathrm{X}$ and $\mathrm{Y}$ axes are the agreement indicator ${\widetilde{y}}_{i}$ and the regularized probability ${\widetilde{p}}_{\theta }\left( {x}_{i}\right)$, respectively. ${\widetilde{p}}_{\theta }\left( {x}_{i}\right)$ is regularized toward the class for which ${\widetilde{y}}_{i}$ is high, with $\lambda$ controlling the scale.
|
| 94 |
+
|
| 95 |
+
where ${\widetilde{p}}_{\theta }\left( {x}_{i}\right)$ is the regularized probability toward the positive class of the current binary task, and $\lambda$ is a hyperparameter controlling the scale at which the original predicted probability ${\widehat{p}}_{\theta }\left( {x}_{i}\right)$ changes toward ${\widetilde{p}}_{\theta }\left( {x}_{i}\right)$ as the agreement indicator increases/decreases. Fig. 4 shows the property of the function: for an original predicted probability ${\widehat{p}}_{\theta }\left( {x}_{i}\right) = {0.5}$, a larger $\lambda$ augments the effect of the learned agreement indicator ${\widetilde{y}}_{i}$, so that the output ${\widetilde{p}}_{\theta }\left( {x}_{i}\right)$ is regularized toward the class that annotators (dis)agree on more strongly; when ${\widetilde{y}}_{i}$ is at 0.5, where annotators are unable to reach an above-chance opinion about the task, the regularized probability stays unchanged with ${\widetilde{p}}_{\theta }\left( {x}_{i}\right) = {\widehat{p}}_{\theta }\left( {x}_{i}\right)$.
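Eq. (4) is a simple element-wise operation; a sketch assuming PyTorch tensors of per-sample probabilities and agreement indicators is given below.

```python
import torch

def regularize_with_agreement(p_hat: torch.Tensor,
                              y_tilde: torch.Tensor,
                              lam: float = 1.0) -> torch.Tensor:
    """Regularized positive-class probability of Eq. (4).

    When the agreement indicator y_tilde equals 0.5 the prediction is left
    unchanged; as y_tilde moves toward 0 or 1, the probability is pushed
    toward the corresponding class, with lam controlling the scale.
    """
    pos = p_hat * torch.exp(lam * (y_tilde - 0.5))
    neg = (1.0 - p_hat) * torch.exp(lam * (0.5 - y_tilde))
    return pos / (pos + neg)
```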
|
| 96 |
+
|
| 97 |
+
§ 3.3 COMBATING IMBALANCES IN LOGARITHMIC LOSS
|
| 98 |
+
|
| 99 |
+
In this subsection, we first alleviate the influence of class imbalance present in the annotation of each annotator by refining the vanilla cross-entropy loss. We further explore the use of an agreement-oriented loss that may naturally avoid such imbalances during training.
|
| 100 |
+
|
| 101 |
+
§ 3.3.1 ANNOTATION BALANCING FOR EACH ANNOTATOR.
|
| 102 |
+
|
| 103 |
+
For the classifier stream, given the regularized probability ${\widetilde{p}}_{\theta }\left( {x}_{i}\right)$ at the current input sample ${x}_{i}$, the classifier is updated using the sum of the losses computed against the available annotation ${r}_{i}^{j}$ from each annotator. Due to the nature of the task (i.e., positive samples are sparse), the annotation from each annotator could be noticeably imbalanced. To address this problem, we use the Focal Loss (FL) Lin et al. (2017), written as follows.
|
| 104 |
+
|
| 105 |
+
$$
|
| 106 |
+
{\mathcal{L}}_{\mathrm{{FL}}}\left( {p,g}\right) = - {\left| g - p\right| }^{\gamma }\left( {g\log \left( p\right) + \left( {1 - g}\right) \log \left( {1 - p}\right) }\right) , \tag{5}
|
| 107 |
+
$$
|
| 108 |
+
|
| 109 |
+
where $p$ is the predicted probability of the model toward the positive class at the current data sample, $g \in \{ 0,1\}$ is the binary ground truth, and $\gamma \geq 0$ is the focusing parameter used to control the threshold for judging which samples are well-classified. A larger $\gamma$ leads to a lower threshold, so that more samples are treated as well-classified and down-weighted. In our scenario, the FL function is integrated into the following loss function to compute the average loss over all annotators.
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
{\mathcal{L}}_{\theta }\left( {{\widetilde{\mathbf{P}}}_{\theta },\mathbf{R}}\right) = \frac{1}{J}\mathop{\sum }\limits_{{j = 1}}^{J}\frac{1}{{\dot{N}}^{j}}\mathop{\sum }\limits_{{i = 1}}^{{\dot{N}}^{j}}{\mathcal{L}}_{FL}\left( {{\widetilde{p}}_{\theta }\left( {x}_{i}\right) ,{r}_{i}^{j}}\right) , \tag{6}
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
where ${\dot{N}}^{j} \leq N$ is the number of samples labelled by the $j$-th annotator, ${\widetilde{\mathbf{P}}}_{\theta } = {\left\{ {\widetilde{p}}_{\theta }\left( {x}_{i}\right) \right\} }_{i = 1,\ldots ,N}$, and $\mathbf{R} = {\left\{ {r}_{i}^{j}\right\} }_{i = 1,\ldots ,{\dot{N}}^{j}}^{j = 1,\ldots ,J}$; ${r}_{i}^{j}$ is null if the $j$-th annotator did not annotate the $i$-th sample, in which case the corresponding loss term is not computed.
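A sketch of Eqs. (5) and (6), averaging the focal loss over annotators while skipping missing annotations; encoding missing labels as NaN and the function names are our assumptions.

```python
import torch

def focal_loss(p: torch.Tensor, g: torch.Tensor, gamma: float) -> torch.Tensor:
    """Per-sample focal loss of Eq. (5) for binary targets g in {0, 1}."""
    eps = 1e-7
    p = p.clamp(eps, 1.0 - eps)
    return -(g - p).abs().pow(gamma) * (g * p.log() + (1.0 - g) * (1.0 - p).log())

def multi_annotator_loss(p_reg: torch.Tensor,
                         annotations: torch.Tensor,
                         gammas: list) -> torch.Tensor:
    """Average of the per-annotator focal losses, as in Eq. (6).

    p_reg:       (N,) regularized probabilities from Eq. (4).
    annotations: (N, J) float tensor with entries in {0, 1}, or NaN where the
                 j-th annotator did not label the i-th sample (an assumption
                 about how missing annotations are encoded).
    gammas:      list of J focusing parameters, one per annotator (Eq. 7).
    """
    total = 0.0
    J = annotations.shape[1]
    for j in range(J):
        r_j = annotations[:, j]
        labelled = ~torch.isnan(r_j)       # samples labelled by annotator j
        if labelled.any():
            total = total + focal_loss(p_reg[labelled], r_j[labelled], gammas[j]).mean()
    return total / J
```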
|
| 116 |
+
|
| 117 |
+
Additionally, searching for $\gamma$ manually for each annotator could be cumbersome, especially for a dataset labeled by numerous annotators. In this paper, we compute $\gamma$ from the number of samples annotated by each annotator per class of each binary task. The hypothesis is that, for annotations biased more toward one class, $\gamma$ shall be set larger, since a larger number of samples tends to be well-classified. We leverage the effective number of samples Cui et al. (2019) to compute each ${\gamma }_{j}$ as follows.
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
{\gamma }_{j} = \frac{\left( 1 - {\beta }^{{n}_{k}^{j}}\right) }{\left( 1 - {\beta }^{\left( {\dot{N}}^{j} - {n}_{k}^{j}\right) }\right) }, \tag{7}
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
where ${n}_{k}^{j}$ is the number of samples of the majority class $k$ in the current binary task annotated by annotator $j$, and $\beta = \frac{{\dot{N}}^{j} - 1}{{\dot{N}}^{j}}$.
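Eq. (7) reduces to a small helper that maps each annotator's annotation counts to its focusing parameter; a sketch, assuming both classes appear in the annotator's labels:

```python
def focusing_parameter(n_majority: int, n_total: int) -> float:
    """Per-annotator focusing parameter gamma_j of Eq. (7).

    n_majority: number of samples of the majority class annotated by annotator j.
    n_total:    total number of samples annotated by annotator j.
    Uses beta = (n_total - 1) / n_total, following the effective number of samples.
    Assumes 0 < n_majority < n_total so both powers of beta differ from 1.
    """
    beta = (n_total - 1) / n_total
    return (1.0 - beta ** n_majority) / (1.0 - beta ** (n_total - n_majority))
```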
|
| 124 |
+
|
| 125 |
+
§ 3.3.2 AGREEMENT-ORIENTED LOSS.
|
| 126 |
+
|
| 127 |
+
In de La Torre et al. (2018), a Weighted Kappa Loss (WKL) was used to compute an agreement-oriented loss between the output of a model and the annotation of an annotator. Since it is developed from Cohen's Kappa, this loss may guide the model to pay attention to the overall agreement level instead of local mistakes. Thus, we may be able to avoid the cumbersome work of alleviating the class imbalances as above. This loss function can be written as follows.
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
{\mathcal{L}}_{\mathrm{{WKL}}} = \log \left( {1 - \kappa }\right) . \tag{8}
|
| 131 |
+
$$
|
| 132 |
+
|
| 133 |
+
The linear weighted kappa $\kappa$ Cohen (1968) is used in this equation, where the penalization weight is proportional to the distance between the predicted class and the annotated class. We replace the FL loss written in Equation 5 with this term to compute the weighted kappa loss across samples and annotators using Equation 6. The value range of this loss is $( - \infty ,\log 2\rbrack$, thus a Sigmoid function is applied before we sum the loss from each annotator. We compare this WKL loss function to the logarithmic one in our experiments.
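For intuition, the following is a minimal binary specialization of a linear weighted kappa loss, using $\log(1-\kappa) = \log(O/E)$ with a soft observed disagreement $O$ and an expected disagreement $E$ computed from the marginals; the original formulation in de La Torre et al. (2018) covers the general multi-class ordinal case, so this sketch is only indicative.

```python
import torch

def weighted_kappa_loss(p: torch.Tensor, t: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Binary linear-weighted kappa loss sketch: log(1 - kappa) = log(O / E).

    p: predicted positive-class probabilities, shape (N,).
    t: binary annotations in {0, 1}, shape (N,).
    """
    p = p.clamp(eps, 1.0 - eps)
    t = t.float()
    observed = (t * (1.0 - p) + (1.0 - t) * p).mean()      # soft observed disagreement O
    pos_true, pos_pred = t.mean(), p.mean()
    expected = pos_true * (1.0 - pos_pred) + (1.0 - pos_true) * pos_pred  # E from marginals
    return torch.log(observed / (expected + eps))
```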
|
| 134 |
+
|
| 135 |
+
§ 4 EXPERIMENTS
|
| 136 |
+
|
| 137 |
+
In this section, we evaluate our proposed method with data annotated by multiple human experts, where the objective ground truth is ambiguous to define. Please refer to the Appendix for dataset descriptions, implementation details, and the computation of agreement ground truth.
|
| 138 |
+
|
| 139 |
+
§ 4.1 METRIC
|
| 140 |
+
|
| 141 |
+
Following Lovchinsky et al. (2019), we evaluate the performance of a model by using the agreement ratio as follows.
|
| 142 |
+
|
| 143 |
+
$$
|
| 144 |
+
\Delta = \frac{{\mathrm{C}}_{J}^{2}}{J}\frac{\mathop{\sum }\limits_{{j = 1}}^{J}\operatorname{Sigmoid}\left( {\kappa \left( {{\widetilde{\mathbf{P}}}_{\theta },{\mathbf{R}}^{j}}\right) }\right) }{\mathop{\sum }\limits_{{j,{j}^{\prime } = 1\& j \neq {j}^{\prime }}}^{J}\operatorname{Sigmoid}\left( {\kappa \left( {{\mathbf{R}}^{j},{\mathbf{R}}^{{j}^{\prime }}}\right) }\right) }, \tag{9}
|
| 145 |
+
$$
|
| 146 |
+
|
| 147 |
+
where the numerator computes the average agreement over the pairs formed by the model's predictions and each annotator's annotations, and the denominator computes the average agreement between annotators, with ${\mathrm{C}}_{J}^{2}$ denoting the number of distinct annotator pairs. $\kappa$ is Cohen's Kappa. The agreement ratio $\Delta$ is larger than 1 when the model performs better than the average annotator Lovchinsky et al. (2019).
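The metric can be computed as a ratio of averaged sigmoid-transformed kappas, as described above; a sketch using scikit-learn's `cohen_kappa_score`, assuming fully observed binary annotations for simplicity.

```python
import itertools
import numpy as np
from scipy.special import expit  # sigmoid
from sklearn.metrics import cohen_kappa_score

def agreement_ratio(preds, annotations):
    """Agreement ratio Delta of Eq. (9).

    preds:       binarized model predictions, shape (N,).
    annotations: list of J binary annotation arrays, each of shape (N,).
    Returns the average model-annotator agreement divided by the average
    annotator-annotator agreement, both passed through a sigmoid.
    """
    J = len(annotations)
    model_term = sum(expit(cohen_kappa_score(preds, r)) for r in annotations) / J
    pair_terms = [expit(cohen_kappa_score(r1, r2))
                  for r1, r2 in itertools.combinations(annotations, 2)]
    annotator_term = sum(pair_terms) / len(pair_terms)  # average over C(J, 2) pairs
    return model_term / annotator_term
```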
|
| 148 |
+
|
| 149 |
+
§ 4.2 RESULTS
|
| 150 |
+
|
| 151 |
+
§ 4.2.1 AGREEMENT-ORIENTED LOSS VS. LOGARITHMIC LOSS.
|
| 152 |
+
|
| 153 |
+
As shown in the first section of Table 1, models trained with the majority-voted ground truth produce agreement ratios of 1.0417 and 0.7616 with the logarithmic loss and annotation balancing (in this case, class balancing for the single majority-voted ground truth) on the EmoPain and MURA datasets, respectively. However, as shown in the second section of Table 1, directly exposing the model to all the annotations is harmful, with performances of 0.9733 and 0.7564, lower than the majority-voting ones, achieved on the two datasets using the logarithmic loss alone. By using the balancing method during training, the performance on the EmoPain dataset is improved to 1.0189 but remains lower than the majority-voting one, while a performance of 0.7665, better than the majority-voting one, is achieved on
|
| 154 |
+
|
| 155 |
+
Table 1: The ablation experiment on the EmoPain and MURA datasets. Majority-voting refers to the method using the majority-voted ground truth for training. CE and WKL refer to the logarithmic and weighted kappa loss functions used in the classifier stream, respectively. Linear and Distributional refer to the agreement learning stream with linear regression and general agreement distribution, respectively. The best performance in each section is marked in bold per dataset.
|
| 156 |
+
|
| 157 |
+
| Framework/Annotator | CE | WKL | Annotation Balance | Linear | Distributional | Δ↑ EmoPain | Δ↑ MURA |
|---|---|---|---|---|---|---|---|
| Majority Voting | ✓ | | ✓ | | | 1.0417 | 0.7616 |
| Majority Voting | | ✓ | | | | **1.0452** | **0.7638** |
| Learn-from-all | ✓ | | | | | 0.9733 | 0.7564 |
| Learn-from-all | ✓ | | ✓ | | | 1.0189 | 0.7665 |
| Learn-from-all | | ✓ | | | | **1.0407** | **0.7751** |
| Learn2Agree (Ours) | ✓ | | ✓ | ✓ | | 1.0477 | 0.7727 |
| Learn2Agree (Ours) | ✓ | | ✓ | | ✓ | 1.0508 | 0.7796 |
| Learn2Agree (Ours) | | ✓ | | ✓ | | 1.0471 | 0.7768 |
| Learn2Agree (Ours) | | ✓ | | | ✓ | **1.0547** | **0.7801** |
| Annotator 1 | | | | | | 0.9613 | **1.0679** |
| Annotator 2 | | | | | | 1.0231 | 0.9984 |
| Annotator 3 | | | | | | **1.0447** | 0.9743 |
| Annotator 4 | | | | | | 0.9732 | 0.9627 |
|
| 201 |
+
|
| 202 |
+
the MURA dataset. These results show the importance of balancing for the modeling with logarithmic loss in a learn-from-all paradigm. With the WKL loss, performances of the model in majority-voting (1.0452/0.7638) and learn-from-all (1.0407/0.7751) paradigms are further improved. This shows the advantage of the WKL loss for improving the fitting with multiple annotators, which also alleviates the need to use class balancing strategies.
|
| 203 |
+
|
| 204 |
+
§ 4.2.2 THE IMPACT OF OUR LEARN2AGREE METHOD.
|
| 205 |
+
|
| 206 |
+
For both datasets, as shown in the third section of Table 1, our proposed Learn2Agree method with the general agreement distribution achieves the best overall performances of 1.0547 and 0.7801 on the two datasets, respectively. For the agreement learning stream, the combination of the general agreement distribution and the AR loss shows better performance than its variant using linear regression and RMSE on both datasets (1.0477 with the logarithmic loss and 0.7768 with the WKL loss). Such results could be due to the fact that the agreement indicator ${\widetilde{y}}_{i}$ produced by the linear regression is directly the estimated agreement value ${\widehat{y}}_{i}$, which can be largely affected by errors made during agreement learning. In contrast, with the general agreement distribution, the information passed to the classifier is primarily the shape and skewness of the distribution $G\left( {y}_{i}\right)$. Thus, it is more tolerant to errors (if any) made by the weighted sum used for regression in agreement learning.
|
| 207 |
+
|
| 208 |
+
§ 4.2.3 COMPARING WITH ANNOTATORS.
|
| 209 |
+
|
| 210 |
+
In the last section of Table 1, the annotation of each annotator is used to compute the agreement ratio against the other annotators (Equation 9).
|
| 211 |
+
|
| 212 |
+
For the EmoPain dataset, the best methods in the majority-voting (1.0452) and learn-from-all (1.0407) paradigms show very competitive, if not better, performances compared with annotator 3 (1.0447), who has the best agreement level with all the other annotators. Thereon, the proposed Learn2Agree method improves the performance to an even higher agreement ratio of 1.0547 against all the annotators. This performance suggests that, when adopted in real life, the model is able to analyze the protective behavior of people with CP at a level that is highly in agreement with the human experts.
|
| 213 |
+
|
| 214 |
+
However, for the MURA dataset, the best performance achieved so far by the Learn2Agree method, 0.7801, is still lower than that of annotator 1. This suggests that, under the current task setting, the model may make around 22% more errors than the human experts. One reason could be the challenge of the task itself: as shown in Rajpurkar et al. (2017), the same backbone only achieved a performance similar to, if not better than, the other radiologists for one (wrist) out of the seven upper extremity types. In this paper, the testing set comprises all the extremity types, which makes the experiment even more challenging. Future work may explore better backbones to tackle this.
|
| 215 |
+
|
| 216 |
+
Table 2: The experiment on the EmoPain dataset for analyzing the impact of Agreement Regression (AR) loss on agreement learning.
|
| 217 |
+
|
| 218 |
+
| Classifier Loss | Agreement Learning Type | Agreement Learning Loss | Δ↑ |
|---|---|---|---|
| CE | Linear | RMSE | 1.0477 |
| CE | Linear | AR | 0.9976 |
| CE | Distributional | RMSE | 1.0289 |
| CE | Distributional | AR | 1.0508 |
| WKL | Linear | RMSE | 1.0454 |
| WKL | Linear | AR | 1.035 |
| WKL | Distributional | RMSE | 1.0454 |
| WKL | Distributional | AR | 1.0482 |
|
| 238 |
+
|
| 239 |
+
Table 3: The experiment on the MURA dataset for analyzing the impact of Agreement Regression (AR) loss on agreement learning.
|
| 240 |
+
|
| 241 |
+
| Classifier Loss | Agreement Learning Type | Agreement Learning Loss | Δ↑ |
|---|---|---|---|
| CE | Linear | RMSE | 0.7727 |
| CE | Linear | AR | 0.7698 |
| CE | Distributional | RMSE | 0.7729 |
| CE | Distributional | AR | 0.7796 |
| WKL | Linear | RMSE | 0.7707 |
| WKL | Linear | AR | 0.7674 |
| WKL | Distributional | RMSE | 0.7724 |
| WKL | Distributional | AR | 0.7773 |
|
| 258 |
+
|
| 259 |
+
§ 4.2.4 THE IMPACT OF AGREEMENT REGRESSION LOSS.
|
| 260 |
+
|
| 261 |
+
The proposed AR loss can be used for both the distributional and the linear agreement learning streams. However, as seen in Table 2 and Table 3, the performance of linear agreement learning is better with the RMSE loss than with the AR loss. The design of the AR loss assumes that the loss computed for a given quantile is in accord with its counterpart agreement level. Thus, such results may be due to the gap between the quantile of the underlying distribution of the linear regression and the targeted agreement level. Consequently, the estimated agreement indicator passed to the classifier when using the AR loss may not reflect the actual agreement level. Instead, for linear regression, a vanilla loss like RMSE ensures that the regressed value fits toward the actual agreement level.
|
| 262 |
+
|
| 263 |
+
By contrast, the proposed general agreement distribution directly adopts the range of agreement levels as the distribution values, which helps to narrow such a gap when the AR loss is used. Therein, the agreement indicator is extracted from the shape and skewness of the distribution (the probabilities of all distribution values), which can better reflect the agreement level when updated with the AR loss. As shown, the combination of distributional agreement learning and the AR loss achieves the best performance on each dataset.
|
| 264 |
+
|
| 265 |
+
§ 5 CONCLUSION
|
| 266 |
+
|
| 267 |
+
In this paper, we targeted the scenario of learning with multiple annotators where the ground truth is ambiguous to define. Two medical datasets in this scenario were adopted for the evaluation. We showed that backbones developed with majority-voted ground truth or multiple annotations can be easily enhanced to achieve better agreement levels with annotators, by leveraging the underlying agreement information stored in the annotations. For agreement learning, our experiments demonstrate the advantage of learning with the proposed general agreement distribution and agreement regression loss, in comparison with other possible variants. Future work may extend this paper to prove its efficiency on datasets with multiple classes, as only binary tasks were considered here. Additionally, the learning of annotators' expertise seen in Tanno et al. (2019); Zhang et al. (2020); Ji et al. (2021) could be leveraged to weight the agreement computation and learning proposed in our method, for cases where annotators are treated differently.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/J4QatK02Qc/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,406 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# CONFORMAL PREDICTION MASKS: VISUALIZING UNCERTAINTY IN MEDICAL IMAGING
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
Estimating uncertainty in image-to-image recovery networks is an important task, particularly as such networks are being increasingly deployed in the biological and medical imaging realms. A recent conformal prediction technique derives per-pixel uncertainty intervals, guaranteed to contain the true value with a user-specified probability. Yet, these intervals are hard to comprehend and fail to express uncertainty at a conceptual level. In this paper, we introduce a new approach for uncertainty quantification and visualization, based on masking. The proposed technique produces interpretable image masks with rigorous statistical guarantees for image regression problems. Given an image recovery model, our approach computes a mask such that a desired divergence between the masked reconstructed image and the masked true image is guaranteed to be less than a specified risk level, with high probability. The mask thus identifies reliable regions of the predicted image while highlighting areas of high uncertainty. Our approach is agnostic to the underlying recovery model and the true unknown data distribution. We evaluate the proposed approach on image colorization, image completion, and super-resolution tasks, attaining high quality performance on each.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Deep Learning has been very successful in many applications, spanning computer vision, speech recognition, natural language processing, and beyond. For many years, researchers were mainly content in developing new techniques that achieve unprecedented accuracy, without concerns for understanding the uncertainty implicit in such models. More recently, however, there has been a concerted effort within the research community to quantify the uncertainty of deep models.
|
| 14 |
+
|
| 15 |
+
This paper addresses the problem of quantifying and visualizing uncertainty in the realm of image-to-image tasks. Such problems include super-resolution, deblurring, colorization, and image completion, amongst others. Assessing uncertainty is important generally, but is particularly so in application domains such as biological and medical imaging, in which fidelity to the ground truth is paramount. If there is an area of the reconstructed image where such fidelity is unlikely or unreliable due to high uncertainty, this is crucial to convey.
|
| 16 |
+
|
| 17 |
+
Our approach to uncertainty estimation is based on masking. Specifically, we are interested in computing a mask such that the uncertain regions in the image are masked out. Based on conformal prediction (Angelopoulos & Bates, 2021a), we derive an algorithm that can apply to any existing image-recovery model and produce an uncertainty mask satisfying the following criterion: the divergence between the masked reconstructed image and the masked true image is guaranteed to be less than a specified level, with high probability. The resultant mask highlights areas of high uncertainty in the recovered image while trustworthy regions remain intact. Our distribution-free method, illustrated in Figure 1, is agnostic to the prediction model and to the choice of divergence function, which should be dictated by the application. Our contributions are as follows:
|
| 18 |
+
|
| 19 |
+
1. We introduce the notion of conformal prediction masks: a distribution-free approach to uncertainty quantification in image-to-image regression. We derive masks which visually convey regions of uncertainty while rigorously providing strong statistical guarantees for any regression model, image dataset and desired divergence measure.
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
|
| 23 |
+
Figure 1: High-level overview. Given image measurements $X$ (e.g gray-scale image) of a ground-truth image $Y$ , and a predicted image $\widehat{f}\left( X\right)$ (e.g. colorized image), the mask model outputs an uncertainty mask $\mathcal{M}\left( X\right)$ such that the divergence between the masked ground-truth and the masked prediction is below a chosen risk level with high probability.
|
| 24 |
+
|
| 25 |
+
2. We develop a practical training algorithm for computing these masks which only requires triplets of input (degraded), reconstructed and true images. The resultant mask model is trained once for all possible risk levels and is calibrated via a simple process to meet the required guarantees given a user-specified risk level and confidence probability.
|
| 26 |
+
|
| 27 |
+
3. We demonstrate the power of the method on image colorization, image completion and super-resolution tasks. By assessing our performance both visually and quantitatively, we show the resultant masks attain the probabilistic guarantee and provide interpretable uncertainty visualization without over-masking the recovered images, in contrast to competing techniques.
|
| 28 |
+
|
| 29 |
+
## 2 RELATED WORK
|
| 30 |
+
|
| 31 |
+
Bayesian Uncertainty Quantification The Bayesian paradigm defines uncertainty by assuming a distribution over the model parameters and/or activation functions. The most prevalent approach is Bayesian neural networks (MacKay, 1992; Valentin Jospin et al., 2020; Izmailov et al., 2020), which are stochastic models trained using Bayesian inference. Yet, as the number of model parameters has grown rapidly, computing the exact posteriors has become computationally intractable. This shortcoming has led to the development of approximation methods such as Monte Carlo dropout (Gal & Ghahramani, 2016; Gal et al., 2017a), stochastic gradient Markov chain Monte Carlo (Salimans et al., 2015; Chen et al., 2014), Laplacian approximations (Ritter et al., 2018) and variational inference (Blundell et al., 2015; Louizos & Welling, 2017; Posch et al., 2019). Alternative Bayesian techniques include deep Gaussian processes (Damianou & Lawrence, 2013), deep ensembles (Ashukha et al., 2020; Hu et al., 2019), and deep Bayesian active learning (Gal et al., 2017b), to name just a few. A comprehensive review on Bayesian uncertainty quantification is given in Abdar et al. (2021).
|
| 32 |
+
|
| 33 |
+
Distribution-Free Methods and Conformal Prediction Unlike Bayesian methods, the frequentist approach assumes the true model parameters are fixed with no underlying distribution. Examples of such distribution-free techniques are model ensembles (Lakshminarayanan et al., 2017; Pearce et al., 2018), bootstrap (Kim et al., 2020; Alaa & Van Der Schaar, 2020), interval regression (Pearce et al., 2018; Kivaranovic et al., 2020; Wu et al., 2021) and quantile regression (Gasthaus et al., 2019; Romano et al., 2019). An important distribution-free technique which is most relevant to our work is conformal prediction (Angelopoulos & Bates, 2021b; Shafer & Vovk, 2008). This approach relies on a labeled calibration dataset to convert point estimations into prediction regions. Conformal methods can be used with any estimator, require no retraining, are computationally efficient and provide coverage guarantees in finite samples (Lei et al., 2018). Recent development includes conformalized quantile regression (Romano et al., 2019; Sesia & Candès, 2020; Angelopoulos et al., 2022b), conformal risk control (Angelopoulos et al., 2022a; Bates et al., 2021; Angelopoulos et al., 2021) and semantic uncertainty intervals for generative adversarial networks (Sankaranarayanan et al., 2022). Sun (2022) provides an extensive survey on distribution-free conformal prediction methods.
|
| 34 |
+
|
| 35 |
+
## 3 BACKGROUND: CONFORMAL PREDICTION IN IMAGE REGRESSION
|
| 36 |
+
|
| 37 |
+
We present a brief overview of the work in (Angelopoulos et al., 2022b), which stands out in the realm of conformal prediction for image-to-image problems, and serves as the basis of our work. Let $Y \in \mathcal{Y} = {\mathbb{R}}^{N}$ be a ground-truth image in vector form, an image $X \in \mathcal{X} = {\mathbb{R}}^{M}$ be its measurements, and $\widehat{f}\left( X\right) \in \mathcal{Y}$ an estimator of $Y$ . Conformal prediction constructs uncertainty intervals
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
\mathcal{T}{\left( X\right) }_{\left\lbrack i\right\rbrack } = \left\lbrack {\widehat{f}{\left( X\right) }_{\left\lbrack i\right\rbrack } - \widehat{l}{\left( X\right) }_{\left\lbrack i\right\rbrack },\widehat{f}{\left( X\right) }_{\left\lbrack i\right\rbrack } + \widehat{u}{\left( X\right) }_{\left\lbrack i\right\rbrack }}\right\rbrack ,\;i = 0,\ldots , N - 1, \tag{1}
|
| 41 |
+
$$
|
| 42 |
+
|
| 43 |
+
where $\widehat{l}{\left( X\right) }_{\left\lbrack i\right\rbrack } \geq 0$ and $\widehat{u}{\left( X\right) }_{\left\lbrack i\right\rbrack } \geq 0$ represent the uncertainty in lower and upper directions respectively. Given heuristic uncertainty values $\widetilde{l}$ and $\widetilde{u}$ , the uncertainty intervals are calibrated using a calibration dataset $\mathcal{C} \triangleq {\left\{ {X}_{k},{Y}_{k}\right\} }_{k = 1}^{K}$ to guarantee they contain at least a fraction $\alpha$ of the ground-truth pixel values with probability $1 - \delta$ . Here $\alpha \in \left( {0,1}\right)$ and $\delta \in \left( {0,1}\right)$ are user-specified risk and error levels respectively. Formally, the per-pixel uncertainty intervals are defined as follows.
|
| 44 |
+
|
| 45 |
+
Definition 1. Risk-Controlling Prediction Set (RCPS). A random set-valued function $\mathcal{T} : \mathcal{X} \rightarrow$ ${\mathcal{Y}}^{\prime } = {2}^{\mathcal{Y}}$ is an $\left( {\alpha ,\delta }\right)$ -Risk-Controlling Prediction Set if
|
| 46 |
+
|
| 47 |
+
$$
|
| 48 |
+
\mathbb{P}\left( {\mathcal{R}\left( \mathcal{T}\right) \leq 1 - \alpha }\right) \geq 1 - \delta
|
| 49 |
+
$$
|
| 50 |
+
|
| 51 |
+
Here the risk is $\mathcal{R}\left( \mathcal{T}\right) \triangleq 1 - \mathbb{E}\left\lbrack {\frac{1}{N}\left| \left\{ {i : {Y}_{\left\lbrack i\right\rbrack }^{\text{test }} \in \mathcal{T}{\left( {X}^{\text{test }}\right) }_{\left\lbrack i\right\rbrack }}\right\} \right| }\right\rbrack$ where the expectation is over a new test point $\left( {{X}^{\text{test }},{Y}^{\text{test }}}\right)$ , while the outer probability is over the calibration data.
|
| 52 |
+
|
| 53 |
+
The procedure for constructing RCPS consists of two stages. First, a machine learning system (e.g. neural network) is trained to output a point prediction $\widehat{f}$ , and heuristic lower and upper interval widths $\left( {\widetilde{l},\widetilde{u}}\right)$ . The second phase utilizes the calibration set to calibrate $\left( {\widetilde{l},\widetilde{u}}\right)$ so they contain the right fraction of ground truth pixels. The final intervals are those in (1) with the calibrated widths $\left( {\widehat{l},\widehat{u}}\right)$ .
|
| 54 |
+
|
| 55 |
+
Conformal prediction provides per-pixel uncertainty intervals with statistical guarantees in image-to-image regression problems. Yet, the per-pixel prediction sets may be difficult to comprehend on their own. To remedy this, the uncertainty intervals are visualized by passing the pixel-wise interval lengths through a colormap, where small sets render a pixel blue and large sets render it red. Thus, the redder a region is, the greater the uncertainty, and the bluer it is, the greater the confidence. The resultant uncertainty map, however, is not directly endowed with rigorous guarantees. This raises the following question: can we directly produce an uncertainty map with strong statistical guarantees?
|
| 56 |
+
|
| 57 |
+
## 4 CONFORMAL PREDICTION MASKS
|
| 58 |
+
|
| 59 |
+
Inspired by the above, we construct uncertainty masks $\mathcal{M}\left( X\right) = M\left( {X,\widehat{f}\left( X\right) }\right) \in {\left\lbrack 0,1\right\rbrack }^{N}$ such that
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
\mathbb{E}\left\lbrack {\mathcal{M}{\left( {X}^{\text{test }}\right) }_{\left\lbrack i\right\rbrack } \cdot \left| {\widehat{f}{\left( {X}^{\text{test }}\right) }_{\left\lbrack i\right\rbrack } - {Y}_{\left\lbrack i\right\rbrack }^{\text{test }}}\right| }\right\rbrack \leq {\beta }_{\left\lbrack i\right\rbrack }, \tag{2}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
where the expectation is over a new test point, and ${\beta }_{\left\lbrack i\right\rbrack } \in {\mathbb{R}}^{ + }$ is a user-specified risk level. Define ${\widehat{f}}_{\mathcal{M}}\left( X\right) \triangleq \mathcal{M}\left( X\right) \odot \widehat{f}\left( X\right)$ and ${Y}_{\mathcal{M}} \triangleq \mathcal{M}\left( X\right) \odot Y$ where $\odot$ represents a point-wise (Hadamard) product. Then, note that satisfying (2) is equivalent to creating the following uncertainty intervals
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
{\mathcal{T}}_{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } = \left\lbrack {{\widehat{f}}_{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } - {\beta }_{\left\lbrack i\right\rbrack },{\widehat{f}}_{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } + {\beta }_{\left\lbrack i\right\rbrack }}\right\rbrack , \tag{3}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
which satisfies
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
{Y}_{\mathcal{M}\left\lbrack i\right\rbrack }^{\text{test }} \in {\mathcal{T}}_{\mathcal{M}}{\left( {X}^{\text{test }}\right) }_{\left\lbrack i\right\rbrack }. \tag{4}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
We remark on a few differences between (3) and (1): in (1) the lower and upper per-pixel uncertainty widths $\left( {\widehat{l},\widehat{u}}\right)$ depend on $X$ and are calibrated, while in (3) $\widehat{l} = \widehat{u} \equiv \beta$ are user-specified and independent of $X$. Furthermore, the uncertainty parameters which undergo calibration are ${\left\{ \mathcal{M}{\left( X\right) }_{\left\lbrack i\right\rbrack }\right\} }_{i = 1}^{N}$.
|
| 78 |
+
|
| 79 |
+
One may notice that the above formulation exhibits a major limitation, as each value of the prediction mask is defined independently of the other values. Hence, it requires the user to specify a risk level for each pixel, which is cumbersome, especially in high dimensions. More importantly, setting each entry of the mask independently may fail to capture the dependency between pixels and, thus, fail to express uncertainty at a conceptual level. To overcome this, we redefine our uncertainty masks to ensure that, with probability at least $1 - \delta$, it holds that $\mathbb{E}\left\lbrack {\begin{Vmatrix}{\widehat{f}}_{\mathcal{M}}\left( {X}^{\text{test }}\right) - {Y}_{\mathcal{M}}^{\text{test }}\end{Vmatrix}}_{1}\right\rbrack \leq \alpha$, where $\alpha \in {\mathbb{R}}^{ + }$ is a global risk level and $\parallel Z{\parallel }_{1} \triangleq \mathop{\sum }\limits_{{i = 1}}^{N}\left| {Z}_{\left\lbrack i\right\rbrack }\right|$ is the L1 norm of an arbitrary image $Z$. Furthermore, the latter formulation can be generalized to any divergence measure $d : \mathcal{Y} \times \mathcal{Y} \rightarrow {\mathbb{R}}^{ + }$ such that
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
\mathbb{E}\left\lbrack {d\left( {{\widehat{f}}_{\mathcal{M}}\left( {X}^{\text{test }}\right) ,{Y}_{\mathcal{M}}^{\text{test }}}\right) }\right\rbrack \leq \alpha . \tag{5}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
Note we avoid trivial solutions, e.g. a zero-mask, which satisfy (5) yet provide no useful information. Thus, we seek solutions that employ the least masking required to meet (5), with high probability.
|
| 86 |
+
|
| 87 |
+
The above formulation enjoys several benefits. First, the current definition of the mask captures pixel dependency. Thus, rather than focusing on individual pixels, the resultant map masks out (or reduces) regions of high uncertainty within the predicted image to guarantee that the divergence remains below the given risk level. Second, it accepts any divergence measure, each leading to a different mask. For example, selecting $d\left( {\cdot , \cdot }\right)$ to be a distortion measure may highlight uncertainty regions around high-frequency objects (e.g., edges), while setting $d\left( {\cdot , \cdot }\right)$ to be a perceptual loss may highlight semantic factors within the image. Formally, we refer to these uncertainty masks as Risk-Controlling Prediction Masks, defined below.
|
| 88 |
+
|
| 89 |
+
Definition 2. Risk-Controlling Prediction Mask (RCPM). A random function $\mathcal{M} : \mathcal{X} \times \mathcal{Y} \rightarrow$ ${\left\lbrack 0,1\right\rbrack }^{\mathcal{Y}}$ is an $\left( {\alpha ,\delta }\right)$ -Risk-Controlling Prediction Mask if
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
\mathbb{P}\left( {\mathbb{E}\left\lbrack {\mathcal{R}\left( \mathcal{M}\right) }\right\rbrack \leq \alpha }\right) \geq 1 - \delta
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
where the risk is defined as $\mathcal{R}\left( \mathcal{M}\right) \triangleq d\left( {{\widehat{f}}_{\mathcal{M}}\left( {X}^{\text{test }}\right) ,{Y}_{\mathcal{M}}^{\text{test }}}\right)$ for a given divergence $d\left( {\cdot , \cdot }\right)$. The outer probability is over the calibration data, while the expectation is taken over a test point $\left( {{X}^{\text{test }},{Y}^{\text{test }}}\right)$.
|
| 96 |
+
|
| 97 |
+
As with RCPS, the procedure for creating an RCPM includes two main stages. First, given a predictor $\widehat{f}$, we require a heuristic notion of a non-zero uncertainty mask $\widetilde{\mathcal{M}}$. In particular, we train a neural network to output a mask given the measurements and the predicted image as inputs. Second, given a divergence measure, we use the calibration set to calibrate the heuristic mask until the divergence measure decreases below the desired risk level. The final outputs are the calibrated mask and the original prediction multiplied by the mask. The overall method is outlined in Algorithm 1. Following this outline, we now discuss the notion of initial uncertainty masks and the subsequent calibration process.
|
| 98 |
+
|
| 99 |
+
## Algorithm 1 Generating RCPM
|
| 100 |
+
|
| 101 |
+
---
|
| 102 |
+
|
| 103 |
+
1. Given a regression model $\widehat{f}$ , train a model which outputs an initial mask $\widetilde{\mathcal{M}}$ .
|
| 104 |
+
|
| 105 |
+
2. Calibrate $\widetilde{\mathcal{M}}$ using the calibration dataset to obtain $\mathcal{M}$ (e.g. using Algorithm 2).
|
| 106 |
+
|
| 107 |
+
3. Given $X$ at inference, output the risk-controlling masked prediction ${\widehat{f}}_{\mathcal{M}}\left( X\right) = \mathcal{M}\left( X\right) \odot \widehat{f}\left( X\right)$ .
|
| 108 |
+
|
| 109 |
+
---
|
| 110 |
+
|
| 111 |
+
### 4.1 INITIAL ESTIMATION OF UNCERTAINTY MASKS
|
| 112 |
+
|
| 113 |
+
Here we present two notions of uncertainty masks. The first concept, based on (Angelopoulos et al., 2022b), translates given uncertainty intervals into a heuristic mask. In the second we develop a process for training a neural network which accepts the input and the predicted images and outputs an uncertainty mask based on a given divergence between the prediction and the ground-truth image.
|
| 114 |
+
|
| 115 |
+
#### 4.1.1 INTERVALS TO MASKS
|
| 116 |
+
|
| 117 |
+
In (Angelopoulos et al., 2022b), the authors propose to build uncertainty intervals based on four heuristic notions of lower and upper interval widths $\widetilde{l}$ and $\widetilde{u}$ : (1) Regression to the magnitude of the residual; (2) one Gaussian per pixel; (3) softmax outputs; and (4) pixel-wise quantile regression. Then, we build a mask by setting the pixel-values to be inversely proportional to the interval sizes:
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
\widetilde{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } \propto {\left( {\widetilde{u}}_{\left\lbrack i\right\rbrack } - {\widetilde{l}}_{\left\lbrack i\right\rbrack }\right) }^{-1}. \tag{6}
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
Thus, the resultant mask holds high values at pixels with small intervals (high confidence) and smaller values at pixels with larger intervals, corresponding to high-uncertainty regions. However, this approach requires first creating uncertainty intervals; hence, we next introduce a technique which directly produces an uncertainty mask.
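A sketch of Eq. (6): pixel values inversely proportional to the per-pixel interval size. The rescaling to $[0,1]$ is our assumption, since the text only fixes the proportionality constant.

```python
import numpy as np

def intervals_to_mask(interval_len: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Heuristic mask of Eq. (6) from per-pixel interval lengths.

    Values are inversely proportional to the interval size and rescaled so
    the most confident pixel has value 1; eps avoids division by zero.
    """
    inv = 1.0 / (interval_len + eps)
    return inv / inv.max()
```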
|
| 124 |
+
|
| 125 |
+
#### 4.1.2 MASK REGRESSION
|
| 126 |
+
|
| 127 |
+
Here, we introduce a notion of an uncertainty mask represented by a neural network $\widetilde{\mathcal{M}}\left( {X;\theta }\right) \in$ ${\left\lbrack 0,1\right\rbrack }^{N}$ with parameters $\theta$ . The mask model is trained to output a mask which satisfies
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
\mathbb{E}\left\lbrack {d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( {X}^{\text{train }}\right) ,{Y}_{\widetilde{\mathcal{M}}}^{\text{train }}}\right) }\right\rbrack \leq \alpha . \tag{7}
|
| 131 |
+
$$
|
| 132 |
+
|
| 133 |
+
where the expectation is over the training samples $\mathcal{D} \triangleq {\left\{ {X}_{j},{Y}_{j}\right\} }_{j = 1}^{J}$ used to train $\widehat{f}$. To derive our loss function, we start by formulating the following problem for a given triplet $\left( {X, Y,\widehat{f}\left( X\right) }\right)$
|
| 134 |
+
|
| 135 |
+
$$
|
| 136 |
+
\mathop{\min }\limits_{\theta }\parallel \widetilde{\mathcal{M}}\left( {X,\widehat{f}\left( X\right) }\right) - \mathbb{1}{\parallel }_{2}^{2}\text{ subject to }d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( X\right) ,{Y}_{\widetilde{\mathcal{M}}}}\right) \leq \alpha , \tag{8}
|
| 137 |
+
$$
|
| 138 |
+
|
| 139 |
+
where $\mathbb{1}$ is an image of all ones, representing no masking. The constraint above corresponds to (7), while the objective aims to find the minimal solution, i.e., the solution that masks the image the least (avoiding trivial solutions). The Lagrangian of the problem is given by
|
| 140 |
+
|
| 141 |
+
$$
|
| 142 |
+
\mathcal{L}\left( {\theta ,\mu }\right) \triangleq \parallel \widetilde{\mathcal{M}}\left( {X,\widehat{f}\left( X\right) }\right) - \mathbb{1}{\parallel }_{2}^{2} + \mu \left( {d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( X\right) ,{Y}_{\widetilde{\mathcal{M}}}}\right) - \alpha }\right) \tag{9}
|
| 143 |
+
$$
|
| 144 |
+
|
| 145 |
+
where $\mu > 0$ is the dual variable, considered as an hyperparameter. Given $\mu$ , the optimal mask can be obtained by minimizing $\mathcal{L}\left( {\theta ,\mu }\right)$ with respect to $\theta$ , which is equivalent to minimizing
|
| 146 |
+
|
| 147 |
+
$$
|
| 148 |
+
\parallel \widetilde{\mathcal{M}}\left( {X,\widehat{f}\left( X\right) }\right) - \mathbb{1}{\parallel }_{2}^{2} + \mu \cdot d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( X\right) ,{Y}_{\widetilde{\mathcal{M}}}}\right) \tag{10}
|
| 149 |
+
$$
|
| 150 |
+
|
| 151 |
+
since $\alpha$ does not depend on $\theta$ . Thus, we train our mask model using the following loss function:
|
| 152 |
+
|
| 153 |
+
$$
|
| 154 |
+
\mathcal{L}\left( {\mathcal{D},\theta }\right) \triangleq \mathop{\sum }\limits_{{\left( {X, Y}\right) \in \mathcal{D}}}\parallel \widetilde{\mathcal{M}}\left( {X,\widehat{f}\left( X\right) }\right) - \mathbb{1}{\parallel }_{2}^{2} + \mu \cdot d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( X\right) ,{Y}_{\widetilde{\mathcal{M}}}}\right) . \tag{11}
|
| 155 |
+
$$
|
| 156 |
+
|
| 157 |
+
The proposed approach facilitates the use of any differentiable distortion measure and is agnostic to the prediction model $\widehat{f}$. Furthermore, notice that the loss function is independent of $\alpha$ and, hence, can be trained once for all values of $\alpha$. Thus, the output mask acts only as an initial uncertainty map, which may not satisfy (5) and needs to be calibrated. Following proper calibration, discussed next, our mask model attains (5) without requiring the ground truth $Y$. Lastly, this approach directly outputs uncertainty masks and is thus the focus of our work.
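A sketch of the training loss in Eq. (11) for one batch; the tensor shapes, the flattened image layout, and the example L1 divergence are assumptions, and any differentiable divergence can be substituted.

```python
import torch

def mask_regression_loss(mask, f_hat, y, divergence, mu=1.0):
    """Batch version of the mask-regression loss of Eq. (11).

    mask:       output of the mask network in [0, 1], shape (B, N).
    f_hat, y:   predicted and ground-truth images, flattened to shape (B, N).
    divergence: callable d(a, b) returning a scalar, e.g. an L1 distance.
    mu:         dual variable of Eq. (9), treated as a hyperparameter.
    """
    fidelity = ((mask - 1.0) ** 2).sum(dim=1).mean()   # ||M - 1||_2^2: keep mask close to all-ones
    masked_div = divergence(mask * f_hat, mask * y)    # d(f_M(X), Y_M)
    return fidelity + mu * masked_div

# Example divergence: mean L1 distance between masked images.
l1_divergence = lambda a, b: (a - b).abs().sum(dim=1).mean()
```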
|
| 158 |
+
|
| 159 |
+
### 4.2 MASK CALIBRATION
|
| 160 |
+
|
| 161 |
+
We consider $\widetilde{\mathcal{M}}\left( X\right)$ an initial estimate of our uncertainty mask, which needs to be calibrated to provide the guarantee in Definition 2. As the calibration process is not the focus of our work, we perform the simple calibration outlined in Algorithm 2. The core of the calibration employs a parametric function $C\left( {\cdot ;\lambda }\right)$ pixel-wise to obtain a mask ${\mathcal{M}}_{\lambda }{\left( X\right) }_{\left\lbrack i\right\rbrack } \triangleq C\left( {\widetilde{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack };\lambda }\right)$. In general, $C\left( {\cdot ;\lambda }\right)$ can be any monotonic non-decreasing function. Here we consider the following form
|
| 162 |
+
|
| 163 |
+
$$
|
| 164 |
+
{\mathcal{M}}_{\lambda }{\left( X\right) }_{\left\lbrack i\right\rbrack } \triangleq \min \left( {\frac{\lambda }{1 - \widetilde{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } + \epsilon },1}\right) \;\forall i = 1,\ldots , N, \tag{12}
|
| 165 |
+
$$
|
| 166 |
+
|
| 167 |
+
---
|
| 168 |
+
|
| 169 |
+
${}^{1}$ A small value $\epsilon$ is added to the denominator to ensure numerical stability.
|
| 170 |
+
|
| 171 |
+
---
|
| 172 |
+
|
| 173 |
+
which has been found empirically to perform well in our experiments. To set $\lambda > 0$ , we use the calibration dataset $\mathcal{C} \triangleq {\left\{ {X}_{k},{Y}_{k}\right\} }_{k = 1}^{K}$ such that for any pair $\left( {{X}_{k},{Y}_{k}}\right) \in \mathcal{C}$ we compute
|
| 174 |
+
|
| 175 |
+
$$
|
| 176 |
+
{\lambda }_{k} \triangleq \max \left\{ {\widehat{\lambda } : d\left( {{\widehat{f}}_{{\mathcal{M}}_{\widehat{\lambda }}}\left( {X}_{k}\right) ,{Y}_{k{\mathcal{M}}_{\widehat{\lambda }}}}\right) \leq \alpha }\right\} . \tag{13}
|
| 177 |
+
$$
|
| 178 |
+
|
| 179 |
+
Finally, $\lambda$ is taken to be the $1 - \delta$ quantile of ${\left\{ {\lambda }_{k}\right\} }_{k = 1}^{K}$ , i.e., the maximal value for which at least a $\delta$ fraction of the calibration set satisfies condition (5). Thus, assuming the calibration and test sets are i.i.d. samples from the same distribution, the calibrated mask is guaranteed to satisfy Definition 2.
|
| 180 |
+
|
| 181 |
+
## Algorithm 2 Calibration Process
|
| 182 |
+
|
| 183 |
+
---
|
| 184 |
+
|
| 185 |
+
Input: Calibration data $\mathcal{C} \triangleq {\left\{ {X}_{k},{Y}_{k}\right\} }_{k = 1}^{K}$ ; risk level $\alpha$ ; error rate $\delta$ ; underlying predictor $\widehat{f}$ ; heuristic mask
|
| 186 |
+
|
| 187 |
+
$\widetilde{\mathcal{M}}$ ; a monotonic non-decreasing function $C\left( {\cdot ;\lambda }\right) : \left\lbrack {0,1}\right\rbrack \rightarrow \left\lbrack {0,1}\right\rbrack$ parameterized by $\lambda > 0$ .
|
| 188 |
+
|
| 189 |
+
1. For a given $\widetilde{\lambda } > 0$ , define ${\mathcal{M}}_{\widetilde{\lambda }}{\left( X\right) }_{\left\lbrack i\right\rbrack } \triangleq C\left( {\widetilde{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack };\widetilde{\lambda }}\right)$ for all $i = 1,\ldots , N$ .
|
| 190 |
+
|
| 191 |
+
2. For each pair $\left( {{X}_{k},{Y}_{k}}\right) \in \mathcal{C}$ , set ${\lambda }_{k} \triangleq \max \left\{ {\widehat{\lambda } : d\left( {{\widehat{f}}_{{\mathcal{M}}_{\widehat{\lambda }}}\left( {X}_{k}\right) ,{Y}_{k{\mathcal{M}}_{\widehat{\lambda }}}}\right) \leq \alpha }\right\}$ .
|
| 192 |
+
|
| 193 |
+
3. Set $\lambda$ to be the $1 - \delta$ quantile of ${\left\{ {\lambda }_{k}\right\} }_{k = 1}^{K}$ .
|
| 194 |
+
|
| 195 |
+
4. Define the final mask model as ${\mathcal{M}}_{\lambda }{\left( X\right) }_{\left\lbrack i\right\rbrack } \triangleq C\left( {\widetilde{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack };\lambda }\right)$ .
|
| 196 |
+
|
| 197 |
+
Output: Calibrated uncertainty mask model ${\mathcal{M}}_{\lambda }$ .
|
| 198 |
+
|
| 199 |
+
---
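A compact sketch of this procedure is given below, assuming the calibration function $C$ of (12) and a scalar divergence `d`; the bisection search for each ${\lambda }_{k}$ is our illustrative choice, as the paper does not prescribe how the maximum in (13) is computed.

```python
import numpy as np

def apply_C(m_tilde, lam, eps=1e-6):
    """Pixel-wise calibration function of Eq. (12)."""
    return np.minimum(lam / (1.0 - m_tilde + eps), 1.0)

def calibrate(masks, preds, targets, d, alpha, delta, tol=1e-4):
    """Algorithm 2 (sketch). Inputs are per-image arrays over the calibration
    set; d(a, b) returns a scalar divergence between two (masked) images."""
    lambdas = []
    for m_tilde, pred, y in zip(masks, preds, targets):
        lo, hi = 0.0, 1.0 + 1e-6  # lambda >= 1 + eps yields mask = 1 everywhere
        # Bisect for the largest lambda with d(f_M(X_k), Y_k,M) <= alpha
        # (Eq. 13); the divergence is non-decreasing in lambda since C is.
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            m = apply_C(m_tilde, mid)
            if d(m * pred, m * y) <= alpha:
                lo = mid
            else:
                hi = mid
        lambdas.append(lo)
    # The 1 - delta quantile of {lambda_k}: with delta = 0.9 (Sec. 5.2) this
    # is the 0.1-quantile, so at least a delta fraction of pairs satisfy (5).
    return np.quantile(np.asarray(lambdas), 1.0 - delta)
```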
|
| 200 |
+
|
| 201 |
+
## 5 EXPERIMENTS
|
| 202 |
+
|
| 203 |
+
### 5.1 DATASETS AND TASKS
|
| 204 |
+
|
| 205 |
+
Datasets Two datasets are used in our experiments:
|
| 206 |
+
|
| 207 |
+
Places365 (Zhou et al., 2017): A large collection of ${256} \times {256}$ images from 365 scene categories. We use 1,803,460 images for training and 36,500 images for validation/test.
|
| 208 |
+
|
| 209 |
+
Rat Astrocyte Cells (Ljosa et al., 2012): A dataset of 1,200 uncompressed images of scanned rat cells of resolution ${990} \times {708}$ . We crop the images into ${256} \times {256}$ tiles and randomly split them into train and validation/test sets of sizes 373,744 and 11,621 respectively. The tiles partially overlap, as we use a stride of 32 pixels when cropping the images.
|
| 210 |
+
|
| 211 |
+
Tasks We consider the following image-to-image tasks (illustrated in Figure 4):
|
| 212 |
+
|
| 213 |
+
Image Completion: Using a gray-scale version of Places365, we remove middle vertical and horizontal stripes of 32-pixel width, and aim to reconstruct the missing parts.
|
| 214 |
+
|
| 215 |
+
Super Resolution: We experiment with this task on both datasets. The images are scaled down to ${64} \times {64}$ , and the goal is to reconstruct the original images.
|
| 216 |
+
|
| 217 |
+
Colorization: We convert the Places365 images to grayscale and aim to recover their colors.
|
| 218 |
+
|
| 219 |
+
### 5.2 EXPERIMENTAL SETTINGS
|
| 220 |
+
|
| 221 |
+
Image-to-Image Models We start with training models for the above three tasks. Note that these models are not intended to be state-of-the-art, but rather are used to demonstrate the uncertainty estimation technique proposed in this work. We use the same model architecture for all tasks: an 8-layer U-Net. For each task we train two versions of the network: (i) a simple regressor; and (ii) a conditional GAN, where the generator plays the role of the reconstruction model. For the GAN, the discriminator is implemented as a 4-layer CNN. We use the L1 loss as the objective for the regressor, and add an adversarial loss for the conditional GAN, as in Isola et al. (2017). All models are trained for 10 epochs using the Adam optimizer with a learning rate of ${10}^{-5}$ and a batch size of 50.
|
| 222 |
+
|
| 223 |
+
Mask Model For our mask model we use an 8-layer U-Net architecture for simplicity and compatibility with previous works. The inputs to the mask model are the measurement image and the predicted image, concatenated along the channel axis. The output is a mask having the same shape as the predicted image with values within the range $\left\lbrack {0,1}\right\rbrack$ . The mask model is trained using the loss function (11) with $\mu = 2$ , a learning rate of ${10}^{-5}$ and a batch size of 25.
|
| 224 |
+
|
| 225 |
+
Experiments We consider L1, L2, SSIM and LPIPS as our divergence measures. We set aside 1,000 samples from each validation set for calibration and use the remaining samples for evaluation. We demonstrate the flexibility of our approach by conducting experiments across 12 settings: (i) Image Completion: \{Regressor, GAN\} $\times \{ \mathrm{L}1,\mathrm{{LPIPS}}\}$ ; (ii) Super Resolution: \{Regressor, GAN\} $\times \{ \mathrm{L}1,\mathrm{{SSIM}}\}$ ; and (iii) Colorization: $\{$ Regressor, GAN $\} \times \{ \mathrm{L}1,\mathrm{L}2\}$ .
|
| 226 |
+
|
| 227 |
+
Risk and Error Levels Recall that given a predicted image, our goal is to find a mask that, when applied to both the prediction and the (unknown) reference image, reduces the distortion between them to a predefined risk level $\alpha$ with high probability $\delta$ . Here we fix $\delta = {0.9}$ and set $\alpha$ to be the 0.1-quantile of each measure computed on a random sample from the validation set, i.e. roughly ${10}\%$ of the predictions are already considered sufficiently good and do not require masking at all.
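In code, this choice of risk level amounts to a one-line quantile computation (a sketch; the file name and the precomputed array of per-image divergence values $d(\widehat{f}(X), Y)$ on a validation sample are assumptions):

```python
import numpy as np

# Hypothetical precomputed per-image divergences on a validation sample.
divergences = np.load("val_divergences.npy")

# alpha: the 0.1-quantile of the unmasked divergence, so roughly 10% of
# predictions already meet the risk level without any masking.
alpha = np.quantile(divergences, 0.10)
delta = 0.9  # confidence level used throughout the experiments
```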
|
| 228 |
+
|
| 229 |
+
### 5.3 COMPETING TECHNIQUES FOR COMPARISON
|
| 230 |
+
|
| 231 |
+
Quantile - Interval-Based Technique We compare our method to the quantile regression option presented in (Angelopoulos et al., 2022b), denoted by Quantile. While their calibrated uncertainty intervals are markedly different from the expected distortion we consider, we can take these intervals and transform them into a mask using (6). For completeness, we also report the performance of the quantile regression even when it is less suitable, i.e., when the underlying model is a GAN and when the divergence function is different from L1. We note again that, for the sake of a fair comparison, our implementation of the mask model uses exactly the same architecture as the quantile regressor.
|
| 232 |
+
|
| 233 |
+
Opt - Oracle We also compare our method with an oracle, denoted Opt, which given a ground-truth image computes an optimal mask by minimizing (10). We perform gradient descent using the Adam optimizer with a learning rate of 0.01, iterating until the divergence term decreases below the risk level $\alpha$ . This optimization is performed for each test image individually, so no calibration is needed.
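The oracle can be sketched as a per-image optimization of (10); the PyTorch code below uses the quoted Adam settings, while parameterizing the mask through a sigmoid to keep its values in $\left\lbrack {0,1}\right\rbrack$ and fixing $\mu = 1$ are our own assumptions, since the paper leaves these details open.

```python
import torch

def opt_oracle_mask(pred, y, d, alpha, lr=0.01, max_iters=10_000):
    """Per-image oracle mask (sketch): minimize Eq. (10) by gradient descent
    until the divergence on the masked pair drops below the risk level."""
    logits = torch.zeros_like(pred, requires_grad=True)  # sigmoid(0) = 0.5
    optimizer = torch.optim.Adam([logits], lr=lr)
    for _ in range(max_iters):
        mask = torch.sigmoid(logits)  # keeps mask values inside [0, 1]
        divergence = d(mask * pred, mask * y)
        if divergence.item() <= alpha:
            break
        loss = ((mask - 1.0) ** 2).sum() + divergence  # Eq. (10) with mu = 1
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return torch.sigmoid(logits).detach()
```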
|
| 234 |
+
|
| 235 |
+
Comparison Metrics Given a mask $\mathcal{M}\left( X\right)$ we assess its performance using the following metrics: (i) average mask size $s\left( {\mathcal{M}\left( X\right) }\right) \triangleq \frac{1}{N}\parallel \mathcal{M}\left( X\right) - \mathbb{1}{\parallel }_{1}$ ; (ii) correlation $\operatorname{Corr}\left( {\mathcal{M}, d}\right)$ between the mask size and the full (unmasked) divergence value; and (iii) correlation $\operatorname{Corr}\left( {\mathcal{M},{\mathcal{M}}_{\text{opt}}}\right)$ between the size of the given mask and the size of the optimal mask ${\mathcal{M}}_{\text{opt}}$ obtained by Opt.
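These metrics reduce to a few lines of NumPy (a sketch; treating Corr as the Pearson correlation over per-image statistics is our assumption, as the paper does not name the correlation type):

```python
import numpy as np

def mask_size(mask):
    """Average mask size s(M(X)) = ||M(X) - 1||_1 / N."""
    return np.abs(mask - 1.0).mean()

def corr(a, b):
    """Pearson correlation between two 1-D arrays of per-image statistics."""
    return np.corrcoef(a, b)[0, 1]

# Usage (assumed precomputed arrays over the test set):
# corr(sizes_ours, unmasked_divergences)  -> Corr(M, d)
# corr(sizes_ours, sizes_opt)             -> Corr(M, M_opt)
```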
|
| 236 |
+
|
| 237 |
+
### 5.4 RESULTS AND DISCUSSION
|
| 238 |
+
|
| 239 |
+
We now show a series of results that demonstrate our proposed uncertainty masking approach and its comparison with Opt and Quantile ${}^{2}$ . We begin with a representative visual illustration of our proposed mask for several test cases in Figure 2. As can be seen, the produced masks indeed identify sub-regions of high uncertainty. In the image completion task the bottom left corner is richer in details, and thus there is high uncertainty regarding this part of the reconstructed image. In the colorization task, the mask highlights the colored area of the bus, which is the most unreliable region since it can be colorized with a large variety of colors. In the super resolution task the mask marks regions of edges and text, while trustworthy parts such as smooth surfaces remain unmasked.
|
| 240 |
+
|
| 241 |
+
We present quantitative results in Table 1, showing that our method exhibits smaller mask sizes that align well with the masks obtained by Opt. In contrast, Quantile overestimates and produces larger masks, as expected. In terms of the correlation $\operatorname{Corr}\left( {\mathcal{M}, d}\right)$ , our method shows high agreement, while Quantile lags behind. This correlation indicates a much desired adaptivity of the estimated mask to the complexity of the image content and thus to the corresponding uncertainty. We provide a complementary illustration of the results in Figure 3 in the Appendix. As seen from the top row, all three methods meet the probabilistic guarantees regarding the divergence/loss with fewer than ${10}\%$ exceptions, as required. Naturally, Opt does not have outliers since each mask is optimally calibrated by its computation. The spread of loss values tends to be higher with Quantile, indicating weaker performance. The middle and bottom rows are consistent with the results in Table 1, showing that our approach tends to produce masks that are close in size to those of Opt, while Quantile produces larger, and thus inferior, masked areas. We note that the colorization task seems to be more challenging, resulting in a marginal performance increase for our method compared to Quantile.
|
| 242 |
+
|
| 243 |
+
---
|
| 244 |
+
|
| 245 |
+
${}^{2}$ Due to space limitations, we show more extensive experimental results in the Appendix, while presenting a selected portion of them here.
|
| 246 |
+
|
| 247 |
+
---
|
| 248 |
+
|
| 249 |
+

|
| 250 |
+
|
| 251 |
+
Figure 2: Examples of conformal prediction masks. The images from left to right are the measurement, ground-truth, model prediction, our calibrated mask trained with L1 loss, and the ground-truth L1 error. Tasks are image completion (top), colorization (middle) and super resolution (bottom).
|
| 252 |
+
|
| 253 |
+
Table 1: Quantitative results. Arrows point in the better direction; best results are in blue.
|
| 254 |
+
|
| 255 |
+
<table><tr><td rowspan="2">Network</td><td rowspan="2">Distance</td><td colspan="3">$s\left( \mathcal{M}\right)$(↓)</td><td colspan="2">$\operatorname{Corr}\left( {\mathcal{M}, d}\right)$</td><td colspan="2">$\operatorname{Corr}\left( {\mathcal{M},{\mathcal{M}}_{\text{opt }}}\right)$</td></tr><tr><td>Opt</td><td>Ours</td><td>Quantile</td><td>Ours</td><td>Quantile</td><td>Ours</td><td>Quantile</td></tr><tr><td colspan="9">Image Completion - Places365</td></tr><tr><td>Regression</td><td>L1</td><td>0.09</td><td>0.10</td><td>0.15</td><td>0.89</td><td>0.78</td><td>0.89</td><td>0.76</td></tr><tr><td>Regression</td><td>LPIPS</td><td>0.01</td><td>0.01</td><td>0.20</td><td>0.54</td><td>0.51</td><td>0.89</td><td>0.77</td></tr><tr><td>GAN</td><td>L1</td><td>0.09</td><td>0.09</td><td>0.14</td><td>0.95</td><td>0.85</td><td>0.94</td><td>0.80</td></tr><tr><td>GAN</td><td>LPIPS</td><td>0.01</td><td>0.01</td><td>0.08</td><td>0.31</td><td>0.24</td><td>0.50</td><td>0.23</td></tr><tr><td colspan="9">Super Resolution - Rat Astrocyte Cells</td></tr><tr><td>Regression</td><td>L1</td><td>0.24</td><td>0.26</td><td>0.28</td><td>0.99</td><td>0.54</td><td>0.95</td><td>0.88</td></tr><tr><td>Regression</td><td>SSIM</td><td>0.03</td><td>0.03</td><td>0.13</td><td>0.66</td><td>0.64</td><td>0.82</td><td>0.57</td></tr><tr><td>GAN</td><td>L1</td><td>0.26</td><td>0.30</td><td>0.40</td><td>0.94</td><td>0.63</td><td>0.80</td><td>0.72</td></tr><tr><td>GAN</td><td>SSIM</td><td>0.03</td><td>0.03</td><td>0.13</td><td>0.79</td><td>0.63</td><td>0.83</td><td>0.63</td></tr><tr><td colspan="9">Super Resolution - Places365</td></tr><tr><td>Regression</td><td>L1</td><td>0.30</td><td>0.36</td><td>0.39</td><td>0.99</td><td>0.97</td><td>0.95</td><td>0.94</td></tr><tr><td>Regression</td><td>SSIM</td><td>0.10</td><td>0.23</td><td>0.48</td><td>0.89</td><td>0.85</td><td>0.94</td><td>0.84</td></tr><tr><td>GAN</td><td>L1</td><td>0.37</td><td>0.38</td><td>0.47</td><td>0.97</td><td>0.81</td><td>0.95</td><td>0.67</td></tr><tr><td>GAN</td><td>SSIM</td><td>0.10</td><td>0.12</td><td>0.51</td><td>0.86</td><td>0.81</td><td>0.92</td><td>0.86</td></tr><tr><td colspan="9">Colorization - Places365</td></tr><tr><td>Regression</td><td>L1</td><td>0.27</td><td>0.37</td><td>0.40</td><td>0.68</td><td>0.43</td><td>0.57</td><td>0.46</td></tr><tr><td>Regression</td><td>L2</td><td>0.18</td><td>0.37</td><td>0.38</td><td>0.57</td><td>0.30</td><td>0.60</td><td>0.48</td></tr><tr><td>GAN</td><td>L1</td><td>0.27</td><td>0.38</td><td>0.40</td><td>0.58</td><td>0.40</td><td>0.60</td><td>0.52</td></tr><tr><td>GAN</td><td>L2</td><td>0.18</td><td>0.36</td><td>0.38</td><td>0.42</td><td>0.28</td><td>0.59</td><td>0.49</td></tr></table>
|
| 256 |
+
|
| 257 |
+
## 6 CONCLUSIONS
|
| 258 |
+
|
| 259 |
+
Uncertainty assessment in image-to-image regression problems is a challenging task, due to the implied complexity, the high dimensions involved, and the need to offer an effective and meaningful visualization of the estimated results. This work proposes a novel approach towards these challenges by constructing a conformal mask that visually differentiates between trustworthy and uncertain regions in an estimated image. This mask provides a measure of uncertainty accompanied by a statistical guarantee, stating that with high probability, the divergence between the original and the recovered images over the non-masked regions is below a desired risk level. The presented paradigm is flexible, being agnostic to the choice of divergence measure and to the regression method employed.
|
| 260 |
+
|
| 261 |
+
## REFERENCES
|
| 262 |
+
|
| 263 |
+
Moloud Abdar, Farhad Pourpanah, Sadiq Hussain, Dana Rezazadegan, Li Liu, Mohammad Ghavamzadeh, Paul Fieguth, Xiaochun Cao, Abbas Khosravi, U Rajendra Acharya, et al. A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion, 76:243-297, 2021.
|
| 264 |
+
|
| 265 |
+
Ahmed Alaa and Mihaela Van Der Schaar. Frequentist uncertainty in recurrent neural networks via blockwise influence functions. In International Conference on Machine Learning, pp. 175-190. PMLR, 2020.
|
| 266 |
+
|
| 267 |
+
Anastasios N. Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. CoRR, abs/2107.07511, 2021a. URL https://arxiv.org/abs/2107.07511.
|
| 268 |
+
|
| 269 |
+
Anastasios N Angelopoulos and Stephen Bates. A gentle introduction to conformal prediction and distribution-free uncertainty quantification. arXiv preprint arXiv:2107.07511, 2021b.
|
| 270 |
+
|
| 271 |
+
Anastasios N Angelopoulos, Stephen Bates, Emmanuel J Candès, Michael I Jordan, and Lihua Lei. Learn then test: Calibrating predictive algorithms to achieve risk control. arXiv preprint arXiv:2110.01052, 2021.
|
| 272 |
+
|
| 273 |
+
Anastasios N Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, and Tal Schuster. Conformal risk control. arXiv preprint arXiv:2208.02814, 2022a.
|
| 274 |
+
|
| 275 |
+
Anastasios N Angelopoulos, Amit P Kohli, Stephen Bates, Michael I Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, and Yaniv Romano. Image-to-image regression with distribution-free uncertainty quantification and applications in imaging. arXiv preprint arXiv:2202.05265, 2022b.
|
| 276 |
+
|
| 277 |
+
Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, and Dmitry Vetrov. Pitfalls of in-domain uncertainty estimation and ensembling in deep learning. arXiv preprint arXiv:2002.06470, 2020.
|
| 278 |
+
|
| 279 |
+
Stephen Bates, Anastasios Angelopoulos, Lihua Lei, Jitendra Malik, and Michael Jordan. Distribution-free, risk-controlling prediction sets. Journal of the ACM (JACM), 68(6):1-34, 2021.
|
| 280 |
+
|
| 281 |
+
Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty in neural network. In International conference on machine learning, pp. 1613-1622. PMLR, 2015.
|
| 282 |
+
|
| 283 |
+
Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. In International conference on machine learning, pp. 1683-1691. PMLR, 2014.
|
| 284 |
+
|
| 285 |
+
Andreas Damianou and Neil D Lawrence. Deep gaussian processes. In Artificial intelligence and statistics, pp. 207-215. PMLR, 2013.
|
| 286 |
+
|
| 287 |
+
Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In international conference on machine learning, pp. 1050-1059. PMLR, 2016.
|
| 288 |
+
|
| 289 |
+
Yarin Gal, Jiri Hron, and Alex Kendall. Concrete dropout. Advances in neural information processing systems, 30, 2017a.
|
| 290 |
+
|
| 291 |
+
Yarin Gal, Riashat Islam, and Zoubin Ghahramani. Deep bayesian active learning with image data. In International Conference on Machine Learning, pp. 1183-1192. PMLR, 2017b.
|
| 292 |
+
|
| 293 |
+
Jan Gasthaus, Konstantinos Benidis, Yuyang Wang, Syama Sundar Rangapuram, David Salinas, Valentin Flunkert, and Tim Januschowski. Probabilistic forecasting with spline quantile function rnns. In The 22nd international conference on artificial intelligence and statistics, pp. 1901-1910. PMLR, 2019.
|
| 294 |
+
|
| 295 |
+
Ruihan Hu, Qijun Huang, Sheng Chang, Hao Wang, and Jin He. The MBPEP: a deep ensemble pruning algorithm providing high quality uncertainty prediction. Applied Intelligence, 49(8): 2942-2955, 2019.
|
| 296 |
+
|
| 297 |
+
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125-1134, 2017.
|
| 298 |
+
|
| 299 |
+
Pavel Izmailov, Wesley J Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, and Andrew Gordon Wilson. Subspace inference for Bayesian deep learning. In Uncertainty in Artificial Intelligence, pp. 1169-1179. PMLR, 2020.
|
| 300 |
+
|
| 301 |
+
Byol Kim, Chen Xu, and Rina Barber. Predictive inference is free with the jackknife+-after-bootstrap. Advances in Neural Information Processing Systems, 33:4138-4149, 2020.
|
| 302 |
+
|
| 303 |
+
Danijel Kivaranovic, Kory D Johnson, and Hannes Leeb. Adaptive, distribution-free prediction intervals for deep networks. In International Conference on Artificial Intelligence and Statistics, pp. 4346-4356. PMLR, 2020.
|
| 304 |
+
|
| 305 |
+
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. Advances in neural information processing systems, 30, 2017.
|
| 306 |
+
|
| 307 |
+
Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J Tibshirani, and Larry Wasserman. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 113 (523):1094-1111, 2018.
|
| 308 |
+
|
| 309 |
+
Vebjorn Ljosa, Katherine L Sokolnicki, and Anne E Carpenter. Annotated high-throughput microscopy image sets for validation. Nature methods, 9(7):637-637, 2012.
|
| 310 |
+
|
| 311 |
+
Christos Louizos and Max Welling. Multiplicative normalizing flows for variational bayesian neural networks. In International Conference on Machine Learning, pp. 2218-2227. PMLR, 2017.
|
| 312 |
+
|
| 313 |
+
David JC MacKay. Bayesian interpolation. Neural computation, 4(3):415-447, 1992.
|
| 314 |
+
|
| 315 |
+
Tim Pearce, Alexandra Brintrup, Mohamed Zaki, and Andy Neely. High-quality prediction intervals for deep learning: A distribution-free, ensembled approach. In International conference on machine learning, pp. 4075-4084. PMLR, 2018.
|
| 316 |
+
|
| 317 |
+
Konstantin Posch, Jan Steinbrener, and Jürgen Pilz. Variational inference to measure model uncertainty in deep neural networks. arXiv preprint arXiv:1902.10189, 2019.
|
| 318 |
+
|
| 319 |
+
Hippolyt Ritter, Aleksandar Botev, and David Barber. A scalable Laplace approximation for neural networks. In 6th International Conference on Learning Representations, ICLR 2018-Conference Track Proceedings, volume 6. International Conference on Representation Learning, 2018.
|
| 320 |
+
|
| 321 |
+
Yaniv Romano, Evan Patterson, and Emmanuel Candes. Conformalized quantile regression. Advances in neural information processing systems, 32, 2019.
|
| 322 |
+
|
| 323 |
+
Tim Salimans, Diederik Kingma, and Max Welling. Markov chain monte carlo and variational inference: Bridging the gap. In International conference on machine learning, pp. 1218-1226. PMLR, 2015.
|
| 324 |
+
|
| 325 |
+
Swami Sankaranarayanan, Anastasios N Angelopoulos, Stephen Bates, Yaniv Romano, and Phillip Isola. Semantic uncertainty intervals for disentangled latent spaces. arXiv preprint arXiv:2207.10074, 2022.
|
| 326 |
+
|
| 327 |
+
Matteo Sesia and Emmanuel J Candès. A comparison of some conformal quantile regression methods. Stat, 9(1):e261, 2020.
|
| 328 |
+
|
| 329 |
+
Glenn Shafer and Vladimir Vovk. A tutorial on conformal prediction. Journal of Machine Learning Research, 9(3), 2008.
|
| 330 |
+
|
| 331 |
+
Sophia Sun. Conformal methods for quantifying uncertainty in spatiotemporal data: A survey. arXiv preprint arXiv:2209.03580, 2022.
|
| 332 |
+
|
| 333 |
+
Laurent Valentin Jospin, Wray Buntine, Farid Boussaid, Hamid Laga, and Mohammed Bennamoun. Hands-on Bayesian neural networks - a tutorial for deep learning users. arXiv preprint arXiv:2007.06823, 2020.
|
| 334 |
+
|
| 335 |
+
Dongxia Wu, Liyao Gao, Xinyue Xiong, Matteo Chinazzi, Alessandro Vespignani, Yi-An Ma, and Rose Yu. Quantifying uncertainty in deep spatiotemporal forecasting. arXiv preprint arXiv:2105.11982, 2021.
|
| 336 |
+
|
| 337 |
+
Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
|
| 338 |
+
|
| 339 |
+
## A APPENDIX: QUANTITATIVE RESULTS
|
| 340 |
+
|
| 341 |
+
Here we provide additional graphical results to complement the quantitative results of Table 1.
|
| 342 |
+
|
| 343 |
+

|
| 344 |
+
|
| 345 |
+
Figure 3: Quantitative results. (Top) Distribution of divergence values after masking, (middle) histograms of mask sizes, and (bottom) correlation with mask sizes obtained by Opt.
|
| 346 |
+
|
| 347 |
+
## B APPENDIX: TASK ILLUSTRATIONS
|
| 348 |
+
|
| 349 |
+
In the figure below we illustrate the three tasks we experiment with in this work.
|
| 350 |
+
|
| 351 |
+

|
| 352 |
+
|
| 353 |
+
Figure 4: The three tasks we experimented with: 1) image colorization in the left column, 2) gray-scale image completion in the middle column, and 3) super resolution in the right column.
|
| 354 |
+
|
| 355 |
+
## C APPENDIX: MORE RESULTS
|
| 356 |
+
|
| 357 |
+
Here we provide the results of all 12 settings explored in our work, corresponding to three inverse problems, two regression techniques and two metrics for each, with the following breakdown:
|
| 358 |
+
|
| 359 |
+
- Image Completion: $\{$ Regressor, GAN $\} \times \{$ L1, LPIPS $\}$ ;
|
| 360 |
+
|
| 361 |
+
- Super Resolution: $\{$ Regressor, GAN $\} \times \{ \mathrm{L}1,\mathrm{{SSIM}}\}$ ; and
|
| 362 |
+
|
| 363 |
+
- Colorization: $\{$ Regressor, GAN $\} \times \{ \mathrm{L}1,\mathrm{\;L}2\}$ .
|
| 364 |
+
|
| 365 |
+
We start in Figures 5-7 with the distributions of masked distortion values obtained by our method, Opt, and Quantile. The goal here is to show that all three methods meet the required divergence condition, with exceptions that do not surpass the designated probability $\delta = {0.1}$ .
|
| 366 |
+
|
| 367 |
+
Figures 8-10 present histograms of the obtained mask sizes for the three methods. The goal is to obtain masks of minimal area, so as to keep most of the image content unmasked.
|
| 368 |
+
|
| 369 |
+
Figures 11-13 conclude these results with graphs showing the relation between the Opt mask size and the sizes given by our technique and Quantile.
|
| 370 |
+
|
| 371 |
+

|
| 372 |
+
|
| 373 |
+
Figure 5: Image Completion. Distribution of the masked divergence values versus the chosen risk level (shown as a horizontal dashed line) for the three tested methods - Opt, Quantile and ours.
|
| 374 |
+
|
| 375 |
+

|
| 376 |
+
|
| 377 |
+
Figure 6: Super Resolution. Distribution of the masked divergence values versus the chosen risk level (shown as a horizontal dashed line) for the three tested methods - Opt, Quantile and ours.
|
| 378 |
+
|
| 379 |
+

|
| 380 |
+
|
| 381 |
+
Figure 7: Colorization. Distribution of the masked divergence values versus the chosen risk level (shown as a horizontal dashed line) for the three tested methods - Opt, Quantile and ours.
|
| 382 |
+
|
| 383 |
+

|
| 384 |
+
|
| 385 |
+
Figure 8: Image Completion. Histograms of the calibrated mask sizes for the three tested methods - Opt, Quantile and ours.
|
| 386 |
+
|
| 387 |
+

|
| 388 |
+
|
| 389 |
+
Figure 9: Super Resolution. Histograms of the calibrated mask sizes for the three tested methods - Opt, Quantile and ours.
|
| 390 |
+
|
| 391 |
+

|
| 392 |
+
|
| 393 |
+
Figure 10: Colorization. Histograms of the calibrated mask sizes for the three tested methods - Opt, Quantile and ours.
|
| 394 |
+
|
| 395 |
+

|
| 396 |
+
|
| 397 |
+
Figure 11: Image Completion. Correlation between the mask sizes produced by our method and Quantile versus the mask size obtained by Opt.
|
| 398 |
+
|
| 399 |
+

|
| 400 |
+
|
| 401 |
+
Figure 12: Super Resolution. Correlation between the mask sizes produced by our method and Quantile versus the mask size obtained by Opt.
|
| 402 |
+
|
| 403 |
+

|
| 404 |
+
|
| 405 |
+
Figure 13: Colorization. Correlation between the mask sizes produced by our method and Quantile versus the mask size obtained by Opt.
|
| 406 |
+
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/J4QatK02Qc/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,310 @@
|
| 1 |
+
§ CONFORMAL PREDICTION MASKS: VISUALIZING UNCERTAINTY IN MEDICAL IMAGING
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Estimating uncertainty in image-to-image recovery networks is an important task, particularly as such networks are being increasingly deployed in the biological and medical imaging realms. A recent conformal prediction technique derives per-pixel uncertainty intervals, guaranteed to contain the true value with a user-specified probability. Yet, these intervals are hard to comprehend and fail to express uncertainty at a conceptual level. In this paper, we introduce a new approach for uncertainty quantification and visualization, based on masking. The proposed technique produces interpretable image masks with rigorous statistical guarantees for image regression problems. Given an image recovery model, our approach computes a mask such that a desired divergence between the masked reconstructed image and the masked true image is guaranteed to be less than a specified risk level, with high probability. The mask thus identifies reliable regions of the predicted image while highlighting areas of high uncertainty. Our approach is agnostic to the underlying recovery model and the true unknown data distribution. We evaluate the proposed approach on image colorization, image completion, and super-resolution tasks, attaining high quality performance on each.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Deep Learning has been very successful in many applications, spanning computer vision, speech recognition, natural language processing, and beyond. For many years, researchers were mainly content to develop new techniques that achieve unprecedented accuracy, without concern for understanding the uncertainty implicit in such models. More recently, however, there has been a concerted effort within the research community to quantify the uncertainty of deep models.
|
| 14 |
+
|
| 15 |
+
This paper addresses the problem of quantifying and visualizing uncertainty in the realm of image-to-image tasks. Such problems include super-resolution, deblurring, colorization, and image completion, amongst others. Assessing uncertainty is important generally, but is particularly so in application domains such as biological and medical imaging, in which fidelity to the ground truth is paramount. If there is an area of the reconstructed image where such fidelity is unlikely or unreliable due to high uncertainty, this is crucial to convey.
|
| 16 |
+
|
| 17 |
+
Our approach to uncertainty estimation is based on masking. Specifically, we are interested in computing a mask such that the uncertain regions in the image are masked out. Based on conformal prediction (Angelopoulos & Bates, 2021a), we derive an algorithm that can be applied to any existing image-recovery model to produce an uncertainty mask satisfying the following criterion: the divergence between the masked reconstructed image and the masked true image is guaranteed to be less than a specified level, with high probability. The resultant mask highlights areas of high uncertainty in the recovered image, while trustworthy regions remain intact. Our distribution-free method, illustrated in Figure 1, is agnostic to the prediction model and to the choice of divergence function, which should be dictated by the application. Our contributions are as follows:
|
| 18 |
+
|
| 19 |
+
1. We introduce the notion of conformal prediction masks: a distribution-free approach to uncertainty quantification in image-to-image regression. We derive masks which visually convey regions of uncertainty while rigorously providing strong statistical guarantees for any regression model, image dataset and desired divergence measure.
|
| 20 |
+
|
| 21 |
+
|
| 22 |
+
|
| 23 |
+
Figure 1: High-level overview. Given image measurements $X$ (e.g. a gray-scale image) of a ground-truth image $Y$ , and a predicted image $\widehat{f}\left( X\right)$ (e.g. a colorized image), the mask model outputs an uncertainty mask $\mathcal{M}\left( X\right)$ such that the divergence between the masked ground-truth and the masked prediction is below a chosen risk level with high probability.
|
| 24 |
+
|
| 25 |
+
2. We develop a practical training algorithm for computing these masks which only requires triplets of input (degraded), reconstructed and true images. The resultant mask model is trained once for all possible risk levels and is calibrated via a simple process to meet the required guarantees given a user-specified risk level and confidence probability.
|
| 26 |
+
|
| 27 |
+
3. We demonstrate the power of the method on image colorization, image completion and super-resolution tasks. By assessing our performance both visually and quantitatively, we show the resultant masks attain the probabilistic guarantee and provide interpretable uncertainty visualization without over-masking the recovered images, in contrast to competing techniques.
|
| 28 |
+
|
| 29 |
+
§ 2 RELATED WORK
|
| 30 |
+
|
| 31 |
+
Bayesian Uncertainty Quantification The Bayesian paradigm defines uncertainty by assuming a distribution over the model parameters and/or activation functions. The most prevalent approach is Bayesian neural networks (MacKay, 1992; Valentin Jospin et al., 2020; Izmailov et al., 2020), which are stochastic models trained using Bayesian inference. Yet, as the number of model parameters has grown rapidly, computing the exact posteriors has become computationally intractable. This shortcoming has led to the development of approximation methods such as Monte Carlo dropout (Gal & Ghahramani, 2016; Gal et al., 2017a), stochastic gradient Markov chain Monte Carlo (Salimans et al., 2015; Chen et al., 2014), Laplacian approximations (Ritter et al., 2018) and variational inference (Blundell et al., 2015; Louizos & Welling, 2017; Posch et al., 2019). Alternative Bayesian techniques include deep Gaussian processes (Damianou & Lawrence, 2013), deep ensembles (Ashukha et al., 2020; Hu et al., 2019), and deep Bayesian active learning (Gal et al., 2017b), to name just a few. A comprehensive review on Bayesian uncertainty quantification is given in Abdar et al. (2021).
|
| 32 |
+
|
| 33 |
+
Distribution-Free Methods and Conformal Prediction Unlike Bayesian methods, the frequentist approach assumes the true model parameters are fixed with no underlying distribution. Examples of such distribution-free techniques are model ensembles (Lakshminarayanan et al., 2017; Pearce et al., 2018), bootstrap (Kim et al., 2020; Alaa & Van Der Schaar, 2020), interval regression (Pearce et al., 2018; Kivaranovic et al., 2020; Wu et al., 2021) and quantile regression (Gasthaus et al., 2019; Romano et al., 2019). An important distribution-free technique which is most relevant to our work is conformal prediction (Angelopoulos & Bates, 2021b; Shafer & Vovk, 2008). This approach relies on a labeled calibration dataset to convert point estimations into prediction regions. Conformal methods can be used with any estimator, require no retraining, are computationally efficient and provide coverage guarantees in finite samples (Lei et al., 2018). Recent developments include conformalized quantile regression (Romano et al., 2019; Sesia & Candès, 2020; Angelopoulos et al., 2022b), conformal risk control (Angelopoulos et al., 2022a; Bates et al., 2021; Angelopoulos et al., 2021) and semantic uncertainty intervals for generative adversarial networks (Sankaranarayanan et al., 2022). Sun (2022) provides an extensive survey on distribution-free conformal prediction methods.
|
| 34 |
+
|
| 35 |
+
§ 3 BACKGROUND: CONFORMAL PREDICTION IN IMAGE REGRESSION
|
| 36 |
+
|
| 37 |
+
We present a brief overview of the work in (Angelopoulos et al., 2022b), which stands out in the realm of conformal prediction for image-to-image problems, and serves as the basis of our work. Let $Y \in \mathcal{Y} = {\mathbb{R}}^{N}$ be a ground-truth image in vector form, an image $X \in \mathcal{X} = {\mathbb{R}}^{M}$ be its measurements, and $\widehat{f}\left( X\right) \in \mathcal{Y}$ an estimator of $Y$ . Conformal prediction constructs uncertainty intervals
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
\mathcal{T}{\left( X\right) }_{\left\lbrack i\right\rbrack } = \left\lbrack {\widehat{f}{\left( X\right) }_{\left\lbrack i\right\rbrack } - \widehat{l}{\left( X\right) }_{\left\lbrack i\right\rbrack },\widehat{f}{\left( X\right) }_{\left\lbrack i\right\rbrack } + \widehat{u}{\left( X\right) }_{\left\lbrack i\right\rbrack }}\right\rbrack ,\;i = 0,\ldots ,N - 1, \tag{1}
|
| 41 |
+
$$
|
| 42 |
+
|
| 43 |
+
where $\widehat{l}{\left( X\right) }_{\left\lbrack i\right\rbrack } \geq 0$ and $\widehat{u}{\left( X\right) }_{\left\lbrack i\right\rbrack } \geq 0$ represent the uncertainty in lower and upper directions respectively. Given heuristic uncertainty values $\widetilde{l}$ and $\widetilde{u}$ , the uncertainty intervals are calibrated using a calibration dataset $\mathcal{C} \triangleq {\left\{ {X}_{k},{Y}_{k}\right\} }_{k = 1}^{K}$ to guarantee they contain at least a fraction $\alpha$ of the ground-truth pixel values with probability $1 - \delta$ . Here $\alpha \in \left( {0,1}\right)$ and $\delta \in \left( {0,1}\right)$ are user-specified risk and error levels respectively. Formally, the per-pixel uncertainty intervals are defined as follows.
|
| 44 |
+
|
| 45 |
+
Definition 1. Risk-Controlling Prediction Set (RCPS). A random set-valued function $\mathcal{T} : \mathcal{X} \rightarrow$ ${\mathcal{Y}}^{\prime } = {2}^{\mathcal{Y}}$ is an $\left( {\alpha ,\delta }\right)$ -Risk-Controlling Prediction Set if
|
| 46 |
+
|
| 47 |
+
$$
|
| 48 |
+
\mathbb{P}\left( {\mathcal{R}\left( \mathcal{T}\right) \leq 1 - \alpha }\right) \geq 1 - \delta
|
| 49 |
+
$$
|
| 50 |
+
|
| 51 |
+
Here the risk is $\mathcal{R}\left( \mathcal{T}\right) \triangleq 1 - \mathbb{E}\left\lbrack {\frac{1}{N}\left| \left\{ {i : {Y}_{\left\lbrack i\right\rbrack }^{\text{ test }} \in \mathcal{T}{\left( {X}^{\text{ test }}\right) }_{\left\lbrack i\right\rbrack }}\right\} \right| }\right\rbrack$ where the expectation is over a new test point $\left( {{X}^{\text{ test }},{Y}^{\text{ test }}}\right)$ , while the outer probability is over the calibration data.
|
| 52 |
+
|
| 53 |
+
The procedure for constructing RCPS consists of two stages. First, a machine learning system (e.g. neural network) is trained to output a point prediction $\widehat{f}$ , and heuristic lower and upper interval widths $\left( {\widetilde{l},\widetilde{u}}\right)$ . The second phase utilizes the calibration set to calibrate $\left( {\widetilde{l},\widetilde{u}}\right)$ so they contain the right fraction of ground truth pixels. The final intervals are those in (1) with the calibrated widths $\left( {\widehat{l},\widehat{u}}\right)$ .
|
| 54 |
+
|
| 55 |
+
Conformal prediction provides per-pixel uncertainty intervals with statistical guarantees in image-to-image regression problems. Yet, the per-pixel prediction sets may be difficult to comprehend on their own. To remedy this, the uncertainty intervals are visualized by passing the pixel-wise interval lengths through a colormap, where small sets render a pixel blue and large sets render it red. Thus, the redder a region is, the greater the uncertainty, and the bluer it is, the greater the confidence. The resultant uncertainty map, however, is not directly endowed with rigorous guarantees. This raises the following question: can we directly produce an uncertainty map with strong statistical guarantees?
|
| 56 |
+
|
| 57 |
+
§ 4 CONFORMAL PREDICTION MASKS
|
| 58 |
+
|
| 59 |
+
Inspired by the above, we construct uncertainty masks $\mathcal{M}\left( X\right) = M\left( {X,\widehat{f}\left( X\right) }\right) \in {\left\lbrack 0,1\right\rbrack }^{N}$ such that
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
\mathbb{E}\left\lbrack {\mathcal{M}{\left( {X}^{\text{ test }}\right) }_{\left\lbrack i\right\rbrack } \cdot \left| {\widehat{f}{\left( {X}^{\text{ test }}\right) }_{\left\lbrack i\right\rbrack } - {Y}_{\left\lbrack i\right\rbrack }^{\text{ test }}}\right| }\right\rbrack \leq {\beta }_{\left\lbrack i\right\rbrack }, \tag{2}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
where the expectation is over a new test point, and ${\beta }_{\left\lbrack i\right\rbrack } \in {\mathbb{R}}^{ + }$ is a user-specified risk level. Define ${\widehat{f}}_{\mathcal{M}}\left( X\right) \triangleq \mathcal{M}\left( X\right) \odot \widehat{f}\left( X\right)$ and ${Y}_{\mathcal{M}} \triangleq \mathcal{M}\left( X\right) \odot Y$ , where $\odot$ represents a point-wise (Hadamard) product. Then, note that enforcing (2) is equivalent to creating the following uncertainty intervals
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
{\mathcal{T}}_{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } = \left\lbrack {{\widehat{f}}_{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } - {\beta }_{\left\lbrack i\right\rbrack },{\widehat{f}}_{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } + {\beta }_{\left\lbrack i\right\rbrack }}\right\rbrack , \tag{3}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
which satisfies
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
{Y}_{\mathcal{M}\left\lbrack i\right\rbrack }^{\text{ test }} \in {\mathcal{T}}_{\mathcal{M}}{\left( {X}^{\text{ test }}\right) }_{\left\lbrack i\right\rbrack }. \tag{4}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
We remark on a few differences between (3) and (1): in (1) the lower and upper per-pixel uncertainty widths $\left( {\widehat{l},\widehat{u}}\right)$ depend on $X$ and are calibrated, while in (3) $\widehat{l} = \widehat{u} \equiv \beta$ are user-specified and independent of $X$ . Furthermore, the uncertainty parameters which undergo calibration are ${\left\{ \mathcal{M}{\left( X\right) }_{\left\lbrack i\right\rbrack }\right\} }_{i = 1}^{N}$ .
|
| 78 |
+
|
| 79 |
+
One may notice that the above formulation exhibits a major limitation, as each value of the prediction mask is defined independently of the other values. Hence, it requires the user to specify a risk level for each pixel, which is cumbersome, especially in high dimensions. More importantly, setting each entry of the mask independently may fail to capture the dependency between pixels and thus fail to express uncertainty at a conceptual level. To overcome this, we redefine our uncertainty masks to ensure that with probability at least $1 - \delta$ it holds that $\mathbb{E}\left\lbrack {\begin{Vmatrix}{\widehat{f}}_{\mathcal{M}}\left( {X}^{\text{ test }}\right) - {Y}_{\mathcal{M}}^{\text{ test }}\end{Vmatrix}}_{1}\right\rbrack \leq \alpha$ , where $\alpha \in {\mathbb{R}}^{ + }$ is a global risk level and $\parallel Z{\parallel }_{1} \triangleq \mathop{\sum }\limits_{{i = 1}}^{N}\left| Z\left\lbrack i\right\rbrack \right|$ is the L1 norm of an arbitrary image $Z$ . Furthermore, the latter formulation can be generalized to any divergence measure $d : \mathcal{Y} \times \mathcal{Y} \rightarrow {\mathbb{R}}^{ + }$ such that
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
\mathbb{E}\left\lbrack {d\left( {{\widehat{f}}_{\mathcal{M}}\left( {X}^{\text{ test }}\right) ,{Y}_{\mathcal{M}}^{\text{ test }}}\right) }\right\rbrack \leq \alpha . \tag{5}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
Note we avoid trivial solutions, e.g. a zero-mask, which satisfy (5) yet provide no useful information. Thus, we seek solutions that employ the least masking required to meet (5), with high probability.
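For concreteness, the masked L1 risk above reads as follows in a short NumPy sketch (function and argument names are ours):

```python
import numpy as np

def masked_l1(mask, pred, y):
    """||f_M(X) - Y_M||_1: mask prediction and ground truth with the
    Hadamard product, then take the L1 distance between them."""
    return np.abs(mask * pred - mask * y).sum()
```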
|
| 86 |
+
|
| 87 |
+
The above formulation enjoys several benefits. First, the current definition of the mask captures pixel dependency. Thus, rather than focusing on individual pixels, the resultant map would mask out (or reduce) regions of high uncertainty within the predicted image to guarantee the divergence remains below the given risk level. Second, it accepts any divergence measure, each leading to a different mask. For example, selecting $d\left( {\cdot , \cdot }\right)$ to be a distortion measure may underline uncertainty regions of high-frequency objects (e.g. edges), while setting $d\left( {\cdot , \cdot }\right)$ to be a perceptual loss may highlight semantic factors within the image. Formally, we refer to these uncertainty masks as Risk-Controlling Prediction Masks, which are defined below.
|
| 88 |
+
|
| 89 |
+
Definition 2. Risk-Controlling Prediction Mask (RCPM). A random function $\mathcal{M} : \mathcal{X} \times \mathcal{Y} \rightarrow$ ${\left\lbrack 0,1\right\rbrack }^{\mathcal{Y}}$ is an $\left( {\alpha ,\delta }\right)$ -Risk-Controlling Prediction Mask if
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
\mathbb{P}\left( {\mathbb{E}\left\lbrack {\mathcal{R}\left( \mathcal{M}\right) }\right\rbrack \leq \alpha }\right) \geq 1 - \delta
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
where the risk is defined as $\mathcal{R}\left( \mathcal{M}\right) \triangleq d\left( {{\widehat{f}}_{\mathcal{M}}\left( {X}^{\text{ test }}\right) ,{Y}_{\mathcal{M}}^{\text{ test }}}\right)$ for a given divergence $d\left( {\cdot , \cdot }\right)$ . The outer probability is over the calibration data, while the expectation is taken over a test point $\left( {{X}^{\text{ test }},{Y}^{\text{ test }}}\right)$ .
|
| 96 |
+
|
| 97 |
+
As with RCPS, the procedure for creating an RCPM includes two main stages. First, given a predictor $\widehat{f}$ , we require a heuristic notion of a non-zero uncertainty mask $\widetilde{\mathcal{M}}$ . In particular, we train a neural network to output a mask given the measurements and the predicted image as inputs. Second, given a divergence measure, we use the calibration set to calibrate the heuristic mask until the divergence measure decreases below the desired risk level. The final outputs are the calibrated mask and the original prediction multiplied by the mask. The overall method is outlined in Algorithm 1, with a usage-level sketch following it. We now discuss the notion of initial uncertainty masks and the subsequent calibration process.
|
| 98 |
+
|
| 99 |
+
§ ALGORITHM 1 GENERATING RCPM
|
| 100 |
+
|
| 101 |
+
1. Given a regression model $\widehat{f}$ , train a model which outputs an initial mask $\widetilde{\mathcal{M}}$ .
|
| 102 |
+
|
| 103 |
+
2. Calibrate $\widetilde{\mathcal{M}}$ using the calibration dataset to obtain $\mathcal{M}$ (e.g. using Algorithm 2).
|
| 104 |
+
|
| 105 |
+
3. Given $X$ at inference, output the risk-controlling masked prediction ${\widehat{f}}_{\mathcal{M}}\left( X\right) = \mathcal{M}\left( X\right) \odot \widehat{f}\left( X\right)$ .
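A usage-level sketch of Algorithm 1, assuming `train_mask_model`, `calibrate` and `apply_C` are implementations of the routines developed in Sections 4.1.2 and 4.2 (hypothetical names):

```python
# Step 1: fit the heuristic mask model on training triplets (X, Y, f(X)).
mask_model = train_mask_model(f_hat, train_set, mu=2.0)

# Step 2: calibrate on held-out pairs to obtain the global lambda.
lam = calibrate(mask_model, f_hat, calib_set, d, alpha, delta)

# Step 3: at inference, output the risk-controlling masked prediction.
pred = f_hat(X)
mask = apply_C(mask_model(X, pred), lam)  # calibrated mask, Eq. (12)
masked_pred = mask * pred                 # f_M(X) = M(X) * f(X)
```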
|
| 106 |
+
|
| 107 |
+
§ 4.1 INITIAL ESTIMATION OF UNCERTAINTY MASKS
|
| 108 |
+
|
| 109 |
+
Here we present two notions of uncertainty masks. The first concept, based on (Angelopoulos et al., 2022b), translates given uncertainty intervals into a heuristic mask. In the second we develop a process for training a neural network which accepts the input and the predicted images and outputs an uncertainty mask based on a given divergence between the prediction and the ground-truth image.
|
| 110 |
+
|
| 111 |
+
§ 4.1.1 INTERVALS TO MASKS
|
| 112 |
+
|
| 113 |
+
In (Angelopoulos et al., 2022b), the authors propose to build uncertainty intervals based on four heuristic notions of lower and upper interval widths $\widetilde{l}$ and $\widetilde{u}$ : (1) Regression to the magnitude of the residual; (2) one Gaussian per pixel; (3) softmax outputs; and (4) pixel-wise quantile regression. Then, we build a mask by setting the pixel-values to be inversely proportional to the interval sizes:
|
| 114 |
+
|
| 115 |
+
$$
|
| 116 |
+
\widetilde{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } \propto {\left( {\widetilde{u}}_{\left\lbrack i\right\rbrack } - {\widetilde{l}}_{\left\lbrack i\right\rbrack }\right) }^{-1}. \tag{6}
|
| 117 |
+
$$
|
| 118 |
+
|
| 119 |
+
Thus, the resultant mask holds high values at pixels with small intervals (high confidence) and smaller values at pixels with larger intervals, corresponding to high-uncertainty regions. However, this approach requires first creating uncertainty intervals; hence, we next introduce a technique which directly produces an uncertainty mask.
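A short sketch of this conversion, taking the interval size at pixel $i$ to be ${\widetilde{l}}_{\left\lbrack i\right\rbrack } + {\widetilde{u}}_{\left\lbrack i\right\rbrack }$ , the length of the interval in (1); the normalization into $\left\lbrack {0,1}\right\rbrack$ is our assumption, since (6) fixes the mask only up to proportionality:

```python
import numpy as np

def intervals_to_mask(l, u, eps=1e-6):
    """Heuristic mask from per-pixel interval widths, in the spirit of Eq. (6).
    Pixels with tight intervals (high confidence) receive values near 1."""
    inv = 1.0 / (l + u + eps)  # inverse of the interval size from Eq. (1)
    return inv / inv.max()     # normalize into [0, 1] (our choice)
```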
|
| 120 |
+
|
| 121 |
+
§ 4.1.2 MASK REGRESSION
|
| 122 |
+
|
| 123 |
+
Here, we introduce a notion of an uncertainty mask represented by a neural network $\widetilde{\mathcal{M}}\left( {X;\theta }\right) \in$ ${\left\lbrack 0,1\right\rbrack }^{N}$ with parameters $\theta$ . The mask model is trained to output a mask which satisfies
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
\mathbb{E}\left\lbrack {d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( {X}^{\text{ train }}\right) ,{Y}_{\widetilde{\mathcal{M}}}^{\text{ train }}}\right) }\right\rbrack \leq \alpha . \tag{7}
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
where the expectation is over the training samples $\mathcal{D} \triangleq {\left\{ {X}_{j},{Y}_{j}\right\} }_{j = 1}^{J}$ used to train $\widehat{f}$ . To derive our loss function, we start by formulating the following problem for a given triplet $\left( {X,Y,\widehat{f}\left( X\right) }\right)$
|
| 130 |
+
|
| 131 |
+
$$
|
| 132 |
+
\mathop{\min }\limits_{\theta }\parallel \widetilde{\mathcal{M}}\left( {X,\widehat{f}\left( X\right) }\right) - \mathbb{1}{\parallel }_{2}^{2}\text{ subject to }d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( X\right) ,{Y}_{\widetilde{\mathcal{M}}}}\right) \leq \alpha , \tag{8}
|
| 133 |
+
$$
|
| 134 |
+
|
| 135 |
+
where $\mathbb{1}$ is an image of all ones, representing no masking. The constraint above corresponds to (7), while the objective aims to find the minimal solution, i.e., the solution that masks the image the least (avoiding trivial solutions). The Lagrangian of the problem is given by
|
| 136 |
+
|
| 137 |
+
$$
|
| 138 |
+
\mathcal{L}\left( {\theta ,\mu }\right) \triangleq \parallel \widetilde{\mathcal{M}}\left( {X,\widehat{f}\left( X\right) }\right) - \mathbb{1}{\parallel }_{2}^{2} + \mu \left( {d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( X\right) ,{Y}_{\widetilde{\mathcal{M}}}}\right) - \alpha }\right) \tag{9}
|
| 139 |
+
$$
|
| 140 |
+
|
| 141 |
+
where $\mu > 0$ is the dual variable, treated as a hyperparameter. Given $\mu$, the optimal mask can be obtained by minimizing $\mathcal{L}\left( {\theta ,\mu }\right)$ with respect to $\theta$, which is equivalent to minimizing
|
| 142 |
+
|
| 143 |
+
$$
|
| 144 |
+
\parallel \widetilde{\mathcal{M}}\left( {X,\widehat{f}\left( X\right) }\right) - \mathbb{1}{\parallel }_{2}^{2} + \mu \cdot d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( X\right) ,{Y}_{\widetilde{\mathcal{M}}}}\right) \tag{10}
|
| 145 |
+
$$
|
| 146 |
+
|
| 147 |
+
since $\alpha$ does not depend on $\theta$ . Thus, we train our mask model using the following loss function:
|
| 148 |
+
|
| 149 |
+
$$
|
| 150 |
+
\mathcal{L}\left( {\mathcal{D},\theta }\right) \triangleq \mathop{\sum }\limits_{{\left( {X,Y}\right) \in \mathcal{D}}}\parallel \widetilde{\mathcal{M}}\left( {X,\widehat{f}\left( X\right) }\right) - \mathbb{1}{\parallel }_{2}^{2} + \mu \cdot d\left( {{\widehat{f}}_{\widetilde{\mathcal{M}}}\left( X\right) ,{Y}_{\widetilde{\mathcal{M}}}}\right) . \tag{11}
|
| 151 |
+
$$
|
| 152 |
+
|
| 153 |
+
The proposed approach facilitates the use of any differentiable distortion measure and is agnostic to the prediction model $\widehat{f}$. Furthermore, notice that the loss function is independent of $\alpha$; hence, the model can be trained once for all values of $\alpha$. Thus, the output mask acts only as an initial uncertainty map which may not satisfy (5) and needs to be calibrated. Following proper calibration, discussed next, our mask model attains (5) without requiring the ground-truth $Y$. Lastly, this approach directly outputs uncertainty masks and is thus the focus of our work.
|
| 154 |
+
|
| 155 |
+
§ 4.2 MASK CALIBRATION
|
| 156 |
+
|
| 157 |
+
We consider $\widetilde{\mathcal{M}}\left( X\right)$ as an initial estimate of our uncertainty mask, which needs to be calibrated to provide the guarantee in Definition 2. As the calibration process is not the focus of our work, we perform a simple calibration outlined in Algorithm 2. The core of the calibration applies a parametric function $C\left( {\cdot ;\lambda }\right)$ pixel-wise to obtain a mask ${\mathcal{M}}_{\lambda }{\left( X\right) }_{\left\lbrack i\right\rbrack } \triangleq C\left( {\mathcal{M}{\left( X\right) }_{\left\lbrack i\right\rbrack };\lambda }\right)$ . In general, $C\left( {\cdot ;\lambda }\right)$ can be any monotonic non-decreasing function. Here we consider the following form
|
| 158 |
+
|
| 159 |
+
$$
|
| 160 |
+
{\mathcal{M}}_{\lambda }{\left( X\right) }_{\left\lbrack i\right\rbrack } \triangleq \min \left( {\frac{\lambda }{1 - \widetilde{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack } + \epsilon },1}\right) \;\forall i = 1,\ldots ,N, \tag{12}
|
| 161 |
+
$$
|
| 162 |
+
|
| 163 |
+
${}^{1}$ A small value $\epsilon$ is added to the denominator to ensure numerical stability.
|
| 164 |
+
|
| 165 |
+
which has been found empirically to perform well in our experiments. To set $\lambda > 0$ , we use the calibration dataset $\mathcal{C} \triangleq {\left\{ {X}_{k},{Y}_{k}\right\} }_{k = 1}^{K}$ : for each pair $\left( {{X}_{k},{Y}_{k}}\right) \in \mathcal{C}$ we compute
|
| 166 |
+
|
| 167 |
+
$$
|
| 168 |
+
{\lambda }_{k} \triangleq \max \left\{ {\widehat{\lambda } : d\left( {{\widehat{f}}_{{\mathcal{M}}_{\widehat{\lambda }}}\left( {X}_{k}\right) ,{Y}_{k{\mathcal{M}}_{\widehat{\lambda }}}}\right) \leq \alpha }\right\} . \tag{13}
|
| 169 |
+
$$
|
| 170 |
+
|
| 171 |
+
Finally, $\lambda$ is taken to be the $1 - \delta$ quantile of ${\left\{ {\lambda }_{k}\right\} }_{k = 1}^{K}$ , i.e., the maximal value for which at least a $\delta$ fraction of the calibration set satisfies condition (5). Thus, assuming the calibration and test sets are i.i.d. samples from the same distribution, the calibrated mask is guaranteed to satisfy Definition 2.
|
| 172 |
+
|
| 173 |
+
§ ALGORITHM 2 CALIBRATION PROCESS
|
| 174 |
+
|
| 175 |
+
Input: Calibration data $\mathcal{C} \triangleq {\left\{ {X}_{k},{Y}_{k}\right\} }_{k = 1}^{K}$ ; risk level $\alpha$ ; error rate $\delta$ ; underlying predictor $\widehat{f}$ ; heuristic mask
|
| 176 |
+
|
| 177 |
+
$\widetilde{\mathcal{M}}$ ; a monotonic non-decreasing function $C\left( {\cdot ;\lambda }\right) : \left\lbrack {0,1}\right\rbrack \rightarrow \left\lbrack {0,1}\right\rbrack$ parameterized by $\lambda > 0$ .
|
| 178 |
+
|
| 179 |
+
1. For a given $\widetilde{\lambda } > 0$ , define ${\mathcal{M}}_{\widetilde{\lambda }}{\left( X\right) }_{\left\lbrack i\right\rbrack } \triangleq C\left( {\widetilde{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack };\widetilde{\lambda }}\right)$ for all $i = 1,\ldots ,N$ .
|
| 180 |
+
|
| 181 |
+
2. For each pair $\left( {{X}_{k},{Y}_{k}}\right) \in \mathcal{C}$ , set ${\lambda }_{k} \triangleq \max \left\{ {\widehat{\lambda } : d\left( {{\widehat{f}}_{{\mathcal{M}}_{\widehat{\lambda }}}\left( {X}_{k}\right) ,{Y}_{k{\mathcal{M}}_{\widehat{\lambda }}}}\right) \leq \alpha }\right\}$ .
|
| 182 |
+
|
| 183 |
+
3. Set $\lambda$ to be the $1 - \delta$ quantile of ${\left\{ {\lambda }_{k}\right\} }_{k = 1}^{K}$ .
|
| 184 |
+
|
| 185 |
+
4. Define the final mask model as ${\mathcal{M}}_{\lambda }{\left( X\right) }_{\left\lbrack i\right\rbrack } \triangleq C\left( {\widetilde{\mathcal{M}}{\left( X\right) }_{\left\lbrack i\right\rbrack };\lambda }\right)$ .
|
| 186 |
+
|
| 187 |
+
Output: Calibrated uncertainty mask model ${\mathcal{M}}_{\lambda }$ .
|
| 188 |
+
|
| 189 |
+
§ 5 EXPERIMENTS
|
| 190 |
+
|
| 191 |
+
§ 5.1 DATASETS AND TASKS
|
| 192 |
+
|
| 193 |
+
Datasets Two datasets are used in our experiments:
|
| 194 |
+
|
| 195 |
+
Places365 (Zhou et al., 2017): A large collection of ${256} \times {256}$ images from 365 scene categories. We use 1,803,460 images for training and 36,500 images for validation/test.
|
| 196 |
+
|
| 197 |
+
Rat Astrocyte Cells (Ljosa et al., 2012): A dataset of 1,200 uncompressed images of scanned rat cells of resolution ${990} \times {708}$ . We crop the images into ${256} \times {256}$ tiles and randomly split them into train and validation/test sets of sizes 373,744 and 11,621 respectively. The tiles partially overlap, as we use a stride of 32 pixels when cropping the images.
|
| 198 |
+
|
| 199 |
+
Tasks We consider the following image-to-image tasks (illustrated in Figure 4):
|
| 200 |
+
|
| 201 |
+
Image Completion: Using a gray-scale version of Places365, we remove middle vertical and horizontal stripes of 32-pixel width, and aim to reconstruct the missing parts.
|
| 202 |
+
|
| 203 |
+
Super Resolution: We experiment with this task on both datasets. The images are downscaled to ${64} \times {64}$, and the goal is to reconstruct the original images.
|
| 204 |
+
|
| 205 |
+
Colorization: We convert the Places365 images to grayscale and aim to recover their colors.
|
| 206 |
+
|
| 207 |
+
§ 5.2 EXPERIMENTAL SETTINGS
|
| 208 |
+
|
| 209 |
+
Image-to-Image Models We start by training models for the above three tasks. Note that these models are not intended to be state-of-the-art, but rather to demonstrate the uncertainty estimation technique proposed in this work. We use the same model architecture for all tasks: an 8-layer U-Net. For each task we train two versions of the network: (i) a simple regressor; and (ii) a conditional GAN, where the generator plays the role of the reconstruction model. For the GAN, the discriminator is implemented as a 4-layer CNN. We use the L1 loss as the objective for the regressor, and add an adversarial loss for the conditional GAN, as in Isola et al. (2017). All models are trained for 10 epochs using the Adam optimizer with a learning rate of $10^{-5}$ and a batch size of 50.
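As a rough sketch of the two objectives (PyTorch; the generator output and discriminator logits are assumed to come from models like those described above, and the L1 weighting follows the pix2pix default rather than a value stated here):

```python
import torch
import torch.nn.functional as F

def regressor_loss(y_hat, y):
    # Plain L1 objective for the regression variant.
    return F.l1_loss(y_hat, y)

def cgan_generator_loss(y_hat, y, disc_logits, l1_weight=100.0):
    # L1 term plus an adversarial term, as in Isola et al. (2017):
    # the generator is rewarded when D classifies its output as real.
    adv = F.binary_cross_entropy_with_logits(
        disc_logits, torch.ones_like(disc_logits))
    return l1_weight * F.l1_loss(y_hat, y) + adv
```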
|
| 210 |
+
|
| 211 |
+
Mask Model For our mask model we use an 8-layer U-Net architecture, for simplicity and compatibility with previous works. The inputs to the mask model are the measurement image and the predicted image, concatenated along the channel axis. The output is a mask of the same shape as the predicted image, with values within the range $\left\lbrack {0,1}\right\rbrack$. The mask model is trained using the loss function (11) with $\mu = 2$, a learning rate of $10^{-5}$ and a batch size of 25.
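A sketch of this interface, assuming a hypothetical `unet` module is supplied; only the concatenation and the $[0,1]$ output range are taken from the description above:

```python
import torch
import torch.nn as nn

class MaskModel(nn.Module):
    def __init__(self, unet):
        super().__init__()
        self.unet = unet  # 8-layer U-Net taking 2x the image channels

    def forward(self, measurement, prediction):
        # Concatenate measurement and prediction along the channel axis.
        z = torch.cat([measurement, prediction], dim=1)
        # A sigmoid keeps the mask values within [0, 1].
        return torch.sigmoid(self.unet(z))
```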
|
| 212 |
+
|
| 213 |
+
Experiments We consider L1, L2, SSIM, and LPIPS as our divergence measures. We set aside 1,000 samples from each validation set for calibration and use the remaining samples for evaluation. We demonstrate the flexibility of our approach by conducting experiments on 12 settings: (i) Image Completion: {Regressor, GAN} $\times \{ \mathrm{L}1,\mathrm{{LPIPS}}\}$ ; (ii) Super Resolution: {Regressor, GAN} $\times \{ \mathrm{L}1,\mathrm{{SSIM}}\}$ ; and (iii) Colorization: {Regressor, GAN} $\times \{ \mathrm{L}1,\mathrm{L}2\}$ .
|
| 214 |
+
|
| 215 |
+
Risk and Error Levels Recall that given a predicted image, our goal is to find a mask that, when applied to both the prediction and the (unknown) reference image, reduces the distortion between them to a predefined risk level $\alpha$ with probability at least $\delta$. Here we fix $\delta = {0.9}$ and set $\alpha$ to be the 0.1-quantile of each measure computed on a random sample from the validation set; i.e. roughly ${10}\%$ of the predictions are already considered sufficiently good and do not require masking at all.
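In code, this choice of $\alpha$ amounts to a one-liner, reusing the hypothetical `predictor` and divergence `d` from the calibration sketch above (`val_sample` is an assumed list of validation pairs):

```python
import numpy as np

# alpha = 0.1-quantile of the unmasked divergence on a random validation
# sample, so roughly 10% of predictions already meet the risk level unmasked.
alpha = float(np.quantile([d(predictor(x), y) for x, y in val_sample], 0.10))
```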
|
| 216 |
+
|
| 217 |
+
§ 5.3 COMPETING TECHNIQUES FOR COMPARISON
|
| 218 |
+
|
| 219 |
+
Quantile - Interval-Based Technique We compare our method to the quantile regression option presented in Angelopoulos et al. (2022b), denoted by Quantile. While their calibrated uncertainty intervals are markedly different from the expected distortion we consider, we can transform these intervals into a mask using (6). For completeness, we also report the performance of the quantile regression even when it is less suitable, i.e. when the underlying model is a GAN and when the divergence function is different from L1. We note again that, for the sake of a fair comparison, our implementation of the mask model uses exactly the same architecture as the quantile regressor.
|
| 220 |
+
|
| 221 |
+
Opt - Oracle We also compare our method with an oracle, denoted Opt, which, given a ground-truth image, computes an optimal mask by minimizing (10). We perform gradient descent using the Adam optimizer with a learning rate of 0.01, iterating until the divergence term decreases below the risk level $\alpha$. This procedure is applied to each test image individually, so no calibration is needed.
|
| 222 |
+
|
| 223 |
+
Comparison Metrics Given a mask $\mathcal{M}\left( X\right)$ we assess its performance using the following metrics: (i) the average mask size $s\left( {\mathcal{M}\left( X\right) }\right) \triangleq \frac{1}{N}\parallel \mathcal{M}\left( X\right) - \mathbb{1}{\parallel }_{1}$ ; (ii) the correlation $\operatorname{Corr}\left( {\mathcal{M},d}\right)$ between the mask size and the full (unmasked) divergence value; and (iii) the correlation $\operatorname{Corr}\left( {\mathcal{M},{\mathcal{M}}_{opt}}\right)$ between the size of the given mask and the size of the optimal mask ${\mathcal{M}}_{opt}$ obtained by Opt.
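A sketch of the three metrics, for masks stored as NumPy arrays in $[0,1]$:

```python
import numpy as np

def mask_size(mask):
    # s(M) = (1/N) * ||M - 1||_1: the fraction of the image masked out.
    return float(np.abs(mask - 1.0).mean())

def pearson(a, b):
    # Correlation between two lists of per-image scalars.
    return float(np.corrcoef(a, b)[0, 1])

# Over a test set: Corr(M, d) = pearson(mask sizes, unmasked divergences);
# Corr(M, M_opt) = pearson(mask sizes, sizes of the Opt oracle masks).
```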
|
| 224 |
+
|
| 225 |
+
§ 5.4 RESULTS AND DISCUSSION
|
| 226 |
+
|
| 227 |
+
We now show a series of results that demonstrate our proposed uncertainty masking approach and its comparison with Opt and Quantile ${}^{2}$ . We begin with a representative visual illustration of our proposed mask for several test cases in Figure 2. As can be seen, the produced masks indeed identify sub-regions of high uncertainty. In the image completion task, the bottom left corner is richer in detail, and thus there is high uncertainty regarding this part of the reconstructed image. In the colorization task, the mask highlights the colored area of the bus, which is the most unreliable region since it can be colorized with a large variety of colors. In the super resolution task, the mask marks regions of edges and text, while trustworthy parts such as smooth surfaces remain unmasked.
|
| 228 |
+
|
| 229 |
+
We present quantitative results in Table 1, showing that our method exhibits smaller mask sizes, well aligned with the masks obtained by Opt. In contrast, Quantile overestimates and produces larger masks, as expected. In terms of the correlation $\operatorname{Corr}\left( {\mathcal{M},d}\right)$, our method shows high agreement, while Quantile lags behind. This correlation indicates a much desired adaptivity of the estimated mask to the complexity of image content, and thus to the corresponding uncertainty. We provide a complementary illustration of the results in Figure 3 in the Appendix. As seen from the top row, all three methods meet the probabilistic guarantees regarding the divergence/loss with fewer than ${10}\%$ exceptions, as required. Naturally, Opt does not have outliers, since each mask is optimally calibrated by its computation. The spread of loss values tends to be higher with Quantile, indicating weaker performance. The middle and bottom rows are consistent with the results in Table 1, showing that our approach tends to produce masks that are close in size to those of Opt, while Quantile produces larger, and thus inferior, masked areas. We note that the colorization task seems to be more challenging, resulting in a marginal performance increase for our method compared to Quantile.
|
| 230 |
+
|
| 231 |
+
${}^{2}$ Due to space limitations, we show more extensive experimental results in the Appendix, while presenting a selected portion of them here.
|
| 232 |
+
|
| 233 |
+
< g r a p h i c s >
|
| 234 |
+
|
| 235 |
+
Figure 2: Examples of conformal prediction masks. The images from left to right are the measurement, ground-truth, model prediction, our calibrated mask trained with L1 loss and the ground-truth L1 error. Tasks are image completion (top), colorization (middle) and super resolution (bottom).
|
| 236 |
+
|
| 237 |
+
Table 1: Quantitative results. Arrows point in the better direction; best results are in blue.
|
| 238 |
+
|
| 239 |
+
| Network | Distance | $s\left( \mathcal{M}\right)$ (↓) Opt | $s\left( \mathcal{M}\right)$ (↓) Ours | $s\left( \mathcal{M}\right)$ (↓) Quantile | $\operatorname{Corr}\left( \mathcal{M},d\right)$ Ours | $\operatorname{Corr}\left( \mathcal{M},d\right)$ Quantile | $\operatorname{Corr}\left( \mathcal{M},{\mathcal{M}}_{opt}\right)$ Ours | $\operatorname{Corr}\left( \mathcal{M},{\mathcal{M}}_{opt}\right)$ Quantile |
|---|---|---|---|---|---|---|---|---|
| **Image Completion - Places365** | | | | | | | | |
| Regression | L1 | 0.09 | 0.10 | 0.15 | 0.89 | 0.78 | 0.89 | 0.76 |
| Regression | LPIPS | 0.01 | 0.01 | 0.20 | 0.54 | 0.51 | 0.89 | 0.77 |
| GAN | L1 | 0.09 | 0.09 | 0.14 | 0.95 | 0.85 | 0.94 | 0.80 |
| GAN | LPIPS | 0.01 | 0.01 | 0.08 | 0.31 | 0.24 | 0.50 | 0.23 |
| **Super Resolution - Rat Astrocyte Cells** | | | | | | | | |
| Regression | L1 | 0.24 | 0.26 | 0.28 | 0.99 | 0.54 | 0.95 | 0.88 |
| Regression | SSIM | 0.03 | 0.03 | 0.13 | 0.66 | 0.64 | 0.82 | 0.57 |
| GAN | L1 | 0.26 | 0.30 | 0.40 | 0.94 | 0.63 | 0.80 | 0.72 |
| GAN | SSIM | 0.03 | 0.03 | 0.13 | 0.79 | 0.63 | 0.83 | 0.63 |
| **Super Resolution - Places365** | | | | | | | | |
| Regression | L1 | 0.30 | 0.36 | 0.39 | 0.99 | 0.97 | 0.95 | 0.94 |
| Regression | SSIM | 0.10 | 0.23 | 0.48 | 0.89 | 0.85 | 0.94 | 0.84 |
| GAN | L1 | 0.37 | 0.38 | 0.47 | 0.97 | 0.81 | 0.95 | 0.67 |
| GAN | SSIM | 0.10 | 0.12 | 0.51 | 0.86 | 0.81 | 0.92 | 0.86 |
| **Colorization - Places365** | | | | | | | | |
| Regression | L1 | 0.27 | 0.37 | 0.40 | 0.68 | 0.43 | 0.57 | 0.46 |
| Regression | L2 | 0.18 | 0.37 | 0.38 | 0.57 | 0.30 | 0.60 | 0.48 |
| GAN | L1 | 0.27 | 0.38 | 0.40 | 0.58 | 0.40 | 0.60 | 0.52 |
| GAN | L2 | 0.18 | 0.36 | 0.38 | 0.42 | 0.28 | 0.59 | 0.49 |
|
| 307 |
+
|
| 308 |
+
§ 6 CONCLUSIONS
|
| 309 |
+
|
| 310 |
+
Uncertainty assessment in image-to-image regression problems is a challenging task, due to the implied complexity, the high dimensions involved, and the need to offer an effective and meaningful visualization of the estimated results. This work proposes a novel approach to these challenges by constructing a conformal mask that visually differentiates between trustworthy and uncertain regions in an estimated image. This mask provides a measure of uncertainty accompanied by a statistical guarantee, stating that, with high probability, the divergence between the original and the recovered images over the non-masked regions is below a desired risk level. The presented paradigm is flexible, being agnostic to both the choice of divergence measure and the regression method employed.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/LsEd-S3ofyW/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,109 @@
| 1 |
+
# STASIS: REINFORCEMENT LEARNING SIMULATORS FOR HUMAN-CENTRIC REAL-WORLD ENVIRONMENTS
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
We introduce Stasis, a suite of reinforcement learning (RL) environments that aim to maintain realism for human-centric agents operating in real-world environments. Through representation learning and alignment with real-world offline data, Stasis allows RL systems to be trained in offline environments with tunable characteristics, such as observability, heterogeneity and levels of missing data. The resulting RL agents are capable of maintaining a level of performance and robustness that is comparable to agents trained in real-world online environments, while avoiding the high cost and risk associated with making mistakes during online training. We provide examples of two environments that will be part of Stasis and discuss its implications for the deployment of RL-based systems in sensitive and high-risk sectors.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Reinforcement Learning (RL) is becoming increasingly popular for a variety of tasks, ranging from robotic control and autonomous driving to artificial intelligence in the gaming domain. Despite this potential, the lack of realistic simulators for RL agents operating in the real world is a major limitation for the development of RL agents. Current simulators lack the capability to model real-world applications of RL. Key missing components include heterogeneity within the environment, specifically within the reward function, and observability, as all real-world environments are inherently perceived as partially observable. Furthermore, these simulators often lack the ability to handle missing data, whether irregularly sampled data, observations missing at random (due to the design of the device), or observations missing not at random (due to outcomes). Lastly, they lack the ability to generate observed data: a simulator should be thought of as a generative model of the real world, from which we want to generate samples close to the observed data.
|
| 14 |
+
|
| 15 |
+
The lack of realistic simulators for RL agents hinders the development of agents that can be successfully deployed to real-world tasks. The high cost and risk associated with making mistakes during online training makes it an important problem to address. To this end, we introduce Stasis, a suite of RL environments that aim to maintain realism for human-centric agents operating in real-world environments. Through representation learning and alignment with real-world offline data, Stasis allows RL systems to be trained in offline environments with tunable characteristics, such as observability, heterogeneity and levels of missing data. The resulting RL agents are capable of maintaining a level of performance and robustness that is comparable to agents trained in real-world online environments, while avoiding the high cost and risk associated with making mistakes during online training.
|
| 16 |
+
|
| 17 |
+
Related Work. The most similar works to the one we present here are Gymnasium, formerly known as OpenAI Gym (Brockman et al., 2016), and Safety Gym (Ray et al., 2019). Both are suites of environments where RL agents can be trained without requiring real-world deployment. However, both place their emphasis on robotics and control, with Safety Gym making use of MuJoCo (Todorov et al., 2012) with a focus on constrained RL. The Stasis library introduced here focuses on open problems related to RL in healthcare, including partial observability, heterogeneity, and missing data, and makes use of labelled real-world data through offline RL (Levine et al., 2020).
|
| 18 |
+
|
| 19 |
+
## 2 UNDERLYING FRAMEWORK & CONSIDERATIONS
|
| 20 |
+
|
| 21 |
+
On-going efforts to build simulated environments for benchmarking conventional and emerging RL algorithms center on emulating the realism and practicality of real world settings. Evaluating algorithms in this fashion affords practitioners the ability to rigorously examine the suitability of an algorithm before initial deployment in the real world. In this paper, we identify four core themes that are important for representing human-centric real-world settings. Specifically, settings where the decision-making policy directly interacts with a human, or provides actions for a human to execute within their own environment.
|
| 22 |
+
|
| 23 |
+
Observability. Observability determines the full-range of information from the environment available to the agent for decision-making. In real-world settings, environments are typically partially observable; the agent only has access to a limited view of the current state of the environment (Littman, 2009). This can make it difficult to learn an optimal policy, as the agent may be missing important information or have to rely on incomplete observations to determine its actions. Therefore, a well-designed observability mechanism that captures the relevant information is critical for learning a good policy. However, increasing observability can also lead to higher computational and memory requirements, making it important to strike a balance between having enough information to make informed decisions and keeping the complexity manageable.
|
| 24 |
+
|
| 25 |
+
Heterogeneity. In real-world settings, the reward signal may vary between agents operating within a single environment. As such, learning a single policy that aims to optimize the reward for all agents is often difficult, leading to sub-optimal performance for select agents (Chen et al., 2022; Jin et al., 2022). Generally, this can result in a situation where some agents learn different, unintended behaviors. In multi-agent systems, this can lead to a lack of coordination, potentially hampering the functioning of the overall system. In applications such as healthcare, where data from heterogeneous subjects are often used to make decisions for a single subject, failing to account for heterogeneity in the reward signal can lead to an alignment problem, severely impacting the relevancy of the learned policy. Mitigating these challenges may require algorithms to leverage techniques from areas of research such as multi-agent reinforcement learning, or to directly modify the reward functions to account for the heterogeneity between agents.
|
| 26 |
+
|
| 27 |
+
Missing Data. The effectiveness of a policy in reinforcement learning is closely tied to the quality and quantity of data used to train the model. If the agent encounters missing data, such as incomplete or unavailable state or reward information, it may be unable to accurately estimate the value of different actions, leading to suboptimal decisions (Awan et al., 2022; Lizotte et al., 2008; Shortreed et al., 2011). Missing data can arise from irregular sampling, where data is missing at random, which can occur due to various factors such as technical failures or data collection constraints. Additionally, data may be missing not at random, such as when specific actions or states are more likely to be absent. In healthcare applications, this can be due to phenomena such as self-selection bias, where participants in the study exercise control over whether or not they participate in the study or how much data they provide. Therefore, it is essential to consider the consequences of missing data and to address it using techniques such as imputation, data augmentation, or other advanced methods for handling missing data in reinforcement learning (Shortreed et al., 2011; Awan et al., 2022).
|
| 28 |
+
|
| 29 |
+
Offline Data. Previously collected experiential data from agents interacting within a given environment can be used to enhance the robustness and reliability of the simulated environment. We envision that offline data can be used to improve the following aspects of the simulated environment: (1) state representation - offline data can be used to provide more accurate state representations for the agent, including information about the environment, objects, and other agents (Zang et al., 2022); (2) model dynamics - the interactions between objects and agents in the environment can be modeled more accurately using offline data, allowing for a more realistic representation of the environment's dynamics (Kidambi et al., 2020). Lastly, in most real-world environments, decision-making policies are rarely trained from scratch; rather, offline data is commonly used to learn policies that achieve an acceptable level of performance (Levine et al., 2020). As such, incorporating available offline data into simulated environments allows for a pre-training phase, where policies are first initialized using offline data before being deployed within the environment.
|
| 30 |
+
|
| 31 |
+
## 3 THE SIMULATOR
|
| 32 |
+
|
| 33 |
+
The structure of the simulator is similar to the structure of the Gymnasium API (Brockman et al., 2016). To initialize an environment you pull it from the library's collection and then proceed to train your agent using the simulated environment. Each environment has the same structure regarding methods, in order for the users to be able to switch among environments and train on different scenarios with ease.
|
| 34 |
+
|
| 35 |
+
The difference from the Gymnasium API is that the environments also share parameters related to problems found in real-world applications, in order to make the environments more realistic and thus the agents more robust to real-world data. When initializing an environment, you will be able to specify the complexity of the problem, but also some parameters that are important in a healthcare context (Awrahman et al., 2022) and which are problematic in the collection and curation of healthcare data (Pezoulas et al., 2019). The shared parameters, where possible for an environment, will allow tuning aspects such as the heterogeneity of the simulation (Angus & Chang, 2021), incorporating missing data or partial observability, and adding stochasticity or noise to the simulation. This can look different for every environment, but the purpose of the parameters is shared.
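As an illustration of the intended interface, a hypothetical initialization might look as follows; the package name `stasis` and every keyword argument below are assumptions made for exposition, not a published API:

```python
import stasis  # hypothetical package name

env = stasis.make(
    "HealthyTravelingSalesman-v0",
    n_locations=8,             # complexity: number of coordinates to visit
    partial_observability=True,
    heterogeneity=0.5,         # strength of per-user constraint variation
    missing_data_rate=0.2,     # fraction of observations dropped
    noise_std=0.1,             # stochasticity added to observations
)
# Gymnasium-style interaction loop.
obs, info = env.reset(seed=0)
obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
```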
|
| 36 |
+
|
| 37 |
+
### 3.1 HEALTHY TRAVELING SALESMAN
|
| 38 |
+
|
| 39 |
+
The first environment, which is already implemented, is a simulated weighted travelling salesman problem (Lu et al., 2020). The environment is initialized by providing a set of coordinates, each of which has to be visited once, and an exposure type (e.g. greenspace or bars) that has to be maximized or minimized. The environment then collects information on the possible routes you can take to visit each coordinate using the OpenRouteService API (Neis & Zipf, 2008), and on the exposures around the locations of interest using the Overpass API (Olbricht, 2015), both of which use data collected from OpenStreetMap, or OSM for short (OpenStreetMap contributors, 2017). In the current implementation, one of the provided coordinates is the starting location, the rest are visited, and the agent then needs to return to the starting point. The task is thus finding an optimal cycle in a graph, and the complexity of the problem is increased simply by adding more coordinates.
|
| 40 |
+
|
| 41 |
+
The reward for each action is a weighted average between the distance covered and the time spent at the exposure along each route; the weighting is also tunable at input, depending on what the agent should focus on optimizing. The possible exposure information is limited only by the types of locations you can collect from OSM. As for the parameters mentioned before for making the simulations more realistic, these are still a work in progress, but possible concepts discussed include modifying heterogeneity by adding different constraints on the possible actions of different simulated users. Some users have trouble moving large distances or want to avoid certain trigger areas, which can be encompassed in the reward function. In terms of missing data and stochasticity, we can add parameters that tune the amount of information and noise we see in the possible routes. These can also be modeled in a way that matches what we see from GPS-collecting devices like smartphones and smartwatches, where data are not missing at random; rather, there are certain time periods during which the smart devices continuously fail to collect data (Barnett & Onnela, 2018).
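A minimal sketch of such a weighted reward; the sign convention and both argument names are illustrative assumptions:

```python
def route_reward(distance_km, exposure_minutes, w=0.5):
    # Weighted trade-off between route length and exposure time: shorter
    # routes and more time near the target exposure are both rewarded.
    return w * (-distance_km) + (1.0 - w) * exposure_minutes
```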
|
| 42 |
+
|
| 43 |
+
Figures 1 and 2 show example renderings of the environment, where the green areas are greenspace locations (Novack et al., 2018) collected via the Overpass API, the blue arrows are the coordinates of the locations to be visited, and the red home arrow indicates the coordinates of the starting and ending location. This is the visualization after an episode has been run using a Deep Q-Network (DQN) agent (Mnih et al., 2013) trained on maximizing greenspace exposure on a set of 8 coordinates in Boston, MA (Figure 1) and a set of 6 coordinates in the Bronx, NY (Figure 2).
|
| 44 |
+
|
| 45 |
+
A big advantage of this environment is that all the information in the simulation is collected from the real world. Researchers who want to use this environment and possess offline GPS data can also incorporate it to enrich the information in the environment and make it even more realistic (Gur et al., 2022). Using GPS trajectories, we can collect information on areas that people want to avoid, areas with more traffic, or areas which can be traversed faster, and encompass this information in the reward function of the environment. The GPS data can also be used to gain information about people's
|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
|
| 49 |
+
home and work locations or locations they like to visit frequently, thus making the environment adjust to a specific person's patterns and visit locations.
|
| 50 |
+
|
| 51 |
+
### 3.2 RESOURCE ALLOCATION IN CLINICAL SETTINGS
|
| 52 |
+
|
| 53 |
+
The second environment in the Stasis library demonstrates a common problem found in clinical settings: dynamic resource allocation. Allocating resources efficiently requires decision-making on a case-by-case basis, considering both the resources available and the individual conditions of multiple patients, which are not necessarily a function of the resources themselves.
|
| 54 |
+
|
| 55 |
+
This environment's properties can be highly complex due to the sophistication of modern clinical settings. However, for its first iteration, it will be limited to a general setting. The action space will involve selecting resources from a given set, where each resource allocation carries a cost and takes up the resource for a certain period of time. We will not know the exact cost or duration, as these vary depending on the resource in question. For example, it takes less time to record blood pressure than to perform a blood test, as some people need extra preparation time for invasive procedures. The state space will include the set of available resources, their characteristics, the occupancy of the clinical section, the time and date, other features that help forecast future occupancy, and relevant patient features, outcomes of used resources, and further diagnosis.
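As a sketch of how such a state could be represented (all field names are illustrative assumptions, not the library's schema):

```python
from dataclasses import dataclass, field

@dataclass
class ClinicState:
    available_resources: list     # resources currently free
    resource_features: dict       # per-resource characteristics
    occupancy: float              # occupancy of the clinical section
    timestamp: str                # time and date, for occupancy forecasting
    patient_features: dict        # relevant per-patient covariates
    past_outcomes: list = field(default_factory=list)  # outcomes of used resources

# An action selects one entry of `available_resources`; its cost and busy
# duration are only revealed after the allocation, as described above.
```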
|
| 56 |
+
|
| 57 |
+
The main goal of this framework is to understand the relationship between resources and patient outcomes. Domain expertise can be incorporated into the simulator by adjusting the outcomes from different resources and by using imitation learning or feedback. This allows the agent to be grounded in best practices, while giving it the opportunity to explore different strategies, in the safe, nondestructive environment of the simulator.
|
| 58 |
+
|
| 59 |
+
## 4 FUTURE DIRECTIONS
|
| 60 |
+
|
| 61 |
+
As the initiative grows, it will be important to focus on community building. This can be accomplished by creating a leaderboard, hosting workshops, and adding existing standalone environments. Such efforts will help to bring more researchers and practitioners to the platform, creating a strong community. Another focus of the library will be taking advantage of existing data to build more realistic environments. By leveraging existing offline data, the library could potentially use algorithms such as pretraining or initialization phases to further refine the environment and help it to behave in the most realistic way possible. Finally, there should be an active goal to make the environments relevant and useful in a medical or clinical context. To do this, researchers and developers will seek to collaborate with medical professionals to ensure the simulators are based on real world observations and are as accurate as possible. By doing so, Stasis can become a valuable tool for medical professionals.
|
| 62 |
+
|
| 63 |
+
## REFERENCES
|
| 64 |
+
|
| 65 |
+
Derek C. Angus and Chung-Chou H. Chang. Heterogeneity of Treatment Effect. JAMA, 326(22):2312, 12 2021. doi: 10.1001/jama.2021.20552. URL http://dx.doi.org/10.1001/jama.2021.20552.
|
| 66 |
+
|
| 67 |
+
Saqib Ejaz Awan, Mohammed Bennamoun, Ferdous Sohel, Frank Sanfilippo, and Girish Dwivedi. A reinforcement learning-based approach for imputing missing data. Neural Computing and Applications, 34(12):9701-9716, 6 2022. ISSN 14333058. doi: 10.1007/S00521-022-06958-3/TABLES/13. URL https://link.springer.com/article/10.1007/s00521-022-06958-3.
|
| 68 |
+
|
| 69 |
+
Banan Jamil Awrahman, Chia Aziz Fatah, and Mzhda Yasin Hamaamin. A Review of the Role and Challenges of Big Data in Healthcare Informatics and Analytics. Computational Intelligence and Neuroscience, 2022:1-10, 9 2022. doi: 10.1155/2022/5317760. URL http://dx.doi.org/10.1155/2022/5317760.
|
| 70 |
+
|
| 71 |
+
Ian Barnett and Jukka-Pekka Onnela. Inferring mobility measures from GPS traces with missing data. Biostatistics, 21(2):e98-e112, 10 2018. doi: 10.1093/biostatistics/kxy059. URL http://dx.doi.org/10.1093/biostatistics/kxy059.
|
| 72 |
+
|
| 73 |
+
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv: Learning, 6 2016. URL https://www.arxiv.org/pdf/1606.01540.
|
| 74 |
+
|
| 75 |
+
Elynn Y. Chen, Rui Song, and Michael I. Jordan. Reinforcement Learning with Heterogeneous Data: Estimation and Inference. 1 2022. doi: 10.48550/arxiv.2202.00088. URL https://arxiv.org/abs/2202.00088v1
|
| 76 |
+
|
| 77 |
+
Izzeddin Gur, Ofir Nachum, and Aleksandra Faust. Targeted environment design from offline data, 2022. URL https://openreview.net/forum?id=Is5Hpwq2R-h.
|
| 78 |
+
|
| 79 |
+
Hao Jin, Yang Peng, Wenhao Yang, Shusen Wang, and Zhihua Zhang. Federated Reinforcement Learning with Environment Heterogeneity, 5 2022. ISSN 2640-3498. URL https://proceedings.mlr.press/v151/jin22a.html.
|
| 80 |
+
|
| 81 |
+
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. MOReL: Model-Based Offline Reinforcement Learning. Advances in Neural Information Processing Systems, 2020-Decem, 5 2020. ISSN 10495258. doi: 10.48550/arxiv.2005.05951. URL https://arxiv.org/abs/2005.05951v3.
|
| 82 |
+
|
| 83 |
+
Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
|
| 84 |
+
|
| 85 |
+
Michael L. Littman. A tutorial on partially observable Markov decision processes. Journal of Mathematical Psychology, 53(3):119-125, 6 2009. ISSN 0022-2496. doi: 10.1016/J.JMP.2009. 01.005 .
|
| 86 |
+
|
| 87 |
+
Daniel J. Lizotte, Lacey Gunter, Eric B. Laber, and Susan A. Murphy. Missing data and uncertainty in batch reinforcement learning. 2008.
|
| 88 |
+
|
| 89 |
+
Hao Lu, Xingwen Zhang, and Shuang Yang. A Learning-based Iterative Method for Solving Vehicle Routing Problems. International Conference on Learning Representations, 4 2020. URL https://www.openreview.net/pdf?id=BJe1334YDH.
|
| 90 |
+
|
| 91 |
+
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. Playing Atari with Deep Reinforcement Learning. arXiv: Learning, 12 2013. URL http://cs.nyu.edu/~koray/publis/mnih-atari-2013.pdf.
|
| 92 |
+
|
| 93 |
+
P Neis and A Zipf. OpenRouteService.org - Combining Open Standards and Open Geodata. The State of the Map. 2nd Open Street Maps Conference, Limerik. Ireland., 2008.
|
| 94 |
+
|
| 95 |
+
Tessio Novack, Zhiyong Wang, and Alexander Zipf. A System for Generating Customized Pleasant Pedestrian Routes Based on OpenStreetMap Data. Sensors, 18(11):3794, 11 2018. doi: 10.3390/ s18113794. URL http://dx.doi.org/10.3390/s18113794.
|
| 96 |
+
|
| 97 |
+
Roland M. Olbricht. Data Retrieval for Small Spatial Regions in OpenStreetMap. Lecture Notes in Geoinformation and Cartography, pp. 101-122, 2015. doi: 10.1007/978-3-319-14280-7_6. URL http://dx.doi.org/10.1007/978-3-319-14280-7_6.
|
| 98 |
+
|
| 99 |
+
OpenStreetMap contributors. Planet dump retrieved from https://planet.osm.org. https://www.openstreetmap.org, 2017.
|
| 100 |
+
|
| 101 |
+
Vasileios C. Pezoulas, Konstantina D. Kourou, Fanis Kalatzis, Themis P. Exarchos, Aliki Venetsanopoulou, Evi Zampeli, Saviana Gandolfo, Fotini Skopouli, Salvatore De Vita, Athanasios G. Tzioufas, and Dimitrios I. Fotiadis. Medical data quality assessment: On the development of an automated framework for medical data curation. Computers in Biology and Medicine, 107:270-283, 4 2019. doi: 10.1016/j.compbiomed.2019.03.001. URL http://dx.doi.org/10.1016/j.compbiomed.2019.03.001.
|
| 102 |
+
|
| 103 |
+
Alex Ray, Joshua Achiam, and Dario Amodei. Benchmarking safe exploration in deep reinforcement learning. arXiv preprint arXiv:1910.01708, 7(1):2, 2019.
|
| 104 |
+
|
| 105 |
+
Susan M. Shortreed, Eric Laber, Daniel J. Lizotte, T. Scott Stroup, Joelle Pineau, and Susan A. Murphy. Informing sequential clinical decision-making through reinforcement learning: an empirical study. Machine Learning, 84:109-136, 2011. doi: 10.1007/s10994-010-5229-0.
|
| 106 |
+
|
| 107 |
+
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In 2012 IEEE/RSJ international conference on intelligent robots and systems, pp. 5026-5033. IEEE, 2012.
|
| 108 |
+
|
| 109 |
+
Hongyu Zang, Xin Li, Jie Yu, Chen Liu, Riashat Islam, Remi Tachet Des Combes, and Romain Laroche. Behavior Prior Representation learning for Offline Reinforcement Learning. 11 2022. doi: 10.48550/arxiv.2211.00863. URL https://arxiv.org/abs/2211.00863v2.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/LsEd-S3ofyW/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,61 @@
| 1 |
+
§ STASIS: REINFORCEMENT LEARNING SIMULATORS FOR HUMAN-CENTRIC REAL-WORLD ENVIRONMENTS
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
We introduce Stasis, a suite of reinforcement learning (RL) environments that aim to maintain realism for human-centric agents operating in real-world environments. Through representation learning and alignment with real-world offline data, Stasis allows RL systems to be trained in offline environments with tunable characteristics, such as observability, heterogeneity and levels of missing data. The resulting RL agents are capable of maintaining a level of performance and robustness that is comparable to agents trained in real-world online environments, while avoiding the high cost and risk associated with making mistakes during online training. We provide examples of two environments that will be part of Stasis and discuss its implications for the deployment of RL-based systems in sensitive and high-risk sectors.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Reinforcement Learning (RL) is becoming increasingly popular for a variety of tasks, ranging from robotic control and autonomous driving to artificial intelligence in the gaming domain. Despite this potential, the lack of realistic simulators for RL agents operating in the real world is a major limitation for the development of RL agents. Current simulators lack the capability to model real-world applications of RL. Key missing components include heterogeneity within the environment, specifically within the reward function, and observability, as all real-world environments are inherently perceived as partially observable. Furthermore, these simulators often lack the ability to handle missing data, whether irregularly sampled data, observations missing at random (due to the design of the device), or observations missing not at random (due to outcomes). Lastly, they lack the ability to generate observed data: a simulator should be thought of as a generative model of the real world, from which we want to generate samples close to the observed data.
|
| 14 |
+
|
| 15 |
+
The lack of realistic simulators for RL agents hinders the development of agents that can be successfully deployed to real-world tasks. The high cost and risk associated with making mistakes during online training makes it an important problem to address. To this end, we introduce Stasis, a suite of RL environments that aim to maintain realism for human-centric agents operating in real-world environments. Through representation learning and alignment with real-world offline data, Stasis allows RL systems to be trained in offline environments with tunable characteristics, such as observability, heterogeneity and levels of missing data. The resulting RL agents are capable of maintaining a level of performance and robustness that is comparable to agents trained in real-world online environments, while avoiding the high cost and risk associated with making mistakes during online training.
|
| 16 |
+
|
| 17 |
+
Related Work. The most similar works to the one we present here are Gymnasium, formerly known as OpenAI Gym (Brockman et al., 2016), and Safety Gym (Ray et al., 2019). Both are suites of environments where RL agents can be trained without requiring real-world deployment. However, both place their emphasis on robotics and control, with Safety Gym making use of MuJoCo (Todorov et al., 2012) with a focus on constrained RL. The Stasis library introduced here focuses on open problems related to RL in healthcare, including partial observability, heterogeneity, and missing data, and makes use of labelled real-world data through offline RL (Levine et al., 2020).
|
| 18 |
+
|
| 19 |
+
§ 2 UNDERLYING FRAMEWORK & CONSIDERATIONS
|
| 20 |
+
|
| 21 |
+
On-going efforts to build simulated environments for benchmarking conventional and emerging RL algorithms center on emulating the realism and practicality of real world settings. Evaluating algorithms in this fashion affords practitioners the ability to rigorously examine the suitability of an algorithm before initial deployment in the real world. In this paper, we identify four core themes that are important for representing human-centric real-world settings. Specifically, settings where the decision-making policy directly interacts with a human, or provides actions for a human to execute within their own environment.
|
| 22 |
+
|
| 23 |
+
Observability. Observability determines the full-range of information from the environment available to the agent for decision-making. In real-world settings, environments are typically partially observable; the agent only has access to a limited view of the current state of the environment (Littman, 2009). This can make it difficult to learn an optimal policy, as the agent may be missing important information or have to rely on incomplete observations to determine its actions. Therefore, a well-designed observability mechanism that captures the relevant information is critical for learning a good policy. However, increasing observability can also lead to higher computational and memory requirements, making it important to strike a balance between having enough information to make informed decisions and keeping the complexity manageable.
|
| 24 |
+
|
| 25 |
+
Heterogeneity. In real-world settings, the reward signal may vary between agents operating within a single environment. As such, learning a single policy that aims to optimize the reward for all agents is often difficult, leading to sub-optimal performance for select agents (Chen et al., 2022; Jin et al., 2022). Generally, this can result in a situation where some agents learn different, unintended behaviors. In multi-agent systems, this can lead to a lack of coordination, potentially hampering the functioning of the overall system. In applications such as healthcare where data from heterogeneous subjects are often used to make decisions for single subject, failing to account for heterogeneity in the reward signal can lead to an alignment problem, severely impacting the relevancy of the learned policy. Mitigating these challenges may require algorithms to leverage techniques from areas of research such as multi-agent reinforcement learning, or to directly modify the reward functions to account for the heterogeneity between agents.
|
| 26 |
+
|
| 27 |
+
Missing Data. The effectiveness of a policy in reinforcement learning is closely tied to the quality and quantity of data used to train the model. If the agent encounters missing data, such as incomplete or unavailable state or reward information, it may be unable to accurately estimate the value of different actions, leading to suboptimal decisions (Awan et al., 2022; Lizotte et al., 2008; Shortreed et al., 2011). Missing data can happen due to irregular sampling, where data is missing at random, which can occur due to various factors such as technical failures or data collection constraints. Additionally, data may be missing not at random, such as when specific actions or states are more likely to be absent. In healthcare application, this can be due to phenomena such as self-selection bias, where participants in the study exercise control over whether or not they participate in the study or how much data they provide. Therefore, it is essential to consider the consequences of missing data and address it using techniques such as imputation, data augmentation, or other advanced methods for handling missing data in reinforcement learning (Shortreed et al., 2011; Awan et al., 2022).
|
| 28 |
+
|
| 29 |
+
Offline Data. Previously collected experiential data from agents interacting within a given environment can be used to enhance the robustness and reliability of the simulated environment. We envision that offline data can be used to improve the following aspects of the simulated environment: (1) state representation - offline data can be used to provide more accurate state representations for the agent, including information about the environment, objects, and other agents (Zang et al., 2022); (2) model dynamics - the interactions between objects and agents in the environment can be modeled more accurately using offline data, allowing for a more realistic representation of the environment's dynamics Kidambi et al. (2020). Lastly, in most real-world environment, decision-making policies are rarely trained from scratch, rather offline data is commonly used to learn policies that achieve an acceptable level of performance (Levine et al., 2020). As such, incorporating available offline data into simulated environments allows for a pre-training phase, where policies are first initialized using offline data before being deployed within the environment.
|
| 30 |
+
|
| 31 |
+
§ 3 THE SIMULATOR
|
| 32 |
+
|
| 33 |
+
The structure of the simulator is similar to the structure of the Gymnasium API (Brockman et al., 2016). To initialize an environment you pull it from the library's collection and then proceed to train your agent using the simulated environment. Each environment has the same structure regarding methods, in order for the users to be able to switch among environments and train on different scenarios with ease.
|
| 34 |
+
|
| 35 |
+
The difference to the Gymnasium API is that the environments also share parameters related to problems you will find in real world applications, in order to make the environments more realistic and thus the agents more robust to real world data. When initializing an environment, you will be able to specify the complexity of the problem, but also some parameters that are important in a healthcare context (Awrahman et al., 2022) and which are problematic in the collection and curation of healthcare data (Pezoulas et al., 2019). The shared parameters, when it is possible for an environment, will be able to tune aspects such as the heterogeneity of the simulation (Angus & Chang, 2021), incorporate missing data or have partial observability and add stochasticity or noise to the simulation. This can look different for every environment, but the purpose of the parameters is shared.
|
| 36 |
+
|
| 37 |
+
§ 3.1 HEALTHY TRAVELING SALESMAN
|
| 38 |
+
|
| 39 |
+
The first environment, which is already implemented, is a simulated weighted travelling salesman problem (Lu et al., 2020). The environment is initialized by providing a set of coordinates, each of which has to be visited once, and an exposure type (e.g. greenspace or bars) that has to be maximized or minimized. The environment then collects information on the possible routes you can take to visit each coordinate using the OpenRouteService API (Neis & Zipf, 2008), and on the exposures around the locations of interest using the Overpass API (Olbricht, 2015), both of which use data collected from OpenStreetMap, or OSM for short (OpenStreetMap contributors, 2017). In the current implementation, one of the provided coordinates is the starting location, the rest are visited, and the agent then needs to return to the starting point. The task is thus finding an optimal cycle in a graph, and the complexity of the problem is increased simply by adding more coordinates.
|
| 40 |
+
|
| 41 |
+
The reward for each action is a weighted average between the distance covered and the time spent at the exposure along each route; the weighting is also tunable at input, depending on what the agent should focus on optimizing. The possible exposure information is limited only by the types of locations you can collect from OSM. As for the parameters mentioned before for making the simulations more realistic, these are still a work in progress, but possible concepts discussed include modifying heterogeneity by adding different constraints on the possible actions of different simulated users. Some users have trouble moving large distances or want to avoid certain trigger areas, which can be encompassed in the reward function. In terms of missing data and stochasticity, we can add parameters that tune the amount of information and noise we see in the possible routes. These can also be modeled in a way that matches what we see from GPS-collecting devices like smartphones and smartwatches, where data are not missing at random; rather, there are certain time periods during which the smart devices continuously fail to collect data (Barnett & Onnela, 2018).
|
| 42 |
+
|
| 43 |
+
The following figures 1 and 2 are examples of rendering of the environment, where the green areas are greenspace locations (Novack et al., 2018) collected by the Overpass API, the blue arrows are the coordinates of the locations to be visited and the red home arrow indicates the coordinates of the starting and ending location. This is the visualization after an episode has been run using a Deep-QN agent (Mnih et al., 2013) trained on maximizing greenspace exposure on a set of 8 coordinates in Boston, MA (Figure 1) and a set of 6 coordinates in Bronx, NY (Figure 2).
|
| 44 |
+
|
| 45 |
+
A big advantage of this environment is that all the information in the simulation is collected from the real world. Researchers who want to use this environment and possess offline GPS data can also incorporate it to enrich the information in the environment and make it even more realistic (Gur et al., 2022). Using GPS trajectories, we can collect information on areas that people want to avoid, areas with more traffic, or areas which can be traversed faster, and encompass this information in the reward function of the environment. The GPS data can also be used to gain information about people's
|
| 46 |
+
|
| 47 |
+
< g r a p h i c s >
|
| 48 |
+
|
| 49 |
+
home and work locations or locations they like to visit frequently, thus making the environment adjust to a specific person's patterns and visit locations.
|
| 50 |
+
|
| 51 |
+
§ 3.2 RESOURCE ALLOCATION IN CLINICAL SETTINGS
|
| 52 |
+
|
| 53 |
+
The second environment in the Stasis library demonstrates a common problem found in clinical settings: dynamic resource allocation. Allocating resources efficiently requires decision-making on a case-by-case basis, considering both the resources available and the individual conditions of multiple patients, which are not necessarily a function of the resources themselves.
|
| 54 |
+
|
| 55 |
+
This environment's properties can be highly complex due to the sophistication of modern clinical settings. However, for its first iteration, it will be limited to a general setting. The action space will involve selecting resources from a given set, where each resource allocation carries a cost and takes up the resource for a certain period of time. We will not know the exact cost or duration, as these vary depending on the resource in question. For example, it takes less time to record blood pressure than to perform a blood test, as some people need extra preparation time for invasive procedures. The state space will include the set of available resources, their characteristics, the occupancy of the clinical section, the time and date, other features that help forecast future occupancy, and relevant patient features, outcomes of used resources, and further diagnosis.
|
| 56 |
+
|
| 57 |
+
The main goal of this framework is to understand the relationship between resources and patient outcomes. Domain expertise can be incorporated into the simulator by adjusting the outcomes from different resources and by using imitation learning or feedback. This allows the agent to be grounded in best practices, while giving it the opportunity to explore different strategies, in the safe, nondestructive environment of the simulator.
|
| 58 |
+
|
| 59 |
+
§ 4 FUTURE DIRECTIONS
|
| 60 |
+
|
| 61 |
+
As the initiative grows, it will be important to focus on community building. This can be accomplished by creating a leaderboard, hosting workshops, and adding existing standalone environments. Such efforts will help to bring more researchers and practitioners to the platform, creating a strong community. Another focus of the library will be taking advantage of existing data to build more realistic environments. By leveraging existing offline data, the library could potentially use algorithms such as pretraining or initialization phases to further refine the environment and help it to behave in the most realistic way possible. Finally, there should be an active goal to make the environments relevant and useful in a medical or clinical context. To do this, researchers and developers will seek to collaborate with medical professionals to ensure the simulators are based on real world observations and are as accurate as possible. By doing so, Stasis can become a valuable tool for medical professionals.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/SG3ztVYDubA/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,547 @@
| 1 |
+
# EXPLAINING MULTICLASS CLASSIFIERS WITH CATEGORICAL VALUES: A CASE STUDY IN RADIOGRAPHY
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
Explainability of machine learning methods is of fundamental importance in healthcare to calibrate trust. A large branch of explainable machine learning uses tools linked to the Shapley value, which have nonetheless been found difficult to interpret and potentially misleading. Taking multi-class classification as a reference task, we argue that a critical issue in these methods is that they disregard the structure of the model's output. We develop the Categorical Shapley value as a theoretically-grounded method to explain the output of multi-class classifiers, in terms of transition (or flipping) probabilities across classes. We demonstrate the method on a case study composed of three example scenarios on pneumonia detection and subtyping using radiography images.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Machine learning (ML) has emerged as a powerful tool in healthcare, with the potential to revolutionize the way we diagnose, treat, and prevent diseases. ML algorithms have a wide range of applications, including early detection of diseases, prediction of a patient's risk of developing certain conditions, optimisation of treatment plans, improved prognosis, assistance in clinical decision-making, gene expression analysis, genomic classification, improved personalized patient care, and more. However, the adoption of ML in clinical practice has often been hampered by the opaqueness of the ML models. This opaqueness may make clinicians and other end-users, such as patients or care-givers, skeptical of trusting a model's recommendations without understanding the reasoning behind its predictions, which delays and/or decreases the adoption of state-of-the-art technologies and hinders further advances.
|
| 14 |
+
|
| 15 |
+
Various methods have been proposed in the literature to enhance the explainability of ML models (XAI). Among these, (local) feature attribution methods such as SHAP (Lundberg & Lee, 2017) or its variants (e.g. Frye et al., 2020; Chen et al., 2018; Heskes et al., 2020) have gained considerable traction. In fact, Shapley-value-based explanations are the most popular explainability method according to a recent study by Bhatt et al. (2020). These methods, supported by a number of axioms (properties) such as nullity, linearity and efficiency, provide insight into the contribution of each feature toward the model's decision. There is, however, growing scrutiny of the utility of these techniques, which have been judged to be unintuitive and potentially misleading (Kumar et al., 2020; Mittelstadt et al., 2019), and which do not support contrastive statements (Miller, 2019). While part of these issues may be rooted in misinterpretations of the technical tools involved ${}^{1}$ , in this paper we argue that a critical flaw in current approaches in the area is a failure to capture relevant structure of the object one wishes to explain (the explicandum). In contrast, we take the position that attributive explanations should comply with the nature of the explicandum: in particular, if the model output is a random variable (RV), we should represent marginal contributions as RVs as well. Our contribution, which we dub the Categorical Shapley value, can fully support statements such as "the probability that the feature ${x}_{i}$ caused $x$ to be classified as a viral pneumonia rather than a healthy lung is $y$ ", which we develop, experiment with, and discuss in this paper within the context of radiography.
|
| 16 |
+
|
| 17 |
+
---
|
| 18 |
+
|
| 19 |
+
${}^{1}$ For instance, the Shapley value is a descriptive rather than prescriptive tool. This means that, in general, one should not expect that changing the feature with the highest Shapley value should lead to the largest change in the outcome.
|
| 20 |
+
|
| 21 |
+
---
|
| 22 |
+
|
| 23 |
+
### 1.1 The Shapley Value and its Application to Explain Multiclass Classifiers
|
| 24 |
+
|
| 25 |
+
For concreteness, we focus here on multi-class classification ( $d$ classes) as one of the most common tasks in ML. Let $f : \mathcal{X} \subseteq {\mathbb{R}}^{n} \mapsto \mathcal{Y}$ be a (trained) multi-class classifier and $x \in \mathcal{X}$ an input point. One common strategy to explain the behaviour of the model at $x$ is to attribute an importance score to each input feature through the computation of the Shapley value (SV) (Shapley, 1953a). In order to do so, one must first construct a cooperative game $v$ where players correspond to features and coalitions correspond to features being used: that is, $v\left( S\right) = f\left( {x}_{\mid S}\right)$ , where $S \in {2}^{\left\lbrack n\right\rbrack }$ . ${}^{2}$ Then, for each $i \in \left\lbrack n\right\rbrack$ , the Shapley value is given by
|
| 26 |
+
|
| 27 |
+
$$
|
| 28 |
+
{\psi }_{i}\left( v\right) = \mathop{\sum }\limits_{{S \in {2}^{\left\lbrack n\right\rbrack \smallsetminus i}}}p\left( S\right) \left\lbrack {v\left( {S \cup i}\right) - v\left( S\right) }\right\rbrack = {\mathbb{E}}_{S \sim p\left( S\right) }\left\lbrack {v\left( {S \cup i}\right) - v\left( S\right) }\right\rbrack ; \tag{1}
|
| 29 |
+
$$
|
| 30 |
+
|
| 31 |
+
where $p\left( S\right) = \frac{1}{n}{\left( \begin{matrix} n - 1 \\ \left| S\right| \end{matrix}\right) }^{-1}$ if $i \notin S$ and 0 otherwise. The quantity $v\left( {S \cup i}\right) - v\left( S\right)$ is called marginal contribution of $i$ to coalition $S$ . See Roth (1988) for an in depth discussion of the SV and surrounding topics.
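For a small number of features, Eq. (1) can be evaluated exactly by enumerating coalitions. The sketch below is illustrative only (it is not the authors' implementation) and uses a toy additive game in place of a trained classifier; for real models, the expectation over $S$ is typically approximated by sampling.

```python
# Minimal sketch of Eq. (1): exact Shapley values by enumerating all coalitions
# S ⊆ [n] \ {i}. Exponential in n, so only usable for illustration on small n.
# The game `v` is any callable from a frozenset of player indices to a real payoff.
from itertools import combinations
from math import comb


def shapley_values(v, n):
    """Return [psi_1, ..., psi_n] for an n-player real-valued game v."""
    psi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for size in range(n):
            weight = 1.0 / (n * comb(n - 1, size))  # p(S) in Eq. (1)
            for S in combinations(others, size):
                S = frozenset(S)
                total += weight * (v(S | {i}) - v(S))
        psi.append(total)
    return psi


# Toy additive game: by linearity and efficiency, the SV recovers the weights.
weights = [0.5, 0.3, 0.2]
print(shapley_values(lambda S: sum(weights[j] for j in S), 3))  # ~[0.5, 0.3, 0.2]
```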
|
| 32 |
+
|
| 33 |
+
Historically, the SV has been developed as an answer to the question: how can we assign a worth (or value) to each player $i$ ? The SV does so by distributing "fairly" the grand payoff $v\left( \left\lbrack n\right\rbrack \right)$ among players, so that (i) if a player never contributes to the payoff, their worth is null, (ii) if any two players have indistinguishable marginal contributions, they have the same worth, and (iii) if $v$ is a linear combination of two games, say $u$ and $w$ , then the worth of $i$ for $v$ is the corresponding linear combination of their worth for $u$ and $w$ . The game $v$ could typically represent an economic or political process (e.g. a vote) and, critically, would be modelled as a real-valued set function; i.e. $v : {2}^{\left\lbrack n\right\rbrack } \mapsto \mathbb{R}$ , so that ${\psi }_{i}\left( v\right) \in \mathbb{R}$ .
|
| 34 |
+
|
| 35 |
+
## 2 CATEGORICAL GAMES AND VALUES
|
| 36 |
+
|
| 37 |
+
In our case, the grand payoff is the output $f\left( x\right)$ that determines the class the model assigns to $x$ . Whilst in practice $f$ could be implemented in various ways, several modern ML models (e.g. neural nets) output distributions over the classes - e.g. through a softmax layer. Equivalently, one may think of $f\left( x\right)$ as an $E$ -valued (categorical) random variable. Using the one-hot-encoding convention, we identify $E = {\left\{ {e}_{s}\right\} }_{s = 1}^{d}$ as the one-hot vectors of the canonical base of ${\mathbb{R}}^{d}$ . Now, however, it becomes unclear which real number should be assigned to a difference of random variables. Moreover, averaging over coalitions $S$ , as done in Eq. (1), may also induce a semantic gap in this context. To recover the standard pipeline to compute the SV, one may settle for explaining the logits or the class probabilities as if they were independent scalars. However, this may lead to paradoxical explanations that attribute high importance to a certain feature (say ${x}_{1}$ ) for all classes, failing to capture the fact that an increase in the likelihood of a given class must necessarily result in an aggregated decrease of the likelihood of the others. Here we show how to avoid the step which causes this loss of structure and instead explain $f\left( x\right)$ directly.
|
| 38 |
+
|
| 39 |
+
For a player $i$ and a coalition $S$ not containing $i$ , we need to relate $v\left( S\right)$ with $v\left( {S \cup i}\right)$ in order to quantify the marginal contribution of $i$ to $S$ . This relationship is not just in terms of the marginal distributions of these two variables, but also of their dependence. In this paper, we assume a simple dependency structure between all variables $v\left( S\right)$ , in that $v\left( S\right) = \widetilde{v}\left( {S,\varepsilon }\right)$ for $\varepsilon \sim p\left( \varepsilon \right)$ where $\widetilde{v}$ is a deterministic mapping to $E$ , and $\varepsilon$ is a random variable distributed according to some $p\left( \varepsilon \right)$ . Let $v\left( S\right)$ be a $d$ -way categorical distribution with natural parameters ${\theta }_{S, j}$ , in that
|
| 40 |
+
|
| 41 |
+
$$
|
| 42 |
+
\mathbb{P}\left( {v\left( S\right) = j}\right) = \frac{{e}^{{\theta }_{S, j}}}{\mathop{\sum }\limits_{k}{e}^{{\theta }_{S, k}}} = \operatorname{Softmax}\left( {\theta }_{S}\right) .
|
| 43 |
+
$$
|
| 44 |
+
|
| 45 |
+
We call such $v$ a Categorical game. We can implement the aforementioned dependency assumption by the Gumbel-argmax reparameterization (Papandreou &Yuille,2011): $\widetilde{v}\left( {S,\varepsilon }\right) =$ $\arg \mathop{\max }\limits_{k}\left\{ {{\theta }_{S, k} + {\varepsilon }_{k}}\right\}$ , where ${\varepsilon }_{1},\ldots ,{\varepsilon }_{d}$ are independent standard Gumbel variables.
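The following sketch illustrates this coupling; it is not the authors' code, and the logits `theta_S`, `theta_Si` are hypothetical placeholders for a classifier evaluated on $x$ restricted to $S$ and $S \cup i$ .

```python
# Sketch of the Gumbel-argmax reparameterization: one shared noise vector eps
# couples the payoffs of all coalitions, v~(S, eps) = argmax_k { theta_{S,k} + eps_k }.
import numpy as np

rng = np.random.default_rng(0)
d = 3  # number of classes


def v_tilde(theta_S, eps):
    """Deterministic map to a class index, given logits theta_S and shared noise eps."""
    return int(np.argmax(theta_S + eps))


# Hypothetical logits for a coalition S and for S ∪ {i}; reusing the same eps for
# both is exactly the dependency assumption made in the text.
theta_S = np.array([0.2, 0.1, -0.3])
theta_Si = np.array([1.0, 0.1, -0.3])
eps = rng.gumbel(size=d)  # standard Gumbel noise
print(v_tilde(theta_S, eps), v_tilde(theta_Si, eps))
```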
|
| 46 |
+
|
| 47 |
+
Given this construction, we redefine the marginal contribution of $i$ to $S$ as the random variable $\widetilde{v}\left( {S \cup i,\varepsilon }\right) - \widetilde{v}\left( {S,\varepsilon }\right)$ for $\varepsilon \sim p\left( \varepsilon \right)$ . This RV takes values in the set $E - E = \left\{ {e - {e}^{\prime } \mid e,{e}^{\prime } \in E}\right\}$ ; we shall call its distribution
|
| 48 |
+
|
| 49 |
+
$$
|
| 50 |
+
{q}_{i, S}\left( z\right) = \mathbb{P}\left( {v\left( {S \cup i}\right) - v\left( S\right) = z \mid S}\right) ,\;z \in E - E.
|
| 51 |
+
$$
|
| 52 |
+
|
| 53 |
+
---
|
| 54 |
+
|
| 55 |
+
${}^{2}$ In practice, out-of-coalition features must often be given a value; this could be an arbitrary baseline, a global or a conditional average (Sundararajan & Najmi, 2020; Aas et al., 2021).
|
| 56 |
+
|
| 57 |
+
---
|
| 58 |
+
|
| 59 |
+
Note that ${q}_{i, S}\left( z\right)$ is a conditional distribution, given $S \in {2}^{\left\lbrack n\right\rbrack \smallsetminus i}$ , and that $E - E$ is the set containing $0 \in {\mathbb{R}}^{d}$ and all vectors that have exactly two non-zero entries, one with value +1 and the other -1 .
|
| 60 |
+
|
| 61 |
+
We can view this as a generalized difference operation $v\left( {S \cup i}\right) \ominus v\left( S\right)$ between random variables rather than deterministic values, where $\ominus$ incorporates the above dependency assumption. We define our Categorical Shapley value as the random variable $\xi \left( v\right) = {\left\{ {\xi }_{i}\right\} }_{i \in \left\lbrack n\right\rbrack }$ , where
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
{\xi }_{i}\left( v\right) = v\left( {S \cup i}\right) \ominus v\left( S\right) = \widetilde{v}\left( {S \cup i,\varepsilon }\right) - \widetilde{v}\left( {S,\varepsilon }\right) \;\text{ for }\varepsilon \sim p\left( \varepsilon \right) \text{ and }S \sim p\left( S\right) . \tag{2}
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
Note these RVs have multiple sources of randomness, which are independent from each other. We can marginalise over $p\left( S\right)$ to obtain the distribution ${q}_{i}\left( z\right)$ of ${\xi }_{i}\left( v\right)$ : for every $z \in E - E$ :
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
{q}_{i}\left( z\right) = \mathbb{P}\left( {{\xi }_{i}\left( v\right) = z}\right) = {\mathbb{E}}_{S \sim p\left( S\right) }\left\lbrack {{q}_{i, S}\left( z\right) }\right\rbrack = \mathop{\sum }\limits_{{S \in {2}^{\left\lbrack n\right\rbrack \smallsetminus i}}}p\left( S\right) {q}_{i, S}\left( z\right) . \tag{3}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
One major advantage of this novel construction is that the distribution of the Categorical SV is now straightforward to interpret. Indeed, the probability masses at each point $z = {e}_{r} - {e}_{s} \in E - E$ are interpretable as the probability (averaged over coalitions) that player $i$ causes the payoff of $v$ (and hence the prediction of $f$ ) to flip from class $s$ to class $r$ . We refer to ${q}_{i}\left( {{e}_{r} - {e}_{s}}\right)$ as the transition probability induced by feature $i$ .
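In practice, these transition probabilities can be estimated by jointly sampling coalitions $S \sim p(S)$ and shared Gumbel noise. The sketch below is illustrative and assumes a hypothetical `logits(S)` function standing in for ${\theta}_{S}$ , i.e. the classifier evaluated with only the features in $S$ switched on.

```python
# Sketch: Monte Carlo estimate of the transition probabilities q_i(e_r - e_s)
# of Eq. (3). `logits(S)` is a placeholder returning theta_S for a coalition S.
import numpy as np

rng = np.random.default_rng(0)


def sample_coalition(n, i):
    """Draw S ~ p(S) over subsets of [n] \\ {i}: uniform size, then a uniform set."""
    others = [j for j in range(n) if j != i]
    size = int(rng.integers(0, n))  # |S| uniform in {0, ..., n-1}
    return frozenset(rng.choice(others, size=size, replace=False).tolist())


def transition_probs(logits, n, i, d, num_samples=20_000):
    """Return a d x d matrix Q with Q[r, s] ~ P(feature i flips class s to class r)."""
    Q = np.zeros((d, d))
    for _ in range(num_samples):
        S = sample_coalition(n, i)
        eps = rng.gumbel(size=d)                    # shared noise couples both payoffs
        s = int(np.argmax(logits(S) + eps))         # class under v(S)
        r = int(np.argmax(logits(S | {i}) + eps))   # class under v(S ∪ {i})
        Q[r, s] += 1.0
    return Q / num_samples
```

The off-diagonal entries of the returned matrix are the flipping probabilities quoted in Section 3, while the diagonal collects the mass of the "no change" event $z = 0$ .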
|
| 74 |
+
|
| 75 |
+
Interestingly we can derive a closed form analytical expression for the ${q}_{i, S}$ ’s and, hence, for the ${q}_{i}$ ’s. We do this in Section A. The following proposition relates the categorical Shapley value with the standard SV and gives a number of properties that can be derived for the categorical SV.
|
| 76 |
+
|
| 77 |
+
Proposition 2.1. Let $\xi$ be the Categorical Shapley value defined in Equation 2. Then:
|
| 78 |
+
|
| 79 |
+
1. $\mathbb{E}\left\lbrack {{\xi }_{i}\left( v\right) }\right\rbrack = {\psi }_{i}\left( {\mathbb{E}\left\lbrack v\right\rbrack }\right) \in {\left\lbrack -1,1\right\rbrack }^{d}$ , where $\mathbb{E}\left\lbrack v\right\rbrack$ is the $n$ -players game defined as $\mathbb{E}\left\lbrack v\right\rbrack \left( S\right) =$ $\mathbb{E}\left\lbrack {v\left( S\right) }\right\rbrack = \operatorname{Softmax}\left( {\theta }_{S}\right)$ ;
|
| 80 |
+
|
| 81 |
+
2. If $i$ is a null player, i.e. $v\left( {S \cup i}\right) = v\left( S\right)$ for all $S \neq \varnothing$ , then ${\xi }_{i}\left( v\right) = {\delta }_{0}$ , where ${\delta }_{0}$ is the Dirac delta centered in $0 \in {\mathbb{R}}^{d}$ ;
|
| 82 |
+
|
| 83 |
+
3. If $v = {v}^{\prime }$ with probability $\pi \in \left\lbrack {0,1}\right\rbrack$ and $v = {v}^{\prime \prime }$ with probability $1 - \pi$ (independent from $S$ ), then ${q}_{i}\left( z\right) = \mathbb{P}\left( {{\xi }_{i}\left( v\right) = z}\right) = \pi \mathbb{P}\left( {{\xi }_{i}\left( {v}^{\prime }\right) = z}\right) + \left( {1 - \pi }\right) \mathbb{P}\left( {{\xi }_{i}\left( {v}^{\prime \prime }\right) = z}\right) =$ $\pi {q}_{i}^{\prime }\left( z\right) + \left( {1 - \pi }\right) {q}_{i}^{\prime \prime }\left( z\right)$ .
|
| 84 |
+
|
| 85 |
+
4. $v\left( \left\lbrack n\right\rbrack \right) \ominus v\left( \varnothing \right) = \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\mathbb{E}}_{S \sim p\left( S\right) }\left\lbrack {{\xi }_{i}\left( v\right) }\right\rbrack$ , where the sum on the right hand side is the sum of (dependent) $E - E$ -valued random variables.
|
| 86 |
+
|
| 87 |
+
Property 1 essentially shows that the Categorical SV is strictly more expressive than the traditional Shapley values, whilst putting (standard) SVs for multi-class classifiers under a new light. Properties 2, 3 and 4 may be seen as the "adaptations" to the Categorical SV of the null player, linearity and efficiency axioms, respectively. In particular, note that the standard linearity axiom would be of little consequence in this context as taking a linear combination of categorical RVs does not lead to another categorical RV. Instead, Property 3 addresses the common situation where the classifier one wishes to explain is a (probabilistic) ensemble, relating the distributions of the respective Categorical SVs. See Section C for a brief discussion of related work in the cooperative game theory literature.
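Property 1 can be checked numerically on a toy game: the mean of the Categorical SV (a vector in $[-1,1]^d$ ) should coincide with the standard SV of the game $\mathbb{E}[v](S) = \operatorname{Softmax}(\theta_S)$ computed class by class. The sketch below is illustrative; the logit map `theta` is an arbitrary toy choice, not the paper's model.

```python
# Sketch: Monte Carlo check of Property 1, E[xi_i(v)] = psi_i(E[v]), for a toy
# game with n = 3 players and d = 3 classes.
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(0)
n, d, i = 3, 3, 0
theta = lambda S: np.array([float(len(S)), 1.0 if 1 in S else 0.0, 0.3])  # toy logits
softmax = lambda z: np.exp(z - z.max()) / np.exp(z - z.max()).sum()
others = [j for j in range(n) if j != i]

# Right-hand side: standard SV of the vector-valued game E[v](S) = Softmax(theta_S).
rhs = np.zeros(d)
for size in range(n):
    w = 1.0 / (n * comb(n - 1, size))
    for S in combinations(others, size):
        S = frozenset(S)
        rhs += w * (softmax(theta(S | {i})) - softmax(theta(S)))

# Left-hand side: Monte Carlo mean of xi_i(v) using shared Gumbel noise.
lhs, m = np.zeros(d), 50_000
for _ in range(m):
    size = int(rng.integers(0, n))
    S = frozenset(rng.choice(others, size=size, replace=False).tolist())
    eps = rng.gumbel(size=d)
    lhs[np.argmax(theta(S | {i}) + eps)] += 1.0 / m
    lhs[np.argmax(theta(S) + eps)] -= 1.0 / m

print(np.round(rhs, 3), np.round(lhs, 3))  # should agree up to Monte Carlo error
```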
|
| 88 |
+
|
| 89 |
+
## 3 DETECTING PNEUMONIA IN CHEST X-RAYS: A CASE STUDY
|
| 90 |
+
|
| 91 |
+
This section employs the Categorical SV (CSV) to analyse a commonly used deep learning architecture, ResNet-18 (He et al., 2015), for pneumonia detection and subtyping using radiography images, which is cast as a multiclass classification problem based on categorising subjects into three classes: healthy controls (HC, class 0), bacterial pneumonia cases (BP, class 1) and viral pneumonia cases (VP, class 2). The model has been trained on chest X-ray images collected from pediatric patients, aged one to five, as part of their routine clinical care in Guangzhou Women and Children's Medical Center (Kermany et al., 2018). The aim is to show the importance of using structured explanations even when the model is fine-tuned to the problem of interest, in this case with a mean balanced
|
| 92 |
+
|
| 93 |
+

|
| 94 |
+
|
| 95 |
+
Figure 1: Three example subject radiography images and Categorical Shapley values relative to the depicted patches, plotted as matrices. (Left) Ground-truth: VP. Prediction: BP. The patch represents two artifacts which should not impact the model decision. (Center) Ground-truth: VP. Prediction: VP. Two patches, the red one on the left highlighting a section where pneumonia is visible and the blue one selecting a patch over the middle mediastinum. (Right) Ground-truth: VP. Prediction: BP. The red patch is relative to a pneumonia area, the yellow one highlights the heart of the patient.
|
| 96 |
+
|
| 97 |
+
accuracy score of 84.7%. We select three example scenarios (as depicted in Figure 1) to analyse different use-cases where CSV empowers the decision process.
|
| 98 |
+
|
| 99 |
+
Case One: Artifacts Figure 1 (Left) shows an example scenario of an image with artifacts, which have been identified in red. The model's output probabilities for the ground-truth class BP and the predicted class VP are given as 0.4789 and 0.4808, respectively. Using the Categorical SV, the contribution of the artifacts to the transition of the prediction towards the correct class is identified as 12.7%, which points to these artifacts as a root cause of the confusion between BP and VP; this might be mitigated by further pre-processing or ensemble classification designs.
|
| 100 |
+
|
| 101 |
+
Case Two: Correct Classification Figure 1 (Center) shows a correctly classified VP. However, even though the main affected area in this patient is depicted in red by independent experts, the contribution of this area to the decision is found to be negligible (around 1%, see the left matrix under the Center image), making the model's recommendation untrustworthy. Furthermore, the transition probability calculated for the middle mediastinum region (depicted in blue), which is not expected to be a region of interest for pneumonia, is found to be as high as 13.3% from VP to HC, flagging this region as incorrectly important for the decision process of the model.
|
| 102 |
+
|
| 103 |
+
Case Three: Incorrect Classification When the incorrectly classified case pictured in Figure 1 (Right) is analysed, the transition probability from the predicted class BP to the ground-truth class VP for the area in red, which is labelled as a main affected area of VP by independent experts, is calculated as zero. The heart region identified in yellow, on the other hand, is shown to exhibit over $5\%$ transition probability to the VP and BP classes, although this value would be expected to be close to zero. Both of these findings help highlight inconsistencies in the behaviour of the model.
|
| 104 |
+
|
| 105 |
+
## 4 DISCUSSION AND CONCLUSION
|
| 106 |
+
|
| 107 |
+
By analysing three example scenarios in Section 3, we have underlined the importance of using model explainability even for fine-tuned, seemingly highly performing models, especially in critically important application areas such as healthcare. Employing categorical games and values enables a structural understanding of the multiclass classification problem by providing transition probabilities across classes, which inform about flipping decisions, in addition to the feature contribution information obtained from classical methods. While we implement a case study on the classification of pneumonia using radiography images as a proof-of-concept, the proposed method is extendable to all modalities including genomics, free-text or tabular data. For out-of-coalition portions of the image, we employed a simple constant background value. We plan to consider more sophisticated formulations in the future. Another invaluable path for future work is to develop better visualization and summarization methods and interactive interfaces alongside clinicians and other end-users.
|
| 108 |
+
|
| 109 |
+
REFERENCES
|
| 110 |
+
|
| 111 |
+
Kjersti Aas, Martin Jullum, and Anders Løland. Explaining individual predictions when features are dependent: More accurate approximations to shapley values. Artificial Intelligence, 298:103502, 2021.
|
| 112 |
+
|
| 113 |
+
Umang Bhatt, Alice Xiang, Shubham Sharma, Adrian Weller, Ankur Taly, Yunhan Jia, Joydeep Ghosh, Ruchir Puri, José MF Moura, and Peter Eckersley. Explainable machine learning in deployment. In Proceedings of the 2020 conference on fairness, accountability, and transparency, pp. 648-657, 2020.
|
| 114 |
+
|
| 115 |
+
Jianbo Chen, Le Song, Martin J Wainwright, and Michael I Jordan. L-shapley and c-shapley: Efficient model interpretation for structured data. arXiv preprint arXiv:1808.02610, 2018.
|
| 116 |
+
|
| 117 |
+
Christopher Frye, Colin Rowat, and Ilya Feige. Asymmetric shapley values: incorporating causal knowledge into model-agnostic explainability. Advances in Neural Information Processing Systems, 33:1229-1239, 2020.
|
| 118 |
+
|
| 119 |
+
Daniel Granot. Cooperative games in stochastic characteristic function form. Management Science, 23(6):621-630, 1977.
|
| 120 |
+
|
| 121 |
+
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition, 2015.
|
| 122 |
+
|
| 123 |
+
Tom Heskes, Evi Sijben, Ioan Gabriel Bucur, and Tom Claassen. Causal shapley values: Exploiting causal knowledge to explain individual predictions of complex models. Advances in neural information processing systems, 33:4778-4789, 2020.
|
| 124 |
+
|
| 125 |
+
Daniel S. Kermany, Michael Goldbaum, Wenjia Cai, Carolina C.S. Valentim, Huiying Liang, Sally L. Baxter, Alex McKeown, Ge Yang, Xiaokang Wu, Fangbing Yan, and et al. Identifying medical diagnoses and treatable diseases by image-based deep learning. Cell, 172(5), 2018. doi: 10.1016/j.cell.2018.02.010.
|
| 126 |
+
|
| 127 |
+
Yoon Kim, Carl Denton, Luong Hoang, and Alexander M Rush. Structured attention networks. arXiv preprint arXiv:1702.00887, 2017.
|
| 128 |
+
|
| 129 |
+
I Elizabeth Kumar, Suresh Venkatasubramanian, Carlos Scheidegger, and Sorelle Friedler. Problems with shapley-value-based explanations as feature importance measures. In International Conference on Machine Learning, pp. 5491-5500. PMLR, 2020.
|
| 130 |
+
|
| 131 |
+
Scott M Lundberg and Su-In Lee. A unified approach to interpreting model predictions. Advances in neural information processing systems, 30, 2017.
|
| 132 |
+
|
| 133 |
+
Tim Miller. Explanation in artificial intelligence: Insights from the social sciences. Artificial intelligence, 267:1-38, 2019.
|
| 134 |
+
|
| 135 |
+
Brent Mittelstadt, Chris Russell, and Sandra Wachter. Explaining explanations in ai. In Proceedings of the conference on fairness, accountability, and transparency, pp. 279-288, 2019.
|
| 136 |
+
|
| 137 |
+
George Papandreou and Alan L Yuille. Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. In 2011 International Conference on Computer Vision, pp. 193-200. IEEE, 2011.
|
| 138 |
+
|
| 139 |
+
Bezalel Peleg and Peter Sudhölter. Introduction to the theory of cooperative games, volume 34. Springer Science & Business Media, 2007.
|
| 140 |
+
|
| 141 |
+
Leon A Petrosjan. Cooperative stochastic games. In Advances in dynamic games, pp. 139-145. Springer, 2006.
|
| 142 |
+
|
| 143 |
+
Alvin E Roth. The Shapley value: essays in honor of Lloyd S. Shapley. Cambridge University Press, 1988.
|
| 144 |
+
|
| 145 |
+
L Shapley. A value for n-person games. Edited by Emil Artin and Marston Morse, pp. 343, 1953a.
|
| 146 |
+
|
| 147 |
+
Lloyd S Shapley. Stochastic games. Proceedings of the national academy of sciences, 39(10): 1095-1100, 1953b.
|
| 148 |
+
|
| 149 |
+
Jeroen Suijs, Peter Borm, Anja De Waegenaere, and Stef Tijs. Cooperative games with stochastic payoffs. European Journal of Operational Research, 113(1):193-205, 1999.
|
| 150 |
+
|
| 151 |
+
Panfei Sun, Dongshuang Hou, and Hao Sun. Optimization implementation of solution concepts for cooperative games with stochastic payoffs. Theory and Decision, 93(4):691-724, 2022.
|
| 152 |
+
|
| 153 |
+
Mukund Sundararajan and Amir Najmi. The many shapley values for model explanation. In International conference on machine learning, pp. 9269-9278. PMLR, 2020.
|
| 154 |
+
|
| 155 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
|
| 156 |
+
|
| 157 |
+
Robert J Weber. Probabilistic values for games. The Shapley Value. Essays in Honor of Lloyd S. Shapley, pp. 101-119, 1988.
|
| 158 |
+
|
| 159 |
+
## A ANALYTIC EXPRESSION OF THE PDF OF CATEGORICAL DIFFERENCES
|
| 160 |
+
|
| 161 |
+
Consider $E = \left\{ {{e}_{1},\ldots ,{e}_{d}}\right\}$ with $d \geq 3$ . Suppose that $v\left( S\right)$ has a $d$ -way categorical distribution with natural parameters ${\theta }_{S, j}$ , in that
|
| 162 |
+
|
| 163 |
+
$$
|
| 164 |
+
\mathbb{P}\left( {v\left( S\right) = j}\right) = \frac{{e}^{{\theta }_{S, j}}}{\mathop{\sum }\limits_{k}{e}^{{\theta }_{S, k}}}.
|
| 165 |
+
$$
|
| 166 |
+
|
| 167 |
+
Categorical games emerge, e.g., when explaining the output of multi-class classifiers or attention masks of transformer models (Kim et al., 2017; Vaswani et al., 2017).
|
| 168 |
+
|
| 169 |
+
A latent variable representation is given by the Gumbel-argmax reparameterization (Papandreou & Yuille, 2011):
|
| 170 |
+
|
| 171 |
+
$$
|
| 172 |
+
\widetilde{v}\left( {S,\varepsilon }\right) = \arg \mathop{\max }\limits_{k}\left\{ {{\theta }_{S, k} + {\varepsilon }_{k}}\right\} ,
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+
where ${\varepsilon }_{1},\ldots ,{\varepsilon }_{d}$ are independent standard Gumbel variables with probability distribution function $p\left( {\varepsilon }_{j}\right)$ and cumulative distribution function $F\left( {\varepsilon }_{j}\right)$ given by
|
| 176 |
+
|
| 177 |
+
$$
|
| 178 |
+
F\left( {\varepsilon }_{j}\right) = \exp \left( {-{e}^{-{\varepsilon }_{j}}}\right) ,\;p\left( {\varepsilon }_{j}\right) = \exp \left( {-{\varepsilon }_{j} - {e}^{-{\varepsilon }_{j}}}\right) .
|
| 179 |
+
$$
|
| 180 |
+
|
| 181 |
+
At this point, assume that ${e}_{j} = {\left\lbrack {\mathbf{1}}_{k = j}\right\rbrack }_{k} \in \{ 0,1{\} }^{d}$ are the standard basis vectors of ${\mathbb{R}}^{d}$ . Then, $E - E = \left\{ {{e}_{r} - {e}_{s} \mid 1 \leq r, s \leq d}\right\}$ has size ${d}^{2} - d + 1$ , and the distribution of $v\left( {S \cup i}\right) \ominus v\left( S\right)$ is given by the off-diagonal entries of the joint distribution ${Q}_{i, S}\left( {r, s}\right) = \mathbb{P}\left( {v\left( {S \cup i}\right) = r, v\left( S\right) = s}\right)$ .
|
| 182 |
+
|
| 183 |
+
We can work out ${Q}_{i, S}\left( {r, s}\right)$ explicitly. Denote
|
| 184 |
+
|
| 185 |
+
$$
|
| 186 |
+
{\alpha }_{j} = {\theta }_{S \cup i, j},\;{\beta }_{j} = {\theta }_{S, j},\;{\rho }_{j} = {\alpha }_{j} - {\beta }_{j}.
|
| 187 |
+
$$
|
| 188 |
+
|
| 189 |
+
Without loss of generality, we assume the categories to be ordered so that ${\rho }_{1} \geq {\rho }_{2} \geq \cdots \geq {\rho }_{d}$ . Then:
|
| 190 |
+
|
| 191 |
+
$$
|
| 192 |
+
{\widetilde{Q}}_{i, S}\left( {r, s}\right) = {e}^{{\alpha }_{r} + {\beta }_{s}}\left( {{C}_{s} - {C}_{r}}\right) {\mathbf{1}}_{r < s}\;\left( {r \neq s}\right) ,
|
| 193 |
+
$$
|
| 194 |
+
|
| 195 |
+
$$
|
| 196 |
+
{\widetilde{Q}}_{i, S}\left( {r, r}\right) = {e}^{{\beta }_{r} - {\bar{\beta }}_{r}}\sigma \left( {{\bar{\beta }}_{r} - {\bar{\alpha }}_{r} + {\rho }_{r}}\right) {\mathbf{1}}_{r < d} + {e}^{{\alpha }_{d} - {\bar{\alpha }}_{d}}{\mathbf{1}}_{r = d},
|
| 197 |
+
$$
|
| 198 |
+
|
| 199 |
+
where
|
| 200 |
+
|
| 201 |
+
$$
|
| 202 |
+
{\bar{\alpha }}_{k} = \log \mathop{\sum }\limits_{{j = 1}}^{k}{e}^{{\alpha }_{j}},\;{\bar{\beta }}_{k} = \log \mathop{\sum }\limits_{{j = k + 1}}^{d}{e}^{{\beta }_{j}},
|
| 203 |
+
$$
|
| 204 |
+
|
| 205 |
+
$$
|
| 206 |
+
{c}_{k} = {e}^{-{\bar{\beta }}_{k} - {\bar{\alpha }}_{k}}\left( {\sigma \left( {{\bar{\beta }}_{k} - {\bar{\alpha }}_{k} + {\rho }_{k}}\right) - \sigma \left( {{\bar{\beta }}_{k} - {\bar{\alpha }}_{k} + {\rho }_{k + 1}}\right) }\right) ,
|
| 207 |
+
$$
|
| 208 |
+
|
| 209 |
+
$$
|
| 210 |
+
{C}_{t} = \mathop{\sum }\limits_{{k = 1}}^{{t - 1}}{c}_{k},\;\sigma \left( x\right) = \frac{1}{1 + {e}^{-x}}.
|
| 211 |
+
$$
|
| 212 |
+
|
| 213 |
+
The derivation is provided in Appendix B. We write ${\widetilde{Q}}_{i, S}$ instead of ${Q}_{i, S}$ due to the specific ordering of categories. The induced distribution of $v\left( {S \cup i}\right) \ominus v\left( S\right)$ is
|
| 214 |
+
|
| 215 |
+
$$
|
| 216 |
+
\mathop{\sum }\limits_{{r < s}}{\widetilde{Q}}_{i, S}\left( {r, s}\right) {\delta }_{{e}_{r} - {e}_{s}} + \left( {\mathop{\sum }\limits_{r}{\widetilde{Q}}_{i, S}\left( {r, r}\right) }\right) {\delta }_{\mathbf{0}},
|
| 217 |
+
$$
|
| 218 |
+
|
| 219 |
+
from which the off-diagonal entries of ${\widetilde{Q}}_{i, S}\left( {r, s}\right)$ can be reconstructed.
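As an illustration (not the authors' code), the closed form above can be implemented directly; the snippet below assumes the categories have already been reordered so that ${\rho}_{1} \geq \cdots \geq {\rho}_{d}$ , and uses 0-based indices in place of the 1-based indices of the text.

```python
# Sketch: closed-form joint distribution Q~_{i,S}(r, s) from the expressions above.
# alpha = theta_{S ∪ i}, beta = theta_S (NumPy arrays of length d, sorted by rho desc).
import numpy as np
from scipy.special import logsumexp, expit as sigmoid


def categorical_joint(alpha, beta):
    d = len(alpha)
    rho = alpha - beta
    assert np.all(np.diff(rho) <= 0), "reorder categories so that rho is nonincreasing"
    abar = np.array([logsumexp(alpha[: k + 1]) for k in range(d)])       # alpha_bar_k
    bbar = np.array([logsumexp(beta[k + 1:]) for k in range(d - 1)])     # beta_bar_k
    k = np.arange(d - 1)
    c = np.exp(-bbar - abar[:-1]) * (
        sigmoid(bbar - abar[:-1] + rho[k]) - sigmoid(bbar - abar[:-1] + rho[k + 1])
    )
    C = np.concatenate(([0.0], np.cumsum(c)))        # C_t = sum_{k < t} c_k
    Q = np.zeros((d, d))
    for r in range(d):
        for s in range(r + 1, d):                    # off-diagonal mass, r < s only
            Q[r, s] = np.exp(alpha[r] + beta[s]) * (C[s] - C[r])
    for r in range(d - 1):                           # diagonal, r < d
        Q[r, r] = np.exp(beta[r] - bbar[r]) * sigmoid(bbar[r] - abar[r] + rho[r])
    Q[d - 1, d - 1] = np.exp(alpha[d - 1] - abar[d - 1])
    return Q  # rows index the class under S ∪ {i}, columns the class under S
```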
|
| 220 |
+
|
| 221 |
+
Assume that ${Q}_{i, S}\left( {r, s}\right)$ are given for all $S$ in a common ordering of the categories, in that ${Q}_{i, S}\left( {r, s}\right) = {\widetilde{Q}}_{i, S}\left( {{\pi }_{S}\left( r\right) ,{\pi }_{S}\left( s\right) }\right)$ , where ${\pi }_{S}$ is a permutation of $\{ 1,\ldots , d\}$ fulfilling the ordering condition used above. If
|
| 222 |
+
|
| 223 |
+
$$
|
| 224 |
+
{Q}_{i}\left( {r, s}\right) = {\mathbb{E}}_{S \sim {p}^{i}}\left\lbrack {{Q}_{i, S}\left( {r, s}\right) }\right\rbrack ,
|
| 225 |
+
$$
|
| 226 |
+
|
| 227 |
+
the distributions of Categorical values are given by
|
| 228 |
+
|
| 229 |
+
$$
|
| 230 |
+
{q}_{i} = \mathop{\sum }\limits_{{r, s}}{Q}_{i}\left( {r, s}\right) {\delta }_{{e}_{r} - {e}_{s}}.
|
| 231 |
+
$$
|
| 232 |
+
|
| 233 |
+
The probability masses at each point ${e}_{r} - {e}_{s} \in E - E$ are interpretable as the probability (averaged over coalitions) that player $i$ causes the payoff of $v$ to flip from class $s$ to class $r$ .
|
| 234 |
+
|
| 235 |
+
We may define the following query functional on top of this distribution:
|
| 236 |
+
|
| 237 |
+
$$
|
| 238 |
+
{\ell }_{\mathrm{{mc}}} = \mathop{\max }\limits_{s}\mathop{\sum }\limits_{{r \neq s}}{Q}_{i}\left( {r, s}\right) ,
|
| 239 |
+
$$
|
| 240 |
+
|
| 241 |
+
which quantifies the largest probability of any change in the output led by player $i$ . It can be computed more efficiently as $\mathop{\max }\limits_{s}\left( {{Q}_{i}\left( s\right) - {Q}_{i}\left( {s, s}\right) }\right)$ , where the marginal distribution ${Q}_{S, i}\left( s\right)$ is given by
|
| 242 |
+
|
| 243 |
+
$$
|
| 244 |
+
{Q}_{S, i}\left( s\right) = \mathbb{P}\left( {v\left( S\right) = s}\right) = {e}^{{\beta }_{s} - {\bar{\beta }}_{0}}.
|
| 245 |
+
$$
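A minimal sketch of this query on top of a joint matrix `Q` (for instance the one returned by the Monte Carlo estimator in Section 2, or the closed form of this appendix); the column sums give the marginal $\mathbb{P}\left( v\left( S\right) = s\right)$ :

```python
# Sketch: l_mc = max_s sum_{r != s} Q[r, s], computed as max_s (column sum - diagonal).
import numpy as np


def l_mc(Q):
    marginal_s = Q.sum(axis=0)        # P(v(S) = s): sum over the "new class" index r
    return float(np.max(marginal_s - np.diag(Q)))
```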
|
| 246 |
+
|
| 247 |
+
## B EXTENDED DERIVATION FOR CATEGORICAL GAMES
|
| 248 |
+
|
| 249 |
+
We provide a derivation of the expressions ${\widetilde{Q}}_{i, S}\left( {r, s}\right)$ . In this derivation, $i$ and $S$ are fixed, and we write ${\mathcal{P}}_{rs}$ for ${\widetilde{Q}}_{i, S}\left( {r, s}\right)$ . Let $d \geq 3$ be an integer, $\left\lbrack {\alpha }_{j}\right\rbrack$ and $\left\lbrack {\beta }_{j}\right\rbrack$ be sets of $d$ real numbers. Above, ${\alpha }_{j} = {\theta }_{S \cup i, j}$ and ${\beta }_{j} = {\theta }_{S, j}$ , but the derivation below does not make use of this. Also, let ${\varepsilon }_{j}$ be $d$ independent standard Gumbel variables, each of which has distribution function and density
|
| 250 |
+
|
| 251 |
+
$$
|
| 252 |
+
F\left( \varepsilon \right) = \exp \left( {-{e}^{-\varepsilon }}\right) ,\;p\left( \varepsilon \right) = {F}^{\prime }\left( \varepsilon \right) = \exp \left( {-\varepsilon - {e}^{-\varepsilon }}\right) = {e}^{-\varepsilon }F\left( \varepsilon \right) .
|
| 253 |
+
$$
|
| 254 |
+
|
| 255 |
+
Fix $r, s \in \{ 1,\ldots , d\} , r \neq s$ . We would like to obtain an expression for the probability ${\mathcal{P}}_{rs}$ of
|
| 256 |
+
|
| 257 |
+
$$
|
| 258 |
+
\underset{j}{\arg \max }\left( {{\alpha }_{j} + {\varepsilon }_{j}}\right) = r\text{ and }\;\underset{j}{\arg \max }\left( {{\beta }_{j} + {\varepsilon }_{j}}\right) = s.
|
| 259 |
+
$$
|
| 260 |
+
|
| 261 |
+
Define
|
| 262 |
+
|
| 263 |
+
$$
|
| 264 |
+
{\alpha }_{jr} \mathrel{\text{:=}} {\alpha }_{j} - {\alpha }_{r},\;{\beta }_{js} \mathrel{\text{:=}} {\beta }_{j} - {\beta }_{s}.
|
| 265 |
+
$$
|
| 266 |
+
|
| 267 |
+
The arg max equalities above can also be written as a set of ${2d}$ inequalities (2 of which are trivial):
|
| 268 |
+
|
| 269 |
+
$$
|
| 270 |
+
{\varepsilon }_{j} \leq {\varepsilon }_{r} - {\alpha }_{jr},\;{\varepsilon }_{j} \leq {\varepsilon }_{s} - {\beta }_{js},\;j = 1,\ldots , d.
|
| 271 |
+
$$
|
| 272 |
+
|
| 273 |
+
Then:
|
| 274 |
+
|
| 275 |
+
$$
|
| 276 |
+
{\mathcal{P}}_{rs} = \mathbb{E}\left\lbrack {\mathop{\prod }\limits_{j}{I}_{j}}\right\rbrack ,\;{I}_{j} \mathrel{\text{:=}} {\mathbf{1}}_{{\varepsilon }_{j} \leq \min \left( {{\varepsilon }_{r} - {\alpha }_{jr},{\varepsilon }_{s} - {\beta }_{js}}\right) }.
|
| 277 |
+
$$
|
| 278 |
+
|
| 279 |
+
Two of them are simple:
|
| 280 |
+
|
| 281 |
+
$$
|
| 282 |
+
{I}_{r} = {\mathbf{1}}_{{\varepsilon }_{r} \leq {\varepsilon }_{s} - {\beta }_{rs}},\;{I}_{s} = {\mathbf{1}}_{{\varepsilon }_{s} \leq {\varepsilon }_{r} - {\alpha }_{sr}},\;{I}_{r}{I}_{s} = {\mathbf{1}}_{{\alpha }_{s} - {\alpha }_{r} \leq {\varepsilon }_{r} - {\varepsilon }_{s} \leq {\beta }_{s} - {\beta }_{r}}.
|
| 283 |
+
$$
|
| 284 |
+
|
| 285 |
+
Denote
|
| 286 |
+
|
| 287 |
+
$$
|
| 288 |
+
{\gamma }_{j} \mathrel{\text{:=}} {\alpha }_{jr} - {\beta }_{js} = {\rho }_{j} - \left( {{\alpha }_{r} - {\beta }_{s}}\right) ,\;{\rho }_{j} \mathrel{\text{:=}} {\alpha }_{j} - {\beta }_{j}.
|
| 289 |
+
$$
|
| 290 |
+
|
| 291 |
+
Note that ${\gamma }_{j}$ depends on $r, s$ , but ${\rho }_{j}$ does not. If $j \neq r, s$ , then
|
| 292 |
+
|
| 293 |
+
$$
|
| 294 |
+
{I}_{j} = {\mathbf{1}}_{{\varepsilon }_{j} \leq {\varepsilon }_{r} - {\alpha }_{jr}}{\mathbf{1}}_{{\varepsilon }_{r} - {\varepsilon }_{s} \leq {\gamma }_{j}} + {\mathbf{1}}_{{\varepsilon }_{j} \leq {\varepsilon }_{s} - {\beta }_{js}}{\mathbf{1}}_{{\varepsilon }_{r} - {\varepsilon }_{s} \geq {\gamma }_{j}}.
|
| 295 |
+
$$
|
| 296 |
+
|
| 297 |
+
If we exchange sum and product, we obtain an expression of ${\mathcal{P}}_{rs}$ as sum of ${2}^{d - 2}$ terms. Each of these terms is an expectation over ${\varepsilon }_{r},{\varepsilon }_{s}$ , with the argument being the product of $d - 2$ terms $F\left( {{\varepsilon }_{r} + {a}_{j}}\right)$ or $F\left( {{\varepsilon }_{s} + {a}_{j}}\right)$ and a box indicator for ${\varepsilon }_{r} - {\varepsilon }_{s}$ . In the sequel, we make this more concrete and show that at most $d - 1$ of these terms are nonzero.
|
| 298 |
+
|
| 299 |
+
With a bit of hindsight, we assume that ${\rho }_{1} \geq {\rho }_{2} \geq \cdots \geq {\rho }_{d}$ , which is obtained by reordering the categories. This implies that $\left\lbrack {\gamma }_{j}\right\rbrack$ is nonincreasing for all $\left( {r, s}\right)$ . Also, define the function $\pi \left( k\right) = k + {\mathbf{1}}_{r \leq k} + {\mathbf{1}}_{s - 1 \leq k}$ from $\{ 1,\ldots , d - 2\}$ to $\{ 1,\ldots , d\} \smallsetminus \{ r, s\}$ . We will argue in terms of a recursive computation over $k = 1,\ldots , d - 2$ . Define
|
| 300 |
+
|
| 301 |
+
$$
|
| 302 |
+
{M}_{k}\left( {{\varepsilon }_{r},{\varepsilon }_{s}}\right) = \mathbb{E}\left\lbrack {{I}_{r}{I}_{s}\mathop{\prod }\limits_{{1 \leq j \leq k}}{I}_{\pi \left( j\right) } \mid {\varepsilon }_{r},{\varepsilon }_{s}}\right\rbrack ,\;k \geq 0,
|
| 303 |
+
$$
|
| 304 |
+
|
| 305 |
+
so that ${\mathcal{P}}_{rs} = \mathbb{E}\left\lbrack {{M}_{d - 2}\left( {{\varepsilon }_{r},{\varepsilon }_{s}}\right) }\right\rbrack$ . Each ${M}_{k}$ can be written as sum of ${2}^{k}$ terms. Imagine a binary tree of depth $d - 1$ , with layers indexed by $k = 0,1,\ldots , d - 2$ . Each node in this tree is annotated by a box indicator for ${\varepsilon }_{r} - {\varepsilon }_{s}$ and some information detailed below. We are interested in the ${2}^{d - 2}$ leaf nodes of this tree.
|
| 306 |
+
|
| 307 |
+
### B.1 Box Indicators. Which Terms are Needed?
|
| 308 |
+
|
| 309 |
+
We begin with a recursive computation of the box indicators, noting that we can eliminate all nodes where the box is empty. Label the root node (at $k = 0$ ) by 1, its children (at $k = 1$ ) by 10 (left) and 11 (right), and so on, and define the box indicators as ${\mathbf{1}}_{{l}_{1} \leq {\varepsilon }_{r} - {\varepsilon }_{s} \leq {u}_{1}}$ , $\left( {{l}_{10},{u}_{10}}\right)$ and $\left( {{l}_{11},{u}_{11}}\right)$ respectively. Then, ${l}_{1} = {\alpha }_{s} - {\alpha }_{r},{u}_{1} = {\beta }_{s} - {\beta }_{r}$ defines the box for the root. Here,
|
| 310 |
+
|
| 311 |
+
$$
|
| 312 |
+
{l}_{1} \geq {u}_{1}\; \Leftrightarrow \;{\rho }_{s} \geq {\rho }_{r}.
|
| 313 |
+
$$
|
| 314 |
+
|
| 315 |
+
Since $\left\lbrack {\rho }_{j}\right\rbrack$ is non-increasing, the root box is empty if $s < r$ , so that ${\mathcal{P}}_{rs} = 0$ in this case. In the sequel, we assume that $r < s$ and ${\rho }_{r} > {\rho }_{s}$ , so that ${l}_{1} < {u}_{1}$ .
|
| 316 |
+
|
| 317 |
+
If $\mathbf{n}$ is the label of a node at level $k - 1$ with box $\left( {{l}_{\mathbf{n}},{u}_{\mathbf{n}}}\right)$ , then
|
| 318 |
+
|
| 319 |
+
$$
|
| 320 |
+
{l}_{\mathbf{n}0} = {l}_{\mathbf{n}},\;{u}_{\mathbf{n}0} = \min \left( {{\gamma }_{\pi \left( k\right) },{u}_{\mathbf{n}}}\right) ,\;{l}_{\mathbf{n}1} = \max \left( {{\gamma }_{\pi \left( k\right) },{l}_{\mathbf{n}}}\right) ,\;{u}_{\mathbf{n}1} = {u}_{\mathbf{n}}.
|
| 321 |
+
$$
|
| 322 |
+
|
| 323 |
+
Consider node 11 (right child of root). There are two cases. (1) ${\gamma }_{\pi \left( 1\right) } < {u}_{1}$ . Then, ${l}_{11} \geq {\gamma }_{\pi \left( 1\right) } \geq$ ${\gamma }_{\pi \left( k\right) }$ for all $k \geq 1$ , so all descendants must have the same $l = {l}_{11}$ . If ever we step to the left from here, $u = \min \left( {{\gamma }_{\pi \left( k\right) },{u}_{1}}\right) \leq {\gamma }_{\pi \left( k\right) } \leq {\gamma }_{\pi \left( 1\right) } \leq {l}_{11}$ , so the node is eliminated. This means from 11, we only step to the right: ${111},{1111},\ldots$ , with $l = \max \left( {{\gamma }_{\pi \left( 1\right) },{l}_{1}}\right) , u = {u}_{1}$ , so there is only one leaf node which is a descendant of 11. (2) ${\gamma }_{\pi \left( 1\right) } \geq {u}_{1}$ . Then, ${l}_{11} \geq {u}_{11}$ , so that 11 and all its descendants are eliminated.
|
| 324 |
+
|
| 325 |
+
At node 10, we have ${l}_{10} = {l}_{1}$ . If ${\gamma }_{\pi \left( 1\right) } \leq {l}_{1}$ , the node is eliminated, so assume ${\gamma }_{\pi \left( 1\right) } > {l}_{1}$ , and ${u}_{10} = \min \left( {{\gamma }_{\pi \left( 1\right) },{u}_{1}}\right)$ . Consider its right child 101. We can repeat the argument above. There is at most one leaf node below 101, with $l = \max \left( {{\gamma }_{\pi \left( 2\right) },{l}_{1}}\right)$ and $u = {u}_{10} = \min \left( {{\gamma }_{\pi \left( 1\right) },{u}_{1}}\right)$ .
|
| 326 |
+
|
| 327 |
+
All in all, at most $d - 1$ leaf nodes are not eliminated, namely those with labels ${10}\ldots {01}\ldots 1$ , and their boxes are $\left\lbrack {\max \left( {{\gamma }_{\pi \left( 1\right) },{l}_{1}}\right) ,{u}_{1}}\right\rbrack ,\left\lbrack {\max \left( {{\gamma }_{\pi \left( 2\right) },{l}_{1}}\right) ,\min \left( {{\gamma }_{\pi \left( 1\right) },{u}_{1}}\right) }\right\rbrack ,\ldots$ , $\left\lbrack {\max \left( {{\gamma }_{\pi \left( {d - 2}\right) },{l}_{1}}\right) ,\min \left( {{\gamma }_{\pi \left( {d - 3}\right) },{u}_{1}}\right) }\right\rbrack ,\left\lbrack {{l}_{1},\min \left( {{\gamma }_{\pi \left( {d - 2}\right) },{u}_{1}}\right) }\right\rbrack$ .
|
| 328 |
+
|
| 329 |
+
Recall that each node term is a product of $d - 2$ Gumbel CDFs times a box indicator. What are these products for our $d - 1$ non-eliminated leaf nodes? The first is $F\left( {{\varepsilon }_{s} - {\beta }_{\pi \left( 1\right) s}}\right) \cdots F\left( {{\varepsilon }_{s} - {\beta }_{\pi \left( {d - 2}\right) s}}\right)$ , the second is $F\left( {{\varepsilon }_{r} - {\alpha }_{\pi \left( 1\right) r}}\right) F\left( {{\varepsilon }_{s} - {\beta }_{\pi \left( 2\right) s}}\right) \cdots F\left( {{\varepsilon }_{s} - {\beta }_{\pi \left( {d - 2}\right) s}}\right)$ , the third is $F\left( {{\varepsilon }_{r} - {\alpha }_{\pi \left( 1\right) r}}\right) F\left( {{\varepsilon }_{r} - }\right.$ $\left. {\alpha }_{\pi \left( 2\right) r}\right) F\left( {{\varepsilon }_{s} - {\beta }_{\pi \left( 3\right) s}}\right) \cdots F\left( {{\varepsilon }_{s} - {\beta }_{\pi \left( {d - 2}\right) s}}\right)$ and the last one is $F\left( {{\varepsilon }_{r} - {\alpha }_{\pi \left( 1\right) r}}\right) \cdots F\left( {{\varepsilon }_{r} - {\alpha }_{\pi \left( {d - 2}\right) r}}\right)$ . Next, we derive expressions for the expectation of these terms.
|
| 330 |
+
|
| 331 |
+
### B.2 ANALYTICAL EXPRESSIONS FOR EXPECTATIONS
|
| 332 |
+
|
| 333 |
+
Consider $d - 2$ scalars ${a}_{1},\ldots ,{a}_{d - 2}$ and $1 \leq k \leq d - 1$ . We would like to compute
|
| 334 |
+
|
| 335 |
+
$$
|
| 336 |
+
A = \mathbb{E}\left\lbrack {\left( {\mathop{\prod }\limits_{{j < k}}F\left( {{\varepsilon }_{r} + {a}_{j}}\right) }\right) \left( {\mathop{\prod }\limits_{{j \geq k}}F\left( {{\varepsilon }_{s} + {a}_{j}}\right) }\right) {\mathbf{1}}_{l \leq {\varepsilon }_{r} - {\varepsilon }_{s} \leq u}}\right\rbrack . \tag{4}
|
| 337 |
+
$$
|
| 338 |
+
|
| 339 |
+
Denote
|
| 340 |
+
|
| 341 |
+
$$
|
| 342 |
+
G\left( {{a}_{1},\ldots ,{a}_{t}}\right) \mathrel{\text{:=}} \mathbb{E}\left\lbrack {F\left( {{\varepsilon }_{1} + {a}_{1}}\right) \cdots F\left( {{\varepsilon }_{1} + {a}_{t}}\right) }\right\rbrack .
|
| 343 |
+
$$
|
| 344 |
+
|
| 345 |
+
We start by showing that
|
| 346 |
+
|
| 347 |
+
$$
|
| 348 |
+
G\left( {{a}_{1},\ldots ,{a}_{t}}\right) = {\left( 1 + {e}^{-{a}_{1}} + \cdots + {e}^{-{a}_{t}}\right) }^{-1}.
|
| 349 |
+
$$
|
| 350 |
+
|
| 351 |
+
Recall that $p\left( x\right) = F{\left( x\right) }^{\prime } = {e}^{-x}F\left( x\right)$ . If $\widetilde{F}\left( x\right) = \mathop{\prod }\limits_{{j = 1}}^{t}F\left( {x + {a}_{j}}\right)$ , then
|
| 352 |
+
|
| 353 |
+
$$
|
| 354 |
+
\widetilde{F}{\left( x\right) }^{\prime } = \left( {\mathop{\sum }\limits_{{j = 1}}^{t}{e}^{-{a}_{j}}}\right) {e}^{-x}\widetilde{F}\left( x\right) .
|
| 355 |
+
$$
|
| 356 |
+
|
| 357 |
+
Using integration by parts:
|
| 358 |
+
|
| 359 |
+
$$
|
| 360 |
+
G\left( {{a}_{1},\ldots ,{a}_{t}}\right) = \int \widetilde{F}\left( x\right) p\left( x\right) {dx} = 1 - \int \widetilde{F}{\left( x\right) }^{\prime }F\left( x\right) {dx} = 1 - \left( {\mathop{\sum }\limits_{{j = 1}}^{t}{e}^{-{a}_{j}}}\right) G\left( {{a}_{1},\ldots ,{a}_{t}}\right) ,
|
| 361 |
+
$$
|
| 362 |
+
|
| 363 |
+
where we used that $F\left( x\right) = {e}^{x}p\left( x\right)$ .
|
| 364 |
+
|
| 365 |
+
Next, define
|
| 366 |
+
|
| 367 |
+
$$
|
| 368 |
+
{g}_{1} = \log \left( {1 + {e}^{-{a}_{1}} + \cdots + {e}^{-{a}_{k - 1}}}\right) ,\;{g}_{2} = \log \left( {1 + {e}^{-{a}_{k}} + \cdots + {e}^{-{a}_{d - 2}}}\right) .
|
| 369 |
+
$$
|
| 370 |
+
|
| 371 |
+
We show that $A$ in (4) can be written in terms of $\left( {{g}_{1},{g}_{2}, l, u}\right)$ only. Assume that $k > 1$ for now. Fix ${\varepsilon }_{s}$ and do the expectation over ${\varepsilon }_{r}$ . Note that ${\mathbf{1}}_{l \leq {\varepsilon }_{r} - {\varepsilon }_{s} \leq u} = {\mathbf{1}}_{{\varepsilon }_{s} + l \leq {\varepsilon }_{r} \leq {\varepsilon }_{s} + u}$ . If $\widetilde{F}\left( x\right) = \mathop{\prod }\limits_{{j < k}}F\left( {x + {a}_{j}}\right)$ , then
|
| 372 |
+
|
| 373 |
+
$$
|
| 374 |
+
\widetilde{F}{\left( x\right) }^{\prime } = \left( {\mathop{\sum }\limits_{{j < k}}{e}^{-{a}_{j}}}\right) {e}^{-x}\widetilde{F}\left( x\right) .
|
| 375 |
+
$$
|
| 376 |
+
|
| 377 |
+
Using integration by parts:
|
| 378 |
+
|
| 379 |
+
$$
|
| 380 |
+
B\left( {\varepsilon }_{s}\right) = {\int }_{{\varepsilon }_{s} + l}^{{\varepsilon }_{s} + u}\widetilde{F}\left( x\right) p\left( x\right) {dx} = {\left\lbrack \widetilde{F}\left( x\right) F\left( x\right) \right\rbrack }_{{\varepsilon }_{s} + l}^{{\varepsilon }_{s} + u} - B\left( {\varepsilon }_{s}\right) \mathop{\sum }\limits_{{j < k}}{e}^{-{a}_{j}},
|
| 381 |
+
$$
|
| 382 |
+
|
| 383 |
+
so that
|
| 384 |
+
|
| 385 |
+
$$
|
| 386 |
+
B\left( {\varepsilon }_{s}\right) = {e}^{-{g}_{1}}{\left\lbrack \widetilde{F}\left( x\right) F\left( x\right) \right\rbrack }_{{\varepsilon }_{s} + l}^{{\varepsilon }_{s} + u}
|
| 387 |
+
$$
|
| 388 |
+
|
| 389 |
+
and
|
| 390 |
+
|
| 391 |
+
$$
|
| 392 |
+
A = \mathbb{E}\left\lbrack {B\left( {\varepsilon }_{s}\right) \mathop{\prod }\limits_{{j \geq k}}F\left( {{\varepsilon }_{s} + {a}_{j}}\right) }\right\rbrack = {A}_{1} - {A}_{2},
|
| 393 |
+
$$
|
| 394 |
+
|
| 395 |
+
where
|
| 396 |
+
|
| 397 |
+
$$
|
| 398 |
+
{A}_{1} = {e}^{-{g}_{1}}\mathbb{E}\left\lbrack {\left( {\mathop{\prod }\limits_{{j < k}}F\left( {{\varepsilon }_{s} + u + {a}_{j}}\right) }\right) \left( {\mathop{\prod }\limits_{{j \geq k}}F\left( {{\varepsilon }_{s} + {a}_{j}}\right) }\right) F\left( {{\varepsilon }_{s} + u}\right) }\right\rbrack
|
| 399 |
+
$$
|
| 400 |
+
|
| 401 |
+
$$
|
| 402 |
+
= {e}^{-{g}_{1}}G\left( {{a}_{1} + u,{a}_{2} + u,\ldots ,{a}_{k - 1} + u,{a}_{k},\ldots ,{a}_{d - 2}, u}\right)
|
| 403 |
+
$$
|
| 404 |
+
|
| 405 |
+
and
|
| 406 |
+
|
| 407 |
+
$$
|
| 408 |
+
{A}_{2} = {e}^{-{g}_{1}}G\left( {{a}_{1} + l,{a}_{2} + l,\ldots ,{a}_{k - 1} + l,{a}_{k},\ldots ,{a}_{d - 2}, l}\right) .
|
| 409 |
+
$$
|
| 410 |
+
|
| 411 |
+
Now,
|
| 412 |
+
|
| 413 |
+
$$
|
| 414 |
+
- \log {A}_{1} = {g}_{1} - \log G\left( {{a}_{1} + u,{a}_{2} + u,\ldots ,{a}_{k - 1} + u,{a}_{k},\ldots ,{a}_{d - 2}, u}\right)
|
| 415 |
+
$$
|
| 416 |
+
|
| 417 |
+
$$
|
| 418 |
+
= {g}_{1} + \log \left( {1 + \mathop{\sum }\limits_{{j < k}}{e}^{-{a}_{j} - u} + \mathop{\sum }\limits_{{j \geq k}}{e}^{-{a}_{j}} + {e}^{-u}}\right) = {g}_{1} + \log \left( {{e}^{{g}_{2}} + {e}^{-u + {g}_{1}}}\right)
|
| 419 |
+
$$
|
| 420 |
+
|
| 421 |
+
$$
|
| 422 |
+
= {g}_{1} + {g}_{2} + \log \left( {1 + {e}^{{g}_{1} - {g}_{2} - u}}\right)
|
| 423 |
+
$$
|
| 424 |
+
|
| 425 |
+
and
|
| 426 |
+
|
| 427 |
+
$$
|
| 428 |
+
- \log {A}_{2} = {g}_{1} + {g}_{2} + \log \left( {1 + {e}^{{g}_{1} - {g}_{2} - l}}\right)
|
| 429 |
+
$$
|
| 430 |
+
|
| 431 |
+
so that
|
| 432 |
+
|
| 433 |
+
$$
|
| 434 |
+
A = {A}_{1} - {A}_{2} = {e}^{-\left( {{g}_{1} + {g}_{2}}\right) }\left( {\sigma \left( {{g}_{2} - {g}_{1} + u}\right) - \sigma \left( {{g}_{2} - {g}_{1} + l}\right) }\right) ,\;\sigma \left( x\right) \mathrel{\text{:=}} \frac{1}{1 + {e}^{-x}}. \tag{5}
|
| 435 |
+
$$
|
| 436 |
+
|
| 437 |
+
If $k = 1$ , we can flip the roles of ${\varepsilon }_{r}$ and ${\varepsilon }_{s}$ by ${g}_{1} \leftrightarrow {g}_{2}, l \rightarrow - u, u \rightarrow - l, k \rightarrow d - 1$ , which gives
|
| 438 |
+
|
| 439 |
+
$$
|
| 440 |
+
{e}^{-\left( {{g}_{1} + {g}_{2}}\right) }\left( {\sigma \left( {-\left( {{g}_{2} - {g}_{1} + l}\right) }\right) - \sigma \left( {-\left( {{g}_{2} - {g}_{1} + u}\right) }\right) }\right) = {e}^{-\left( {{g}_{1} + {g}_{2}}\right) }\left( {\sigma \left( {{g}_{2} - {g}_{1} + u}\right) - \sigma \left( {{g}_{2} - {g}_{1} + l}\right) }\right) ,
|
| 441 |
+
$$
|
| 442 |
+
|
| 443 |
+
using $\sigma \left( {-x}\right) = 1 - \sigma \left( x\right)$ , so the expression holds in this case as well.
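As a quick sanity check of this closed form (illustrative only, with arbitrary scalars ${a}_{j}$ and an arbitrary box $\left\lbrack l, u\right\rbrack$ ), one can compare Eq. (5) against a direct Monte Carlo evaluation of Eq. (4):

```python
# Sketch: numeric check of Eq. (5) against the expectation in Eq. (4).
import numpy as np

rng = np.random.default_rng(0)
F = lambda x: np.exp(-np.exp(-x))                  # standard Gumbel CDF
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

a = np.array([0.4, -0.2, 1.1, 0.0])                # the d - 2 scalars a_1, ..., a_{d-2}
k, l, u = 2, -0.5, 1.3                             # split index and box bounds
g1 = np.log(1.0 + np.sum(np.exp(-a[: k - 1])))
g2 = np.log(1.0 + np.sum(np.exp(-a[k - 1:])))
closed_form = np.exp(-(g1 + g2)) * (sigmoid(g2 - g1 + u) - sigmoid(g2 - g1 + l))

m = 500_000
er, es = rng.gumbel(size=m), rng.gumbel(size=m)
inside = (l <= er - es) & (er - es <= u)
mc = np.mean(np.prod(F(er[:, None] + a[: k - 1]), axis=1)
             * np.prod(F(es[:, None] + a[k - 1:]), axis=1) * inside)
print(closed_form, mc)                             # should agree up to Monte Carlo error
```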
|
| 444 |
+
|
| 445 |
+
### B.3 Efficient Computation for All Pairs
|
| 446 |
+
|
| 447 |
+
Our $d - 1$ terms of interest can be indexed by $k = 1,\ldots , d - 1$ . We can use the analytical expression just given with ${a}_{j} = - {\alpha }_{\pi \left( j\right) r}$ for $1 \leq j < k$ and ${a}_{j} = - {\beta }_{\pi \left( j\right) s}$ for $k \leq j \leq d - 2$ . Define
|
| 448 |
+
|
| 449 |
+
$$
|
| 450 |
+
{g}_{1}\left( k\right) = \log \left( {1 + \mathop{\sum }\limits_{{1 \leq j < k}}{e}^{{\alpha }_{\pi \left( j\right) } - {\alpha }_{r}}}\right) ,\;{g}_{2}\left( k\right) = \log \left( {1 + \mathop{\sum }\limits_{{k \leq j \leq d - 2}}{e}^{{\beta }_{\pi \left( j\right) } - {\beta }_{s}}}\right) ,
|
| 451 |
+
$$
|
| 452 |
+
|
| 453 |
+
as well as
|
| 454 |
+
|
| 455 |
+
$$
|
| 456 |
+
l\left( k\right) = \max \left( {{\gamma }_{\pi \left( k\right) },{l}_{1}}\right) ,\;u\left( k\right) = \min \left( {{\gamma }_{\pi \left( {k - 1}\right) },{u}_{1}}\right) ,
|
| 457 |
+
$$
|
| 458 |
+
|
| 459 |
+
where we define $\pi \left( 0\right) = 0,\pi \left( {d - 1}\right) = d + 1,{\gamma }_{0} = + \infty$ , and ${\gamma }_{d + 1} = - \infty$ . Note that
|
| 460 |
+
|
| 461 |
+
$$
|
| 462 |
+
l\left( k\right) = \max \left( {{\rho }_{\pi \left( k\right) } - {\alpha }_{r} + {\beta }_{s},{\alpha }_{s} - {\alpha }_{r}}\right) = {\beta }_{s} - {\alpha }_{r} + \max \left( {{\rho }_{\pi \left( k\right) },{\rho }_{s}}\right) , \tag{6}
|
| 463 |
+
$$
|
| 464 |
+
|
| 465 |
+
$$
|
| 466 |
+
u\left( k\right) = \min \left( {{\rho }_{\pi \left( {k - 1}\right) } - {\alpha }_{r} + {\beta }_{s},{\beta }_{s} - {\beta }_{r}}\right) = {\beta }_{s} - {\alpha }_{r} + \min \left( {{\rho }_{\pi \left( {k - 1}\right) },{\rho }_{r}}\right) .
|
| 467 |
+
$$
|
| 468 |
+
|
| 469 |
+
${\mathcal{P}}_{rs}$ is obtained as sum of $A\left( {{g}_{1}\left( k\right) ,{g}_{2}\left( k\right) , l\left( k\right) , u\left( k\right) }\right)$ for $k = 1,\ldots , d - 1$ . In the sequel, we show how to compute these terms efficiently, for all pairs $r < s$ .
|
| 470 |
+
|
| 471 |
+
Recall that ${\gamma }_{j} = {\rho }_{j} - \left( {{\alpha }_{r} - {\beta }_{s}}\right) ,{u}_{1} = {\beta }_{s} - {\beta }_{r},{l}_{1} = {\alpha }_{s} - {\alpha }_{r}$ . Then:
|
| 472 |
+
|
| 473 |
+
$$
|
| 474 |
+
l\left( k\right) < u\left( k\right) \Leftrightarrow {\rho }_{\pi \left( k\right) } < {\rho }_{\pi \left( {k - 1}\right) } \land {\rho }_{\pi \left( k\right) } < {\rho }_{r} \land {\rho }_{s} < {\rho }_{\pi \left( {k - 1}\right) }.
|
| 475 |
+
$$
|
| 476 |
+
|
| 477 |
+
Recall that $\pi \left( k\right) = k + {\mathbf{1}}_{r \leq k} + {\mathbf{1}}_{s - 1 \leq k}$ . Define ${K}_{1} = \{ 1,\ldots , r - 1\} ,{K}_{3} = \{ s,\ldots , d - 1\}$ , each of which can be empty. For $k \in {K}_{1},{\rho }_{\pi \left( k\right) } = {\rho }_{k} \geq {\rho }_{r}$ , so $l\left( k\right) \geq u\left( k\right)$ . For $k \in {K}_{3}$ , we have $\pi \left( {k - 1}\right) = k + 1 > s$ , so that ${\rho }_{s} \geq {\rho }_{\pi \left( {k - 1}\right) }$ and $l\left( k\right) \geq u\left( k\right)$ . This means we only need to iterate over $k \in {K}_{2} = \{ r,\ldots , s - 2\}$ with $\pi \left( k\right) = k + 1$ and $k = s - 1$ with $\pi \left( k\right) = s + 1$ (the latter only if $s < d$ ).
|
| 478 |
+
|
| 479 |
+
As $k$ runs in ${K}_{2},\pi \left( k\right) = r + 1,\ldots , s - 1$ , and if $s < d$ then $\pi \left( {s - 1}\right) = s + 1$ . Now
|
| 480 |
+
|
| 481 |
+
$$
|
| 482 |
+
{g}_{1}\left( k\right) = \log \left( {1 + \mathop{\sum }\limits_{{1 \leq j < k}}{e}^{{\alpha }_{\pi \left( j\right) } - {\alpha }_{r}}}\right) = \log \mathop{\sum }\limits_{{1 \leq j \leq k}}{e}^{{\alpha }_{j} - {\alpha }_{r}},
|
| 483 |
+
$$
|
| 484 |
+
|
| 485 |
+
using that ${e}^{{\alpha }_{r} - {\alpha }_{r}} = 1$ . For ${g}_{2}\left( k\right)$ , if $k < s - 1$ , then $\{ \pi \left( j\right) \mid k \leq j \leq d - 2\} = \{ k + 1,\ldots , d\} \smallsetminus \{ s\}$ , and if $k = s - 1$ , the same holds true (the set is empty if $s = d$ ). Using ${e}^{{\beta }_{s} - {\beta }_{s}} = 1$ , we have
|
| 486 |
+
|
| 487 |
+
$$
|
| 488 |
+
{g}_{2}\left( k\right) = \log \mathop{\sum }\limits_{{k < j \leq d}}{e}^{{\beta }_{j} - {\beta }_{s}}.
|
| 489 |
+
$$
|
| 490 |
+
|
| 491 |
+
Define
|
| 492 |
+
|
| 493 |
+
$$
|
| 494 |
+
{\bar{\alpha }}_{k} \mathrel{\text{:=}} \log \mathop{\sum }\limits_{{j = 1}}^{k}{e}^{{\alpha }_{j}},\;{\bar{\beta }}_{k} \mathrel{\text{:=}} \log \mathop{\sum }\limits_{{j = k + 1}}^{d}{e}^{{\beta }_{j}},\;k = 1,\ldots , d - 1.
|
| 495 |
+
$$
|
| 496 |
+
|
| 497 |
+
Then:
|
| 498 |
+
|
| 499 |
+
$$
|
| 500 |
+
{g}_{1}\left( k\right) = {\bar{\alpha }}_{k} - {\alpha }_{r},\;{g}_{2}\left( k\right) = {\bar{\beta }}_{k} - {\beta }_{s},\;k = r,\ldots , s - 1.
|
| 501 |
+
$$
|
| 502 |
+
|
| 503 |
+
Finally, using ${g}_{2}\left( k\right) - {g}_{1}\left( k\right) = {\bar{\beta }}_{k} - {\bar{\alpha }}_{k} + {\alpha }_{r} - {\beta }_{s}$ and (6), we have
|
| 504 |
+
|
| 505 |
+
$$
|
| 506 |
+
{g}_{2}\left( k\right) - {g}_{1}\left( k\right) + l\left( k\right) = {\bar{\beta }}_{k} - {\bar{\alpha }}_{k} + \max \left( {{\rho }_{\pi \left( k\right) },{\rho }_{s}}\right) ,\;{g}_{2}\left( k\right) - {g}_{1}\left( k\right) + u\left( k\right) = {\bar{\beta }}_{k} - {\bar{\alpha }}_{k} + \min \left( {{\rho }_{\pi \left( {k - 1}\right) },{\rho }_{r}}\right) .
|
| 507 |
+
$$
|
| 508 |
+
|
| 509 |
+
Some extra derivation, distinguishing between (a) $r = s - 1$ , (b) $r < s - 1 \land k \in {K}_{2}$ , and (c) $r < s - 1 \land k = s - 1$ , shows that
|
| 510 |
+
|
| 511 |
+
$$
|
| 512 |
+
\max \left( {{\rho }_{\pi \left( k\right) },{\rho }_{s}}\right) = {\rho }_{k + 1},\;\min \left( {{\rho }_{\pi \left( {k - 1}\right) },{\rho }_{r}}\right) = {\rho }_{k},\;k = r,\ldots , s - 1.
|
| 513 |
+
$$
|
| 514 |
+
|
| 515 |
+
Plugging this into (5):
|
| 516 |
+
|
| 517 |
+
$$
|
| 518 |
+
A\left( k\right) = {e}^{{\alpha }_{r} + {\beta }_{s}}{c}_{k},\;{c}_{k} = {e}^{-{\bar{\beta }}_{k} - {\bar{\alpha }}_{k}}\left( {\sigma \left( {{\bar{\beta }}_{k} - {\bar{\alpha }}_{k} + {\rho }_{k}}\right) - \sigma \left( {{\bar{\beta }}_{k} - {\bar{\alpha }}_{k} + {\rho }_{k + 1}}\right) }\right) .
|
| 519 |
+
$$
|
| 520 |
+
|
| 521 |
+
and ${\mathcal{P}}_{rs} = \mathop{\sum }\limits_{{k = r}}^{{s - 1}}A\left( k\right)$ . Importantly, ${c}_{k}$ does not depend on $r, s$ . Therefore:
|
| 522 |
+
|
| 523 |
+
$$
|
| 524 |
+
{\mathcal{P}}_{rs} = {e}^{{\alpha }_{r} + {\beta }_{s}}\left( {{C}_{s} - {C}_{r}}\right) ,\;{C}_{t} = \mathop{\sum }\limits_{{k = 1}}^{{t - 1}}{c}_{k}\;\left( {r < s}\right) ;\;{\mathcal{P}}_{rs} = 0\;\left( {r > s}\right) . \tag{7}
|
| 525 |
+
$$
|
| 526 |
+
|
| 527 |
+
The sequences $\left\lbrack {\bar{\alpha }}_{k}\right\rbrack ,\left\lbrack {\bar{\beta }}_{k}\right\rbrack ,\left\lbrack {c}_{k}\right\rbrack ,\left\lbrack {C}_{k}\right\rbrack$ can be computed in $\mathcal{O}\left( d\right)$ .
|
| 528 |
+
|
| 529 |
+
Finally, we also determine ${\mathcal{P}}_{rr}$ , which is defined by the inequalities ${\varepsilon }_{j} \leq {\varepsilon }_{r} - \max \left( {{\alpha }_{jr},{\beta }_{jr}}\right)$ . A derivation like above (but simpler) gives:
|
| 530 |
+
|
| 531 |
+
$$
|
| 532 |
+
{\mathcal{P}}_{rr} = {\left( 1 + \mathop{\sum }\limits_{{j \neq r}}{e}^{\max \left( {{\alpha }_{jr},{\beta }_{jr}}\right) }\right) }^{-1}.
|
| 533 |
+
$$
|
| 534 |
+
|
| 535 |
+
Now, ${\alpha }_{jr} \geq {\beta }_{jr}$ iff ${\rho }_{j} \geq {\rho }_{r}$ iff $j < r$ , so that
|
| 536 |
+
|
| 537 |
+
$$
|
| 538 |
+
{\mathcal{P}}_{rr} = {\left( 1 + \mathop{\sum }\limits_{{j < r}}{e}^{{\alpha }_{j} - {\alpha }_{r}} + \mathop{\sum }\limits_{{j > r}}{e}^{{\beta }_{j} - {\beta }_{r}}\right) }^{-1} = {\left( {e}^{{\bar{\alpha }}_{r} - {\alpha }_{r}} + {e}^{{\bar{\beta }}_{r} - {\beta }_{r}}\right) }^{-1} = {e}^{{\beta }_{r} - {\bar{\beta }}_{r}}\sigma \left( {{\bar{\beta }}_{r} - {\bar{\alpha }}_{r} + {\rho }_{r}}\right) ,\;\left( {r < d}\right) ,
|
| 539 |
+
$$
|
| 540 |
+
|
| 541 |
+
$$
|
| 542 |
+
{\mathcal{P}}_{dd} = {e}^{{\alpha }_{d} - {\bar{\alpha }}_{d}}.
|
| 543 |
+
$$
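For completeness, the full matrix $\left\lbrack {\mathcal{P}}_{rs}\right\rbrack$ obtained from Eq. (7) and the diagonal expressions above can be validated against brute-force Gumbel sampling; the snippet below (illustrative, with arbitrary logits satisfying the ordering assumption) produces the empirical joint to compare against the closed-form sketch given in Appendix A.

```python
# Sketch: brute-force estimate of P_rs = P(argmax(alpha + eps) = r, argmax(beta + eps) = s),
# to compare against the closed form (e.g. the `categorical_joint` sketch of Appendix A).
import numpy as np

rng = np.random.default_rng(1)
d = 4
alpha = np.sort(rng.normal(size=d))[::-1]   # with beta = 0, rho = alpha is nonincreasing
beta = np.zeros(d)

m = 200_000
counts = np.zeros((d, d))
for _ in range(m):
    eps = rng.gumbel(size=d)
    counts[np.argmax(alpha + eps), np.argmax(beta + eps)] += 1.0
print(np.round(counts / m, 3))              # empirical joint; compare with the closed form
```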
|
| 544 |
+
|
| 545 |
+
## C RELATED WORK IN COOPERATIVE GAME THEORY.
|
| 546 |
+
|
| 547 |
+
The Shapley value of simple games has a probabilistic interpretation (Peleg & Sudhölter, 2007, p. 168); however, simple games are not Categorical games. An and-or axiom substitutes the linearity axiom in simple games (Weber, 1988), whereas here we address probabilistic combinations. Stochastic games are typically intended as multi-stage games where the transition between stages, and not the payoff itself, is stochastic (Shapley, 1953b; Petrosjan, 2006). Static cooperative games with stochastic payoffs have been considered from the perspective of coalition formation and notions of players' utility (e.g. Suijs et al., 1999), by studying two-stage setups - before and after the realisation of the payoff (e.g. Granot, 1977) - and from an optimization perspective (Sun et al., 2022). To the best of our knowledge, our setting and constructions have not been studied before.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/SG3ztVYDubA/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,99 @@
|
| 1 |
+
§ EXPLAINING MULTICLASS CLASSIFIERS WITH CATEGORICAL VALUES: A CASE STUDY IN RADIOGRAPHY
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Explainability of machine learning methods is of fundamental importance in healthcare to calibrate trust. A large branch of explainable machine learning uses tools linked to the Shapley value, which have nonetheless been found difficult to interpret and potentially misleading. Taking multi-class classification as a reference task, we argue that a critical issue in these methods is that they disregard the structure of the model's output. We develop the Categorical Shapley value as a theoretically-grounded method to explain the output of multi-class classifiers in terms of transition (or flipping) probabilities across classes. We demonstrate the approach on a case study composed of three example scenarios on pneumonia detection and subtyping using radiography images.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Machine learning (ML) has emerged as a powerful tool in healthcare with the potential to revolutionize the way we diagnose, treat and prevent diseases. ML algorithms have a wide range of applications including early detection of diseases, risk prediction for patients developing certain conditions, optimisation of treatment plans, improved prognosis, assistance in clinical decision-making, gene expression analysis, genomic classification, improved personalized patient care and more. However, the adoption of ML in clinical practice has often been hampered by the opaqueness of ML models. This opaqueness may make clinicians and other end-users, such as patients or care-givers, skeptical about trusting model recommendations without understanding the reasoning behind their predictions, which delays and/or decreases the adoption of state-of-the-art technologies and hinders further advances.
|
| 14 |
+
|
| 15 |
+
Various methods have been proposed in the literature to enhance the explainability of ML models (XAI). Among these, (local) feature attribution methods such as SHAP (Lundberg & Lee, 2017) or variants (e.g. Frye et al., 2020; Chen et al., 2018; Heskes et al., 2020) have gained considerable traction. In fact, Shapley value based explanations are the most popular explainability method according to a recent study by Bhatt et al. (2020). These methods, supported by a number of axioms (properties) such as nullity, linearity and efficiency, provide insight into the contribution of each feature toward the model's decision. There is, however, growing scrutiny of the utility of these techniques, which are judged to be un-intuitive and potentially misleading (Kumar et al., 2020; Mittelstadt et al., 2019), and do not support contrastive statements (Miller, 2019). While part of these issues may be rooted in misinterpretations of the technical tools involved ${}^{1}$ , in this paper we argue that a critical flaw in current approaches in the area is their failure to capture relevant structure of the object one wishes to explain (the explicandum). In contrast, we take the position that attributive explanations should comply with the nature of the explicandum: in particular, if the model output is a RV, we should represent marginal contributions as RVs as well. Our contribution, which we dub the Categorical Shapley value, can fully support statements such as "the probability that the feature ${x}_{i}$ made $x$ be classified as a viral pneumonia rather than a healthy lung is $y$ ", which we develop, experiment with and discuss in this paper within the context of radiography.
|
| 16 |
+
|
| 17 |
+
${}^{1}$ For instance, the Shapley value is a descriptive rather than prescriptive tool. This means that, in general, one should not expect that changing the feature with the highest Shapley value should lead to the largest change in the outcome.
|
| 18 |
+
|
| 19 |
+
§ 1.1 THE SHAPLEY VALUE AND ITS APPLICATION TO EXPLAIN MULTICLASS CLASSIFIERS
|
| 20 |
+
|
| 21 |
+
For concreteness, we focus here on multi-class classification ( $d$ classes) as one of the most common tasks in ML. Let $f : \mathcal{X} \subseteq {\mathbb{R}}^{n} \mapsto \mathcal{Y}$ be a (trained) multi-class classifier and $x \in \mathcal{X}$ an input point. One common strategy to explain the behaviour of the model at $x$ is to attribute an importance score to each input feature through the computation of the Shapley value (SV) (Shapley, 1953a). In order to do so, one must first construct a cooperative game $v$ where players correspond to features and coalitions correspond to features being used: that is, $v\left( S\right) = f\left( {x}_{\mid S}\right)$ , where $S \in {2}^{\left\lbrack n\right\rbrack }$ . ${}^{2}$ Then, for each $i \in \left\lbrack n\right\rbrack$ , the Shapley value is given by
|
| 22 |
+
|
| 23 |
+
$$
|
| 24 |
+
{\psi }_{i}\left( v\right) = \mathop{\sum }\limits_{{S \in {2}^{\left\lbrack n\right\rbrack \smallsetminus i}}}p\left( S\right) \left\lbrack {v\left( {S \cup i}\right) - v\left( S\right) }\right\rbrack = {\mathbb{E}}_{S \sim p\left( S\right) }\left\lbrack {v\left( {S \cup i}\right) - v\left( S\right) }\right\rbrack ; \tag{1}
|
| 25 |
+
$$
|
| 26 |
+
|
| 27 |
+
where $p\left( S\right) = \frac{1}{n}{\binom{n - 1}{\left| S\right| }}^{-1}$ if $i \notin S$ and 0 otherwise. The quantity $v\left( {S \cup i}\right) - v\left( S\right)$ is called the marginal contribution of $i$ to coalition $S$ . See Roth (1988) for an in-depth discussion of the SV and surrounding topics.
|
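To make Eq. (1) concrete, the following minimal Python sketch computes the Shapley value exactly for a small cooperative game; the toy payoff `v` below (squared coalition size) is a hypothetical example, not one of the models discussed in this paper.

```python
from itertools import combinations
from math import comb

def shapley_values(v, n):
    """Exact Shapley values via Eq. (1): psi_i = sum_S p(S) [v(S u {i}) - v(S)]."""
    psi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            weight = 1.0 / (n * comb(n - 1, size))   # p(S) for |S| = size
            for S in combinations(others, size):
                psi[i] += weight * (v(set(S) | {i}) - v(set(S)))
    return psi

# Hypothetical toy game: the payoff of a coalition is its squared size.
v = lambda S: len(S) ** 2
print(shapley_values(v, 3))   # [3.0, 3.0, 3.0]
```

For this toy game the three values are equal by symmetry and sum to $v\left( \left\lbrack n\right\rbrack \right) - v\left( \varnothing \right) = 9$, illustrating the efficiency property mentioned below.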
| 28 |
+
|
| 29 |
+
Historically, the SV was developed as an answer to the question: how can we assign a worth (or value) to each player $i$ ? The SV does so by distributing "fairly" the grand payoff $v\left( \left\lbrack n\right\rbrack \right)$ among players, so that (i) if a player never contributes to the payoff, their worth is null, (ii) if any two players have indistinguishable marginal contributions, they have the same worth, and (iii) if $v$ is a linear combination of two games, say $u$ and $w$ , then the worth of $i$ for $v$ is the corresponding linear combination of their worth for $u$ and $w$ . The game $v$ could typically represent an economic or political process (e.g. a vote) and, critically, would be modelled as a real-valued set function; i.e. $v : {2}^{\left\lbrack n\right\rbrack } \mapsto \mathbb{R}$ , so that ${\psi }_{i}\left( v\right) \in \mathbb{R}$ .
|
| 30 |
+
|
| 31 |
+
§ 2 CATEGORICAL GAMES AND VALUES
|
| 32 |
+
|
| 33 |
+
In our case, the grand payoff is the output $f\left( x\right)$ that determines the class the model assigns to $x$ . Whilst in practice $f$ could be implemented in various ways, several modern ML models (e.g. neural nets) output distributions over the classes - e.g. through a softmax layer. Equivalently, one may think of $f\left( x\right)$ as an $E$ -valued (categorical) random variable. Using the one-hot-encoding convention, we identify $E = {\left\{ {e}_{s}\right\} }_{s = 1}^{d}$ with the one-hot vectors of the canonical basis of ${\mathbb{R}}^{d}$ . Now, however, it becomes unclear which real number should be assigned to a difference of random variables. Moreover, averaging over coalitions $S$ , as done in Eq. (1), may also induce a semantic gap in this context. To recover the standard pipeline for computing the SV, one may settle for explaining the logits or the class probabilities as if they were independent scalars. However, this may lead to paradoxical explanations that attribute high importance to a certain feature (say ${x}_{1}$ ) for all classes, failing to capture the fact that an increase in the likelihood of a given class must necessarily result in an aggregated decrease of the likelihood of the others. Here we show how to avoid the step that causes this loss of structure and instead explain $f\left( x\right)$ directly.
|
| 34 |
+
|
| 35 |
+
For a player $i$ and a coalition $S$ not containing $i$ , we need to relate $v\left( S\right)$ with $v\left( {S \cup i}\right)$ in order to quantify the marginal contribution of $i$ to $S$ . This relationship is not just in terms of the marginal distributions of these two variables, but also of their dependence. In this paper, we assume a simple dependency structure between all variables $v\left( S\right)$ , in that $v\left( S\right) = \widetilde{v}\left( {S,\varepsilon }\right)$ for $\varepsilon \sim p\left( \varepsilon \right)$ , where $\widetilde{v}$ is a deterministic mapping to $E$ and $\varepsilon$ is a random variable distributed according to some $p\left( \varepsilon \right)$ . Let $v\left( S\right)$ be a $d$ -way categorical distribution with natural parameters ${\theta }_{S,j}$ , such that
|
| 36 |
+
|
| 37 |
+
$$
|
| 38 |
+
\mathbb{P}\left( {v\left( S\right) = j}\right) = \frac{{e}^{{\theta }_{S,j}}}{\mathop{\sum }\limits_{k}{e}^{{\theta }_{S,k}}} = {\left\lbrack \operatorname{Softmax}\left( {\theta }_{S}\right) \right\rbrack }_{j}.
|
| 39 |
+
$$
|
| 40 |
+
|
| 41 |
+
We call such a $v$ a Categorical game. We can implement the aforementioned dependency assumption by the Gumbel-argmax reparameterization (Papandreou & Yuille, 2011): $\widetilde{v}\left( {S,\varepsilon }\right) = \arg \mathop{\max }\limits_{k}\left\{ {{\theta }_{S,k} + {\varepsilon }_{k}}\right\}$ , where ${\varepsilon }_{1},\ldots ,{\varepsilon }_{d}$ are independent standard Gumbel variables.
|
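As a minimal sketch of this reparameterization, the snippet below draws a single realisation of the coupled pair $\left( v\left( S\right), v\left( {S \cup i}\right) \right)$ from shared Gumbel noise; the logit values in `theta` are hypothetical placeholders for the classifier's outputs on coalition-masked inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def v_tilde(theta_S, eps):
    """Gumbel-argmax reparameterization: v(S) = argmax_k { theta_{S,k} + eps_k }."""
    return int(np.argmax(theta_S + eps))

# Hypothetical logits (natural parameters) for a coalition S and for S u {0}, with d = 3 classes.
theta = {frozenset(): np.array([0.2, 0.1, -0.3]),
         frozenset({0}): np.array([-0.5, 1.2, 0.4])}

eps = rng.gumbel(size=3)                       # shared standard Gumbel noise couples the two payoffs
draw = (v_tilde(theta[frozenset()], eps),      # one realisation of v(S)
        v_tilde(theta[frozenset({0})], eps))   # the coupled realisation of v(S u {0})
print(draw)
```

Because both payoffs share the same $\varepsilon$, their difference is a well-defined draw from the marginal-contribution distribution introduced next.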
| 42 |
+
|
| 43 |
+
Given this construction, we redefine the marginal contribution of $i$ to $S$ as the random variable $\widetilde{v}\left( {S \cup i,\varepsilon }\right) - \widetilde{v}\left( {S,\varepsilon }\right)$ for $\varepsilon \sim p\left( \varepsilon \right)$ . This RV takes values in the set $E - E = \left\{ {e - {e}^{\prime } \mid e,{e}^{\prime } \in E}\right\}$ ; we shall call its distribution
|
| 44 |
+
|
| 45 |
+
$$
|
| 46 |
+
{q}_{i,S}\left( z\right) = \mathbb{P}\left( {v\left( {S \cup i}\right) - v\left( S\right) = z \mid S}\right) ,\;z \in E - E.
|
| 47 |
+
$$
|
| 48 |
+
|
| 49 |
+
${}^{2}$ In practice, out-of-coalition features must often be given a value; this could be an arbitrary baseline, a global average or a conditional average (Sundararajan & Najmi, 2020; Aas et al., 2021).
|
| 50 |
+
|
| 51 |
+
Note that ${q}_{i,S}\left( z\right)$ is a conditional distribution, given $S \in {2}^{\left\lbrack n\right\rbrack \smallsetminus i}$ , and that $E - E$ is a set containing $0 \in {\mathbb{R}}^{d}$ and all vectors that have exactly two non-zero entries, one with value $+1$ and the other $-1$ .
|
| 52 |
+
|
| 53 |
+
We can view this as a generalized difference operation $v\left( {S \cup i}\right) \ominus v\left( S\right)$ between random variables rather than deterministic values, where $\ominus$ incorporates the above dependency assumption. We define our Categorical Shapley value as the random variable $\xi \left( v\right) = {\left\{ {\xi }_{i}\right\} }_{i \in \left\lbrack n\right\rbrack }$ , where
|
| 54 |
+
|
| 55 |
+
$$
|
| 56 |
+
{\xi }_{i}\left( v\right) = v\left( {S \cup i}\right) \ominus v\left( S\right) = \widetilde{v}\left( {S \cup i,\varepsilon }\right) - \widetilde{v}\left( {S,\varepsilon }\right) \;\text{ for }\varepsilon \sim p\left( \varepsilon \right) \text{ and }S \sim p\left( S\right) . \tag{2}
|
| 57 |
+
$$
|
| 58 |
+
|
| 59 |
+
Note that these RVs have multiple sources of randomness, which are independent of each other. We can marginalise over $p\left( S\right)$ to obtain the distribution ${q}_{i}\left( z\right)$ of ${\xi }_{i}\left( v\right)$ : for every $z \in E - E$ ,
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
{q}_{i}\left( z\right) = \mathbb{P}\left( {{\xi }_{i}\left( v\right) = z}\right) = {\mathbb{E}}_{S \sim p\left( S\right) }\left\lbrack {{q}_{i,S}\left( z\right) }\right\rbrack = \mathop{\sum }\limits_{{S \in {2}^{\left\lbrack n\right\rbrack \smallsetminus i}}}p\left( S\right) {q}_{i,S}\left( z\right) . \tag{3}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
One major advantage of this novel construction is that the distribution of the Categorical SV is straightforward to interpret. Indeed, the probability mass at each point $z = {e}_{r} - {e}_{s} \in E - E$ is interpretable as the probability (averaged over coalitions) that player $i$ causes the payoff of $v$ (and hence the prediction of $f$ ) to flip from class $s$ to class $r$ . We refer to ${q}_{i}\left( {{e}_{r} - {e}_{s}}\right)$ as the transition probability induced by feature $i$ .
|
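A Monte Carlo sketch of these transition probabilities, under the assumptions above: coalitions are drawn from $p\left( S\right)$, shared Gumbel noise couples $v\left( S\right)$ and $v\left( {S \cup i}\right)$, and class flips are tallied. The wrapper `logits_for`, returning the classifier's logits on a coalition-masked input, is a hypothetical stand-in.

```python
import numpy as np
from itertools import combinations
from math import comb

rng = np.random.default_rng(0)

def transition_probs(logits_for, n, d, i, n_draws=2000):
    """Monte Carlo estimate of q_i: Q[r, s] is the probability (over p(S) and the shared
    Gumbel noise) that adding feature i flips the prediction from class s to class r;
    the diagonal collects the mass of z = 0, i.e. no flip."""
    others = [j for j in range(n) if j != i]
    coalitions, weights = [], []
    for size in range(n):
        w = 1.0 / (n * comb(n - 1, size))            # p(S)
        for S in combinations(others, size):
            coalitions.append(frozenset(S))
            weights.append(w)
    weights = np.array(weights)                       # sums to 1 by construction
    Q = np.zeros((d, d))
    for _ in range(n_draws):
        S = coalitions[rng.choice(len(coalitions), p=weights)]
        eps = rng.gumbel(size=d)                      # shared noise couples v(S) and v(S u {i})
        s = int(np.argmax(logits_for(S) + eps))
        r = int(np.argmax(logits_for(S | {i}) + eps))
        Q[r, s] += 1.0 / n_draws
    return Q

# Hypothetical stand-in for the classifier's logits on the coalition-masked input.
def logits_for(S):
    return np.array([0.5, -0.2, -0.3]) + 0.8 * np.array([len(S & {0, 1}), len(S & {2}), len(S & {3})])

print(np.round(transition_probs(logits_for, n=4, d=3, i=0), 3))
```

The entry `Q[r, s]` estimates ${q}_{i}\left( {{e}_{r} - {e}_{s}}\right)$; in practice the closed-form expressions of Section A can replace the sampling over $\varepsilon$.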
| 66 |
+
|
| 67 |
+
Interestingly, we can derive a closed-form analytical expression for the ${q}_{i,S}$ ’s and, hence, for the ${q}_{i}$ ’s; we do this in Section A. The following proposition relates the Categorical Shapley value to the standard SV and gives a number of properties that can be derived for the Categorical SV.
|
| 68 |
+
|
| 69 |
+
Proposition 2.1. Let $\xi$ be the Categorical Shapley value defined in Eq. (2). Then:
|
| 70 |
+
|
| 71 |
+
1. $\mathbb{E}\left\lbrack {{\xi }_{i}\left( v\right) }\right\rbrack = {\psi }_{i}\left( {\mathbb{E}\left\lbrack v\right\rbrack }\right) \in {\left\lbrack -1,1\right\rbrack }^{d}$ , where $\mathbb{E}\left\lbrack v\right\rbrack$ is the $n$ -player game defined as $\mathbb{E}\left\lbrack v\right\rbrack \left( S\right) =$ $\mathbb{E}\left\lbrack {v\left( S\right) }\right\rbrack = \operatorname{Softmax}\left( {\theta }_{S}\right)$ ;
|
| 72 |
+
|
| 73 |
+
2. If $i$ is a null player, i.e. $v\left( {S \cup i}\right) = v\left( S\right)$ for all $S \neq \varnothing$ , then ${\xi }_{i}\left( v\right) = {\delta }_{0}$ , where ${\delta }_{0}$ is the Dirac delta centered in $0 \in {\mathbb{R}}^{d}$ ;
|
| 74 |
+
|
| 75 |
+
3. If $v = {v}^{\prime }$ with probability $\pi \in \left\lbrack {0,1}\right\rbrack$ and $v = {v}^{\prime \prime }$ with probability $1 - \pi$ (independent from $S$ ), then ${q}_{i}\left( z\right) = \mathbb{P}\left( {{\xi }_{i}\left( v\right) = z}\right) = \pi \mathbb{P}\left( {{\xi }_{i}\left( {v}^{\prime }\right) = z}\right) + \left( {1 - \pi }\right) \mathbb{P}\left( {{\xi }_{i}\left( {v}^{\prime \prime }\right) = z}\right) =$ $\pi {q}_{i}^{\prime }\left( z\right) + \left( {1 - \pi }\right) {q}_{i}^{\prime \prime }\left( z\right)$ .
|
| 76 |
+
|
| 77 |
+
4. $v\left( \left\lbrack n\right\rbrack \right) \ominus v\left( \varnothing \right) = \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\mathbb{E}}_{S \sim p\left( S\right) }\left\lbrack {{\xi }_{i}\left( v\right) }\right\rbrack$ , where the sum on the right hand side is the sum of (dependent) $E - E$ -valued random variables.
|
| 78 |
+
|
| 79 |
+
Property 1 essentially shows that the Categorical SV is strictly more expressive than traditional Shapley values, whilst putting (standard) SVs for multi-class classifiers in a new light. Properties 2, 3 and 4 may be seen as "adaptations" to the Categorical SV of the null player, linearity and efficiency axioms, respectively. In particular, note that the standard linearity axiom would be of little consequence in this context, as taking a linear combination of categorical RVs does not lead to another categorical RV. Instead, property 3 addresses the common situation where the classifier one wishes to explain is a (probabilistic) ensemble, relating the distributions of the respective Categorical SVs. See Section C for a brief discussion of related work in the cooperative game theory literature.
|
| 80 |
+
|
| 81 |
+
§ 3 DETECTING PNEUMONIA IN CHEST X-RAYS: A CASE STUDY
|
| 82 |
+
|
| 83 |
+
This section employs the Categorical SV (CSV) to analyse a commonly used deep learning architecture, ResNet-18 (He et al., 2015), for pneumonia detection and subtyping using radiography images, which is cast as a multi-class classification problem categorising subjects into three classes: healthy controls (HC, class 0), bacterial pneumonia cases (BP, class 1) and viral pneumonia cases (VP, class 2). The model has been trained on chest X-ray images collected from pediatric patients, aged one to five, as part of their routine clinical care at Guangzhou Women and Children's Medical Center (Kermany et al., 2018). The aim is to show the importance of using structured explanations even when the model is fine-tuned to the problem of interest, in this case with a mean balanced
|
| 84 |
+
|
| 85 |
+
<graphics>
|
| 86 |
+
|
| 87 |
+
Figure 1: Three example subject radiography images and Categorical Shapley values relative to the depicted patches, plotted as matrices. (Left) Ground-truth: VP. Prediction: BP. The patch marks two artifacts which should not impact the model decision. (Center) Ground-truth: VP. Prediction: VP. Two patches, the red one on the left highlighting a section where pneumonia is visible and the blue one selecting a patch in the middle mediastinum. (Right) Ground-truth: VP. Prediction: BP. The red patch corresponds to a pneumonia area; the yellow one highlights the heart of the patient.
|
| 88 |
+
|
| 89 |
+
accuracy score of 84.7%. We select three example scenarios (as depicted in Figure 1) to analyse different use-cases where CSV empowers the decision process.
|
| 90 |
+
|
| 91 |
+
Case One: Artifacts Figure 1 (Left) shows an example scenario of an image with artifacts, which have been identified in red. The probabilities assigned by the model to the ground-truth class BP and the predicted class VP are 0.4789 and 0.4808, respectively. Using the Categorical SV, the contribution of the artifacts to the transition of the prediction towards the correct class is identified as 12.7%, which implies that the presence of these artifacts is a root cause behind the confusion between BP and VP; this might be mitigated by further pre-processing or ensemble classification designs.
|
| 92 |
+
|
| 93 |
+
Case Two: Correct Classification Figure 1 (Center) shows a correctly classified VP. However, even though the main affected area in this patient is marked in red by independent experts, the contribution of this area to the decision is found to be negligible (around 1%, see the left matrix under the Center image), making the model's recommendation untrustworthy. Furthermore, the transition probability calculated for the middle mediastinum region (depicted in blue), which is not expected to be a region of interest for pneumonia, is found to be as high as 13.3% from VP to HC, flagging this region as incorrectly important for the decision process of the model.
|
| 94 |
+
|
| 95 |
+
Case Three: Incorrect Classification When the incorrectly classified case pictured in Figure 1 (Right) is analysed, the transition probability from the predicted class BP to the ground-truth class VP for the area in red, which is labelled as a main affected area of VP by independent experts, is calculated as zero. The heart region identified in yellow, on the other hand, is shown to exhibit over $5\%$ transition probability to the VP and BP classes, although this value would be expected to be close to zero. Both of these findings help highlight inconsistencies in the behaviour of the model.
|
| 96 |
+
|
| 97 |
+
§ 4 DISCUSSION AND CONCLUSION
|
| 98 |
+
|
| 99 |
+
By analysing three example scenarios in Section 3, we have underlined the importance of model explainability even for fine-tuned, seemingly high-performing models, especially in critically important application areas such as healthcare. Employing categorical games and values enables a structural understanding of the multi-class classification problem by providing transition probabilities across classes, i.e. information about flipping decisions, in addition to the feature-contribution information obtained from classical methods. While we implement a case study on the classification of pneumonia using radiography images as a proof-of-concept, the proposed method is extendable to all modalities, including genomics, free-text or tabular data. For out-of-coalition portions of the image, we employed a simple constant background value; we plan to consider more sophisticated formulations in the future. Another invaluable path for future work is to develop better visualization and summarization methods and interactive interfaces alongside clinicians and other end-users.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/Tmb13sYJwP/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,296 @@
|
| 1 |
+
# POST-HOC SALIENCY METHODS FAIL TO CAPTURE LATENT FEATURE IMPORTANCE IN TIME SERIES DATA
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
Saliency methods provide visual explainability for deep image processing models by highlighting informative regions in the input images based on feature-wise (pixel) importance scores. These methods have been adopted to the time series domain, aiming to highlight important temporal regions in a sequence. This paper identifies, for the first time, the systematic failure of such methods in the time series domain when underlying patterns (e.g., dominant frequency or trend) are based on latent information rather than temporal regions. The latent feature importance postulation is highly relevant for the medical domain as many medical signals, such as EEG signals or sensor data for gait analysis, are commonly assumed to be related to the frequency domain. To the best of our knowledge, no existing post-hoc explainability method can highlight influential latent information for a classification problem. Hence, in this paper, we frame and analyze the problem of latent feature saliency detection. We first assess the explainability quality of multiple state-of-the-art saliency methods (Integrated Gradients, DeepLift, Kernel SHAP, Lime) on top of various classification methods (LSTM, CNN, LSTM and CNN trained via saliency guided training) using simulated time series data with underlying temporal or latent space patterns. In conclusion, we identify that Integrated Gradients and DeepLift, if redesigned, could be potential candidates for latent saliency scores.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Saliency methods aim to explain the predictions of deep learning models by highlighting important input features. These methods often assign scores to individual inputs (Guidotti et al., 2018; Ismail et al., 2020), collectively resulting in the detection of class-distinctive patterns. For image data this means assigning scores to positional information, namely pixels. Such a strategy suits image data, as the label is often associated with specific input regions. Recently, image saliency methods have been adopted for time series data Loeffler et al. (2022); Schlegel et al. (2020). They similarly assign importance scores to the pixel counterparts, namely "time points". These methods suit the time series problem when a temporal pattern is indicative of the class. In some time series problems, however, the label may depend on the latent features such as dominant frequency, state-space model parameters, or the overall trend of a non-stationary time series. In these cases, even though the classifier might successfully capture the latent space, the positional scores extracted from the classifier will not directly explain the importance of the underlying latent features. Hence, the generated saliency maps will not be directly interpretable and thus fail to fulfill their purpose.
|
| 14 |
+
|
| 15 |
+
The goal of this paper is to introduce, formulate and analyze the problem of latent feature saliency in deep time series classification problems, focusing on the fundamental Fourier series latent model. By extension, our study is replicable for other latent models. We summarize our main contributions below:
|
| 16 |
+
|
| 17 |
+
1. We draw attention to the problem of latent feature saliency detection in time series data. We formulate the shapelet- vs. latent-based pattern in time series classification and propose a definition for an ideal latent feature saliency method (Section 2).
|
| 18 |
+
|
| 19 |
+
2. We provide a comprehensive study of popular time series saliency methods including Integrated Gradients, DeepLift, Kernel SHAP and Lime (Section 3, Section 4) on top of multiple classification methods (LSTM, CNN, LSTM and CNN trained via saliency guided training).
|
| 20 |
+
|
| 21 |
+
3. We identify effective methods that can be extended to potentially tackle the problem of latent space saliency (Section 5).
|
| 22 |
+
|
| 23 |
+
## 2 Problem Formulation
|
| 24 |
+
|
| 25 |
+
Let $D = \left( {X, Y}\right)$ with a univariate time series $X \in \mathcal{X}$ and the binary label $Y \in \{ 0,1\}$ formulate a time series classification data set. Furthermore, let the mapping ${f}_{XY} : \mathcal{X} \mapsto \{ 0,1\}$ represent a deep learning-based classifier. In latent-representation learning, we assume a latent space $\mathcal{Z}$ , a mapping from feature to latent space ${f}_{XZ} : \mathcal{X} \mapsto \mathcal{Z}$ and a latent space to label mapping ${f}_{ZY} : \mathcal{Z} \mapsto \{ 0,1\}$ , such that the classifier ${f}_{XY}$ can be learned via the feature-to-latent and latent-to-label mappings. This view has been adopted by several time series classifiers such as hidden Markov models (HMM) and recurrent neural networks (RNN). The learned latent representation exhibits properties shown to be significant in terms of explainability (Mikolov et al., 2013; Charte et al., 2020). Instead of estimating ${f}_{XZ}$ as a black-box model, a parametric latent model (such as Fourier series models, state space models, linear and switching dynamical systems, or additive and multiplicative models) can be estimated via a neural network. These models are motivated by prior knowledge about the underlying data generation mechanism; thus, their parameters are often interpretable. A saliency method applied to this solution assigns scores to latent features in the $\mathcal{Z}$ space. In contrast, methods used for black-box models usually lack explainability for the latent features.
|
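As a toy illustration of this decomposition (not part of the original study), the sketch below uses an FFT-based dominant-frequency extractor as a stand-in for ${f}_{XZ}$ and a simple threshold as a stand-in for ${f}_{ZY}$; the sampling rate and threshold are hypothetical.

```python
import numpy as np

def f_XZ(x, fs=100.0):
    """Feature-to-latent map: dominant frequency (in Hz) of a univariate series via the FFT."""
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

def f_ZY(z, threshold=5.0):
    """Latent-to-label map: class 1 if the dominant frequency exceeds a (hypothetical) threshold."""
    return int(z > threshold)

t = np.arange(0, 2.0, 1.0 / 100.0)
x = np.sin(2 * np.pi * 8.0 * t)               # an 8 Hz example signal
print(f_XZ(x), f_ZY(f_XZ(x)))                 # -> 8.0, 1
```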
| 26 |
+
|
| 27 |
+
The latent space assumption is relevant in many time series problems. Sound signals are often differentiated by amplitude and frequency; thus, the decision process behind audio classification is likely to be better explained by the Fourier latent space than by spatial importance scores. Vibration signal classification, as in earthquake or production line failure prediction, is likely to depend on frequency or amplitude as well. Financial time series classification often revolves around modeling trend and seasonality of the time series. Many signals in the medical domain, such as EEG signals, sensor data from wearable technologies for gait analysis for neurological disease progression, pain recognition, or medication level adjustment, are further strongly related to amplitude and frequency. These examples show that achieving time series explainability is heavily related to latent space assumptions.
|
| 28 |
+
|
| 29 |
+
### 2.1 LATENT FEATURES VS. SHAPELETS
|
| 30 |
+
|
| 31 |
+
Ye & Keogh (2009) define shapelets as variable-length subsequences of time series which are maximally representative of a class. We define a feature-to-shapelet mapping ${f}_{XS} : \mathcal{X} \mapsto {\left\lbrack 0,1\right\rbrack }^{k}$ . Samples in the shapelet space are normalized score vectors, determining which shapelet appears in a sample. Subsequently, shapelet-based classifiers predict the label based on an existing pattern in the time domain. Such models align well with saliency methods, which in this case are visually explainable since time points directly express both saliency scores and shapelets. The presence of informative shapelets does not contradict the assumption of a latent model. On the contrary, shapelets may appear as a proxy for latent information (see Figure 2). Nevertheless, from the explainability point of view, there is a notable difference between latent features and shapelets. As an example, a label correlated with the damping ratio of a vibration signal can potentially be predicted by shapelet-based classifiers; however, a conventional saliency method applied to this problem will only highlight a proxy of the informative latent feature, namely the existing fluctuations and oscillations of the time series. In conclusion, time series classification problems may be characterized by class differences in features which belong to the time domain, as shapelets, or to a latent domain. Current saliency methods can provide explainability for shapelets, but not directly for latent models.
|
| 32 |
+
|
| 33 |
+
### 2.2 DEFINING A DESIRABLE SALIENCY METHOD FOR TIME SERIES
|
| 34 |
+
|
| 35 |
+
Figure 1 illustrates the setup of a time series classification problem with multiple possible intermediate latent spaces, enumerated with $i$ , and denoted as ${\mathcal{Z}}^{\left( i\right) }$ . A time series $X \in \mathcal{X}$ can be mapped to ${\mathcal{Z}}^{\left( i\right) }$ by the $i$ -th chosen latent model ${f}_{XZ}^{\left( i\right) }$ . Without loss of generality, we assume that there is only one latent feature ${Z}^{ * }$ which provides the best explanation for the classification task. The latent space that contains ${Z}^{ * }$ is denoted as ${\mathcal{Z}}^{\left( *\right) }$ .
|
| 36 |
+
|
| 37 |
+

|
| 38 |
+
|
| 39 |
+
Figure 1: Time series classification schematic over the space $\mathcal{X} \times \mathcal{Y}$ with latent space representations ${Z}^{\left( i\right) }$ , associated with saliency function $m\left( {Z}^{\left( i\right) }\right)$ and resulting saliency map ${M}^{\left( i\right) }$ . Current methods ${m}_{T}$ measure saliency of the feature space, yielding the map ${M}_{T}$ .
|
| 40 |
+
|
| 41 |
+
We define a saliency method as "reliable" if it assigns the highest score to ${Z}^{ * }$ above all other features throughout all latent spaces. To formulate the reliability definition, we consider a latent-aware saliency method $m : {\mathcal{Z}}^{\left( i\right) } \mapsto {\mathbb{R}}_{ + }^{\left| {Z}^{\left( i\right) }\right| }$ which produces a saliency map ${M}^{\left( i\right) }$ for ${\mathcal{Z}}^{\left( i\right) }$ . The reliability condition is then formulated as
|
| 42 |
+
|
| 43 |
+
$$
|
| 44 |
+
\forall i \neq * ,\;\max {M}^{\left( *\right) } > \max {M}^{\left( i\right) }.
|
| 45 |
+
$$
|
| 46 |
+
|
| 47 |
+
Note that during implementation, we have to define the possible set of latent models manually.
|
| 48 |
+
|
| 49 |
+
The fundamental problem of existing saliency methods is that they only estimate the saliency map for the time domain and therefore lack appropriate output for features in other domains. Hence, none of the existing saliency methods meets the criteria for reliability. However, we argue that there might exist some promising failing methods, which require only minor adjustments to serve as desired saliency methods for time series. We define a saliency method ${m}_{T} : \mathcal{X} \mapsto {\mathbb{R}}_{ + }^{\left| X\right| }$ as promising if the produced map ${M}_{T} \in {\mathbb{R}}_{ + }^{T}$ bears enough information to infer ${M}^{\left( i\right) },\forall i$ (possibly via a simple mapping function, depicted as a purple arrow in Figure 1). In other words, ${m}_{T}$ can capture information about latent saliency, even though it cannot directly explain it. In this case, an extension of the promising method, representing the mapping from ${M}_{T}$ to ${M}^{\left( *\right) }$ , establishes a desired latent saliency method.
|
| 50 |
+
|
| 51 |
+

|
| 52 |
+
|
| 53 |
+
Figure 2: Toy examples of multiple label-making scenarios. Influential time steps (regions with high saliency scores) are shaded in grey for frequency (peaks), amplitude (highest peaks), trend (a window enough for inferring about the trend), and shapelet (presence of the informative pattern).
|
| 54 |
+
|
| 55 |
+
Figure 2 schematically depicts the output of a good failing method when the label is associated with either the frequency or amplitude of a Fourier model, the trend of an additive model, or shapelets. In particular, highlighted regions are sufficient to infer the latent parameter (or equally shapelet). Putting the experiment into practice, Figure 3 presents heat maps of importance scores resulting from two exemplary failing methods.
|
| 56 |
+
|
| 57 |
+

|
| 58 |
+
|
| 59 |
+
Figure 3: Examples of well-performing explainability methods (top row), providing somewhat interpretable explanations, and completely uninterpretable saliency results (bottom row).
|
| 60 |
+
|
| 61 |
+
## 3 EXPERIMENTAL FRAMEWORK
|
| 62 |
+
|
| 63 |
+
As a preliminary step for presenting the results of the empirical study, this section introduces the examined time series saliency methods, the data sets, and implementation details.
|
| 64 |
+
|
| 65 |
+
Our study focuses on post-hoc saliency methods designed to explain single classification instances of trained models. Here, we investigate the following state-of-the-art saliency methods and group them into four families.
|
| 66 |
+
|
| 67 |
+
(1) Gradient-based feature attribution (FA) methods infer input feature importance based on the magnitude of the gradient of the output with respect to the input features. The attribution method Saliency Simonyan et al. (2014) directly employs gradients to generate saliency maps. Extensions of this basic method are Gradient $\times$ Input Shrikumar et al. (2016), DeconvNet Zeiler & Fergus (2014), Guided Backpropagation Springenberg et al. (2015) and SmoothGrad Smilkov et al. (2017). DeepLift Shrikumar et al. (2017) utilizes a neuron attribution-based difference-from-reference approach to assigning scores. Integrated Gradients (IG) Sundararajan et al. (2017) calculates the path integral from a non-informative baseline input to the respective input feature, tackling the problem of gradient saturation Bastings & Filippova (2020). Relevance-based methods, e.g., Layer-wise Relevance Propagation (LRP) Bach et al. (2015) and Deep Taylor Decomposition Montavon et al. (2017), calculate attribution scores by propagating relevance scores from the output back through the network via designed propagation rules.
|
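For illustration, a minimal sketch of the Integrated Gradients computation for a generic PyTorch classifier follows; the Riemann-sum approximation of the path integral, the zero baseline, and the number of steps are simplifying assumptions rather than the exact setup used in the experiments.

```python
import torch

def integrated_gradients(model, x, target, baseline=None, steps=50):
    """Riemann-sum approximation of IG: (x - x') * integral_0^1 dF_target(x' + a(x - x')) da,
    assuming `model` maps a batched input to per-class scores."""
    baseline = torch.zeros_like(x) if baseline is None else baseline
    alphas = torch.linspace(0.0, 1.0, steps + 1)[1:]          # right Riemann sum over the path
    grads = torch.zeros_like(x)
    for a in alphas:
        point = (baseline + a * (x - baseline)).detach().requires_grad_(True)
        score = model(point.unsqueeze(0))[0, target]          # scalar score of the target class
        score.backward()
        grads += point.grad
    return (x - baseline) * grads / steps
```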
| 68 |
+
|
| 69 |
+
(2) Model-agnostic FA methods can be applied to any black-box classifier without access to the models' parameters Carrillo et al. (2021); Petsiuk et al. (2018). Methods such as Occlusion Zeiler & Fergus (2014), Meaningful Perturbations Fong & Vedaldi (2017) and RISE Petsiuk et al. (2018) assign saliency scores relative to the change in output when the respective feature is perturbed. LIME Ribeiro et al. (2016) fits local interpretable surrogate models to the classifier in the neighborhood of the target sample and calculates the saliency based on these models' parameters. Other methods are inspired by theorems from the field of game theory Datta et al. (2016); Lipovetsky & Conklin (2001); Strumbelj & Kononenko (2014). In particular, the application of the Shapley Value Shapley (1953) has achieved great popularity. Lundberg & Lee (2017) introduce the SHAP values method to measure feature importance by the Shapley value of a conditional expectation function of the to-be-explained model. (3) A different class of post-hoc methods generates counterfactual explanations (CF) as LASTS Guidotti et al. (2020), time series tweaking Karlsson et al. (2020), LatentCF++ Wang et al. (2021), CoMTE Ates et al. (2021) and Native Guide Delaney et al. (2021). These methods identify counter-samples to provide explainability by estimating the required variation in individual input features to change the classification outcome. Since our experiments focus on saliency maps, we exclude CF methods from our investigations in this paper.
|
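As a rough sketch of the perturbation idea behind Occlusion-style methods, the snippet below scores each window of time points by the drop in the target class probability when that window is replaced with a baseline value; the window size, baseline, and `predict_proba` interface are hypothetical choices.

```python
import numpy as np

def occlusion_saliency(predict_proba, x, target, window=5, baseline=0.0):
    """Score each time point by the drop in the target class probability when the
    window covering it is replaced by a baseline value."""
    scores = np.zeros(len(x))
    p_orig = predict_proba(x)[target]
    for start in range(0, len(x), window):
        x_pert = x.copy()
        x_pert[start:start + window] = baseline
        scores[start:start + window] = p_orig - predict_proba(x_pert)[target]
    return scores
```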
| 70 |
+
|
| 71 |
+
For our study, we selected four candidate methods from different classes of post-hoc methods: Integrated Gradients (IG), DeepLift (DL), LIME and Kernel SHAP (SHAP). As for the classifiers, we utilize long short-term memory networks (LSTM) Hochreiter & Schmidhuber (1997) and convolutional neural networks (CNNs) Le Cun et al. (1989). Since the experiments focus on saliency detection, we also train the LSTM and CNN networks via a saliency-guided training procedure (SGT) Ismail et al. (2021). This procedure allows networks to produce more consistent saliency scores, as the saliency feedback is used for training the network.
|
| 72 |
+
|
| 73 |
+
### 3.1 DATA SET GENERATION
|
| 74 |
+
|
| 75 |
+
To demonstrate our findings, we designed a simulation study in which time-series data is generated based on the Fourier series model. The Fourier series is a well-known latent model for many natural scenarios Geweke & Singleton (1981); Bracewell (2000) and it is proven that any given univariate time series can be reconstructed from its Fourier latent space using a Fourier transformation function. The Fourier latent space can be defined as a matrix with three rows representing frequencies, amplitudes and phase shifts. In our experiments, the Fourier latent space is a matrix of $3 \times {10}$ parameters.
|
| 76 |
+
|
| 77 |
+
We generated a total of ten experiments to understand the response of different saliency methods to different patterns. Our ten experiments include four experiments with temporal shapelet patterns, two with latent amplitude patterns, two with latent frequency patterns, and two with latent phase shift patterns. In each experiment, we build a data set containing 2560 time series of equal length divided into two equally sized classes. For the shapelet experiments, each sample in the data set is generated by first randomly sampling from the latent space and then applying a Fourier transformation to reconstruct its temporal signal from the latent space matrix. Afterwards, the time series samples in class 1 were superimposed with a dominant shapelet pattern positioned either at a random location (experiment 1), at the end (experiment 2), in the middle (experiment 3) or at the start (experiment 4) of the time series. For the latent feature experiments, the latent space matrices for class 0 were sampled from a latent space different from the latent space for class 1. The difference was defined in terms of sampling intervals for frequency, amplitude or phase shift. A detailed description of the sampling distributions per experiment is presented in Table 3 in Appendix A.2. For each experiment, the training, validation and test sets were generated by random sampling without replacement with a ratio of ${80}\%$ , ${10}\%$ and ${10}\%$ , respectively.
|
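A condensed sketch of this generation process for one frequency experiment is given below; the sampling intervals, amplitude range, and sampling rate are hypothetical placeholders (the actual intervals are specified in Table 3 in Appendix A.2).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_series(freq_range, n_components=10, length=200, fs=100.0):
    """Draw a 3 x 10 Fourier latent matrix (frequencies, amplitudes, phase shifts) and
    reconstruct the corresponding time series as a sum of sinusoids."""
    freqs = rng.uniform(*freq_range, size=n_components)
    amps = rng.uniform(0.5, 1.0, size=n_components)
    phases = rng.uniform(0.0, 2 * np.pi, size=n_components)
    t = np.arange(length) / fs
    return np.sum(amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None]), axis=0)

# Hypothetical class-distinctive frequency intervals for one frequency experiment.
X0 = np.stack([sample_series((1.0, 5.0)) for _ in range(1280)])    # class 0
X1 = np.stack([sample_series((6.0, 10.0)) for _ in range(1280)])   # class 1
X = np.concatenate([X0, X1])
y = np.concatenate([np.zeros(1280), np.ones(1280)])
```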
| 78 |
+
|
| 79 |
+
For assigning the labels to the data samples, we induced a simple linear relation between the latent or temporal patterns and the class labels. In the latent scenarios, the two classes are distinguishable using a single decision boundary defined as ${Z}^{ * } =$ const., meaning that only one latent feature is class-distinctive. Likewise, in the shapelet-related scenarios, the presence or absence of a specific shapelet decides the label of the data. This allows us to study the latent features individually and in a controlled manner. In such settings, potentially poor results can be confidently attributed to the intrinsic weakness of the saliency methods, rather than to inappropriate classifiers. The data generation mechanism and the resulting data sets are presented and described in detail in Appendix A.1 and A.2, respectively.
|
| 80 |
+
|
| 81 |
+
### 3.2 IMPLEMENTATION DETAILS
|
| 82 |
+
|
| 83 |
+
In this paper, we investigate the performance of both the classifiers and the saliency methods, with special focus on the interpretability of the saliency methods. To ensure a uniform capacity across all classifiers, they were designed as simple one-layer networks with no dropout or other forms of additional regularization. The performance of saliency methods is strongly correlated with the classification performance, which is typically increased through more sophisticated and deeper networks. Therefore, by keeping the architecture simple, we intended to objectively evaluate and compare the explainability methods without the influence of optional variations, preventing overfitting or performance boosting.
|
| 84 |
+
|
| 85 |
+
All algorithms were implemented in the Python programming language. The classifiers were implemented using the deep learning library PyTorch Paszke et al. (2019) with the help of the wrapper PyTorch Lightning Falcon (2019). Hyper-parameter optimization was performed through the library Optuna Akiba et al. (2019). For the feature attribution techniques, the implementations from the PyTorch based model interpretability library Captum Kokhlikyan et al. (2020) were employed.
|
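For orientation, the Captum attribution calls might look roughly as follows; the toy model, input shape, baselines, targets, and sample counts are hypothetical and do not reflect the exact configuration of the experiments.

```python
import torch
from captum.attr import IntegratedGradients, DeepLift, Lime, KernelShap

# Hypothetical toy classifier: (batch, 1, T) univariate series -> logits over 2 classes.
model = torch.nn.Sequential(
    torch.nn.Conv1d(1, 8, kernel_size=5), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1), torch.nn.Flatten(), torch.nn.Linear(8, 2))

x = torch.randn(1, 1, 200)                    # one time series of length 200
baseline = torch.zeros_like(x)

ig_scores   = IntegratedGradients(model).attribute(x, baselines=baseline, target=1)
dl_scores   = DeepLift(model).attribute(x, baselines=baseline, target=1)
lime_scores = Lime(model).attribute(x, target=1, n_samples=200)
shap_scores = KernelShap(model).attribute(x, baselines=baseline, target=1, n_samples=200)
```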
| 86 |
+
|
| 87 |
+
## 4 RESULTS
|
| 88 |
+
|
| 89 |
+
Table 1 reports the average accuracy and F1 score of the chosen classifiers across our ten data sets grouped by the type of the experiments. The results show that overall the CNN trained via saliency-guided training achieves the highest classification performance.
|
| 90 |
+
|
| 91 |
+
Table 1: Average classification performance on test data across all synthetic data sets.
|
| 92 |
+
|
| 93 |
+
<table><tr><td rowspan="2">Classifier</td><td colspan="2">Shapelet</td><td colspan="2">Frequency</td><td colspan="2">Phase shift</td><td colspan="2">$\mathbf{{Amplitude}}$</td></tr><tr><td>Accuracy</td><td>F1</td><td>Accuracy</td><td>F1</td><td>Accuracy</td><td>F1</td><td>Accuracy</td><td>F1</td></tr><tr><td>LSTM</td><td>0.8535</td><td>0.8466</td><td>0.9749</td><td>0.9470</td><td>0.5157</td><td>0.4914</td><td>0.9981</td><td>0.9981</td></tr><tr><td>LSTM + SGT</td><td>0.8242</td><td>0.8417</td><td>0.9082</td><td>0.9117</td><td>0.5352</td><td>0.4145</td><td>0.9160</td><td>0.9230</td></tr><tr><td>CNN</td><td>0.6221</td><td>0.7439</td><td>0.9610</td><td>0.9633</td><td>0.9629</td><td>0.9625</td><td>0.9981</td><td>0.9981</td></tr><tr><td>CNN + SGT</td><td>0.8721</td><td>0.9138</td><td>0.9610</td><td>0.9633</td><td>0.9649</td><td>0.9634</td><td>1.0000</td><td>1.0000</td></tr></table>
|
| 94 |
+
|
| 95 |
+
It appears that the LSTM classifier is seriously challenged during phase-shift experiments. This could be due to the vanishing gradient problem of LSTMs, which hinders proper classification if informative patterns are placed in the early time points. Surprisingly, LSTM with saliency-guided training procedure performs slightly worse than LSTM. Unlike the LSTM, the CNN largely benefits from the saliency-guided training procedure, especially in the shapelet experiment.
|
| 96 |
+
|
| 97 |
+

|
| 98 |
+
|
| 99 |
+
Figure 4: Saliency maps by IG, DL, Lime and SHAP for the CNN+SGT on a frequency, amplitude, phase shift and shapelet experiment, respectively. Explanations by IG and DL clearly focus on aspects related to the latent feature (peaks and valleys for amplitude and frequency, beginning of time sequence for phase shift) and the shapelet. Maps of Lime and SHAP are visually uninterpretable.
|
| 100 |
+
|
| 101 |
+
Next, to investigate the explainability of the saliency methods, we visualize their output via color-coded heat maps and overlay them onto the original time series (Figure 4). This allows us to assess the relevance of the saliency scores and the positional information directly. In the shapelet experiments, we expect the maps to highlight the shapelet itself. In the amplitude and frequency experiments, we expect an oscillating heat map with a focus on the peaks (or valleys) and extreme values of the time series, respectively. Finally, in the phase shift experiments, we expect an emphasis on the beginning of the time series. Figure 4 compares the saliency maps of the post-hoc saliency methods (IG, DL, LIME, and SHAP) plotted for one sample per experiment group (shapelet- and latent- experiments). Visual explanations provided by IG and DL align with our expectations for all experiments and are comparatively easy to interpret. For example, in the amplitude experiments, IG and DL highlight the peaks whose values are the direct proxies for the latent feature. On the other hand, the heat maps of SHAP and LIME do not yield the expected visual patterns.
|
| 102 |
+
|
| 103 |
+

|
| 104 |
+
|
| 105 |
+
Figure 5: Saliency heat map of IG, DL, LIME and SHAP across all samples of the positive class (occurrence of a shapelet) in the test data set. The IG and DL heat maps show a clear saliency pattern in the middle of the time series, in which the shapelet occurred. The SHAP and LIME heat maps, however, resemble a random saliency assignment.
|
| 106 |
+
|
| 107 |
+
We expected that the four saliency methods perform reasonably at least for the shapelet experiments. To investigate this further on the entire data set, we generated Figure 5. These heat maps depict aggregated scores of the saliency methods for the middle-positioned shapelet experiments. In these maps, each row represents a test sample and each column a time point. Figure 5 shows that both SHAP and LIME fail to discover the shapelet pattern across the entire data set. The other two methods IG and DL, however, performed successfully: their aggregated heat map clearly highlights the position of the middle shapelet.
|
| 108 |
+
|
| 109 |
+
## 5 DISCUSSION
|
| 110 |
+
|
| 111 |
+
**Promised effectiveness of saliency methods for shapelet-related classification** The goal of this paper was to demonstrate the fundamental problem of adopted saliency methods for time series data in latent-related classification problems. The methods were expected to be effective in case of the presence of positional information, i.e., shapelets. However, experiments show that some of the methods performed poorly even in simple shapelet scenarios. In particular, explanations provided by different methods mostly did not align. This finding is in accordance with Neely et al. (2021). Our observation raises caution regarding the use of saliency methods for time series data, previously pointed out by Loeffler et al. (2022); Parvatharaju et al. (2021); Schlegel & Keim (2021). In our findings, IG and DL showed reliable performances throughout the experiments when paired with effective classifiers. Nevertheless, we encourage using various explainability methods as multiple explanations can coexist Wiegreffe & Pinter (2019).
|
| 112 |
+
|
| 113 |
+
**Need for latent feature saliency methods for time series classification** We emphasize the need for developing latent feature saliency methods for time series classification. Adopted image saliency methods are unable to parse explainable and meaningful saliency scores for time series data with class-distinctive latent patterns. As discussed in Section 2.2, we proposed a definition for "promising failing" methods as ones which produce positional scores associated with informative latent parameters. In the case of Fourier series models, this corresponds to highlighting peaks or valleys, the highest peaks, or early time points in the case of frequency-, amplitude- and phase-shift-related classification problems, respectively. Not all state-of-the-art methods could exhibit such behavior. We hypothesize that this was caused by the independence assumption between neighboring data points, which is made by the tested approaches. Under this assumption, the model neglects the relative temporal ordering of input features, leading to the inability to detect temporal dependencies. This finding is also reported by Lim et al. (2021).
|
| 114 |
+
|
| 115 |
+
We observed that the IG and DL methods consistently performed well for shapelet-related problems and produced useful saliency maps for latent-related problems. Note that despite calling these methods "promising", the need for directly scoring the latent parameters remains. We expect this problem to be exacerbated in latent-related settings whose features have less legible associations with positional information, e.g., rates of change in state-space models.
|
| 116 |
+
|
| 117 |
+
**Future work** To extend the empirical investigations, we suggest considering other time series latent models. We further encourage the development of methods that can incorporate multiple feature spaces into the saliency analysis. In this regard, there is potential for extracting latent saliency scores directly from positional saliency maps, given that the target latent model is known. Our findings show that the output of IG and DL is associated with the Fourier latent model. This approach (i.e., mapping positional scores to latent scores) serves well as a baseline method.
|
| 118 |
+
|
| 119 |
+
Throughout our study, the evaluation of saliency maps was performed by visual inspection only, since the primary purpose of this paper is to formulate the latent feature saliency problem and motivate further investigation of this topic through a simple experimental framework. For future work, we encourage using quantitative evaluation metrics to assess the performance of different saliency methods objectively. Furthermore, we motivate the extension of our experiments to more complex real-world data sets.
|
| 120 |
+
|
| 121 |
+
Our analyses were done on the sample level, i.e., we studied individual saliency maps to infer the underlying classification mechanism. Intra-class studies of variability and variance of saliency maps might uncover further information regarding the classification.
|
| 122 |
+
|
| 123 |
+
## 6 CONCLUSION
|
| 124 |
+
|
| 125 |
+
Explainability of time series models is an emerging field of research. To build trust in AI, interpretation and explanation of black-box classifiers are crucial. Various image saliency methods have been introduced to time series problems. They focus on positional information of the input features, providing spatial explanations. In time series data, however, the class label may depend on a latent model instead of positional information. To the best of our knowledge, the performance and behavior of saliency methods in such settings have not been explored, nor has a saliency model accounting for latent features been developed. We demonstrated this problem by empirically showing that if the class label is associated with latent features of the time series instead of the presence of a specific shape, common saliency methods do not provide accurate or interpretable explanations. Finally, we presented an outline for future research to develop extensions for existing saliency methods providing latent saliency results based on time-step-wise importance scores. Our work highlights the need for research on latent saliency detection for deep time series classification.
|
| 126 |
+
|
| 127 |
+
## ACKNOWLEDGMENTS AND FUNDING
|
| 128 |
+
|
| 129 |
+
To ensure the integrity of blind review process, the acknowledgement and funding statements will be added after the review process.
|
| 130 |
+
|
| 131 |
+
## CONFLICTS OF INTEREST
|
| 132 |
+
|
| 133 |
+
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
|
| 134 |
+
|
| 135 |
+
## REPRODUCIBILITY STATEMENT
|
| 136 |
+
|
| 137 |
+
The synthetic data generation algorithm is described in Appendix A.1. The specific data sets employed are stated in terms of the sampling intervals of the latent features in Appendix A.2. Implementation details such as employed libraries were provided in Section 3.2. A GitHub repository containing the complete code base will be published upon final paper publication.
|
| 138 |
+
|
| 139 |
+
## REFERENCES
|
| 140 |
+
|
| 141 |
+
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '19), Anchorage, USA, 2019.
|
| 142 |
+
|
| 143 |
+
Emre Ates, Burak Aksar, Vitus J. Leung, and Ayse K. Coskun. Counterfactual explanations for multivariate time series. In Proceedings of the 2021 International Conference on Applied Artificial Intelligence (ICAPAI), pp. 1-8, Halden, Norway, 2021.
|
| 144 |
+
|
| 145 |
+
S. Bach, A. Binder, G. Montavon, F. Klauschen, K.-R. Müller, and W. Samek. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PLoS One, 10(7), 2015. doi: 10.1371/journal.pone.0130140.
|
| 146 |
+
|
| 147 |
+
Jasmijn Bastings and Katja Filippova. The elephant in the interpretability room: Why use attention as explanation when we have saliency methods? In Proceedings of the 2020 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, online, 2020.
|
| 148 |
+
|
| 149 |
+
Ronald Newbold Bracewell. The Fourier transform and its applications. McGraw-Hill: New York, USA, 3 edition, 2000.
|
| 150 |
+
|
| 151 |
+
Alfredo Carrillo, Luis F. Cantú, and Alejandro Noriega. Individual explanations in machine learning models: A survey for practitioners. ArXiv, 2021. doi: 10.48550/arXiv.2104.04144.
|
| 152 |
+
|
| 153 |
+
David Charte, Francisco Charte, Maria J del Jesus, and Francisco Herrera. An analysis on the use of autoencoders for representation learning: Fundamentals, learning task case studies, explainability and challenges. Neurocomputing, 404:93-107, 2020. doi: 10.1016/j.neucom.2020.04.057.
|
| 154 |
+
|
| 155 |
+
Anupam Datta, Shayak Sen, and Yair Zick. Algorithmic transparency via quantitative input influence: Theory and experiments with learning systems. In Proceedings of the 2016 IEEE Symposium on Security and Privacy (SP), pp. 598-617, San Jose, USA, 2016.
|
| 156 |
+
|
| 157 |
+
Eoin Delaney, Derek Greene, and Mark T. Keane. Instance-based counterfactual explanations for time series classification. In Proceedings of the Case-Based Reasoning Research and Development: 29th International Conference (ICCBR 2021), pp. 32-47, Salamanca, Spain, 2021.
|
| 158 |
+
|
| 159 |
+
William Falcon. Pytorch lightning, 2019.
|
| 160 |
+
|
| 161 |
+
Ruth C. Fong and Andrea Vedaldi. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp. 3449-3457, 2017.
|
| 162 |
+
|
| 163 |
+
John F. Geweke and Kenneth J. Singleton. Latent variable models for time series : A frequency domain approach with an application to the permanent income hypothesis. Journal of Econometrics, 17:287-304, 1981. doi: 10.1016/0304-4076(81)90003-8.
|
| 164 |
+
|
| 165 |
+
Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Franco Turini, Fosca Giannotti, and Dino Pedreschi. A survey of methods for explaining black box models. ACM Comput. Surv., 51(5), 2018. doi: 10.1145/3236009.
|
| 166 |
+
|
| 167 |
+
Riccardo Guidotti, Anna Monreale, Francesco Spinnato, Dino Pedreschi, and Fosca Giannotti. Explaining any time series classifier. In Proceedings of the 2020 IEEE Second International Conference on Cognitive Machine Intelligence (CogMI), pp. 167-176, Atlanta, USA, 2020.
|
| 168 |
+
|
| 169 |
+
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8): 1735-1780, 9 1997.
|
| 170 |
+
|
| 171 |
+
Aya Abdelsalam Ismail, Mohamed K. Gunady, Héctor Corrada Bravo, and Soheil Feizi. Benchmarking deep learning interpretability in time series predictions. In Proceedings of the 34th Conference on Neural Information Processing Systems (NeurIPS), pp. 6441-6452, online, 2020.
|
| 172 |
+
|
| 173 |
+
Aya Abdelsalam Ismail, Hector Corrada Bravo, and Soheil Feizi. Improving deep learning interpretability by saliency guided training. In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS2021), pp. 26726-26739, online, 2021.
|
| 174 |
+
|
| 175 |
+
Isak Karlsson, Jonathan Rebane, Panagiotis Papapetrou, and Aristides Gionis. Locally and globally explainable time series tweaking. Knowledge and Information Systems, 62:1671-1700, 2020. doi: 10.1007/s10115-019-01389-4.

Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. Captum: A unified and generic model interpretability library for PyTorch. arXiv, 2020. doi: 10.48550/arXiv.2009.07896.

Y. Le Cun, L.D. Jackel, B. Boser, J.S. Denker, H.P. Graf, I. Guyon, D. Henderson, R.E. Howard, and W. Hubbard. Handwritten digit recognition: applications of neural network chips and automatic learning. IEEE Communications Magazine, 27(11):41-46, 1989. doi: 10.1109/35.41400.

Bryan Lim, Sercan Arik, Nicolas Loeff, and Tomas Pfister. Temporal fusion transformers for interpretable multi-horizon time series forecasting. International Journal of Forecasting, 37:1748-1764, 2021. doi: 10.1016/j.ijforecast.2021.03.012.

Stan Lipovetsky and Michael Conklin. Analysis of regression in game theory approach. Applied Stochastic Models in Business and Industry, 17:319-330, 2001. doi: 10.1002/asmb.446.

Christoffer Loeffler, Wei-Cheng Lai, Bjoern Eskofier, Dario Zanca, Lukas Schmidt, and Christopher Mutschler. Don't get me wrong: How to apply deep visual interpretations to time series. arXiv, 2022. doi: 10.48550/arXiv.2203.07861.

Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Proceedings of the 31st International Conference on Neural Information Processing Systems (NIPS'17), pp. 4768-4777, Long Beach, USA, 2017.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Proceedings of the 27th Conference on Neural Information Processing Systems (NeurIPS 2013), Lake Tahoe, USA, 2013.

Grégoire Montavon, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek, and Klaus-Robert Müller. Explaining nonlinear classification decisions with deep Taylor decomposition. Pattern Recognition, 65:211-222, 2017. doi: 10.1016/j.patcog.2016.11.008.

Michael Neely, Stefan F. Schouten, Maurits J. R. Bleeker, and Ana Lucic. Order in the court: Explainable AI methods prone to disagreement. In Proceedings of the ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI, online, 2021.

Prathyush S. Parvatharaju, Ramesh Doddaiah, Thomas Hartvigsen, and Elke A. Rundensteiner. Learning saliency maps to explain deep time series classifiers. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM 2021), Virtual Event, Queensland, Australia, pp. 1406-1415, 2021.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2019), pp. 8024-8035, Vancouver, Canada, 2019.

Vitali Petsiuk, Abir Das, and Kate Saenko. RISE: Randomized input sampling for explanation of black-box models. In Proceedings of the 29th British Machine Vision Conference (BMVC), Newcastle, UK, 2018.

Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '16), pp. 1135-1144, New York, USA, 2016.

Udo Schlegel and Daniel A. Keim. Time series model attribution visualizations as explanations. In Proceedings of the 2021 IEEE Workshop on TRust and EXpertise in Visual Analytics (TREX), pp. 27-31, New Orleans, USA, 2021.

Udo Schlegel, Daniela Oelke, Daniel A. Keim, and Mennatallah El-Assady. An empirical study of explainable AI techniques on deep learning models for time series tasks. In Proceedings of the Pre-registration Workshop at NeurIPS 2020, Vancouver, Canada, 2020.

Lloyd S. Shapley. A value for n-person games, pp. 307-317. Princeton University Press, Princeton, 1953.

Avanti Shrikumar, Peyton Greenside, Anna Shcherbina, and Anshul Kundaje. Not just a black box: Learning important features through propagating activation differences. In Proceedings of the 33rd International Conference on Machine Learning (ICML'16), volume 48, New York, USA, 2016.

Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning (ICML'17), volume 70, Sydney, Australia, 2017.

Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv, abs/1312.6034, 2014. doi: 10.48550/arXiv.1312.6034.

Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viégas, and Martin Wattenberg. SmoothGrad: removing noise by adding noise. arXiv, 2017. doi: 10.48550/arXiv.1706.03825.

Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox, and Martin A. Riedmiller. Striving for simplicity: The all convolutional net. arXiv, 2015. doi: 10.48550/arXiv.1412.6806.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning (ICML'17), pp. 3319-3328, Sydney, Australia, 2017.

Erik Štrumbelj and Igor Kononenko. Explaining prediction models and individual predictions with feature contributions. Knowledge and Information Systems, 41(3):647-665, 2014.

Zhendong Wang, Isak Samsten, Rami Mochaourab, and Panagiotis Papapetrou. Learning time series counterfactuals via latent space representations. In Proceedings of the 24th International Conference on Discovery Science (DS 2021), pp. 369-384, Halifax, Canada, 2021.

Sarah Wiegreffe and Yuval Pinter. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 11-20, Hong Kong, China, 2019.

Lexiang Ye and Eamonn Keogh. Time series shapelets: a new primitive for data mining. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD '09), pp. 947-956, Paris, France, 2009.

Matthew D. Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In Proceedings of the European Conference on Computer Vision (ECCV 2014), pp. 818-833, Zurich, Switzerland, 2014.

## A APPENDIX

### A.1 SYNTHETIC DATA GENERATION

Based on the Fourier series latent model, a time series ${x}_{t}, t = 1,\ldots, T$ is modeled as

$$
\begin{aligned}
{x}_{t} &= {a}_{0} + \sum_{n=1}^{\infty} {a}_{n}\cos\left({\omega}_{n}t\right) + \sum_{n=1}^{\infty} {b}_{n}\sin\left({\omega}_{n}t\right) \\
&= {a}_{0} + \sum_{n=1}^{\infty} {A}_{n}\cos\left({\omega}_{n}t + {\phi}_{n}\right) \\
&= {a}_{0} + \sum_{n=1}^{\infty} {A}_{n}\sin\left({\omega}_{n}t + {\phi}_{n} + \frac{\pi}{2}\right).
\end{aligned}
$$

To simulate the data, let $\widetilde{n}$ represent the number of amplitudes present in the series, i.e., $\forall i > \widetilde{n}, {A}_{i} = 0$. For simplicity, we consider centered stationary periodic time series in the data generation process, i.e., ${a}_{0} = 0$. In this case, the value at every time step $t$ is calculated as

$$
{x}_{t} = \sum_{i=1}^{\widetilde{n}} {A}_{i}\sin\left({\omega}_{i}t + {\phi}_{i} + \frac{\pi}{2}\right). \tag{1}
$$

We refer to the notions of amplitude $A$, frequency $\omega$, and phase shift $\phi$ as concepts. The separate Fourier coefficients ${A}_{i}, {\omega}_{i}, {\phi}_{i}$ for $i = 1,\ldots, \widetilde{n}$ are referred to as latent features. The latent features frequency ${\omega}_{i}$ and phase shift ${\phi}_{i}$ are each sampled from a uniform distribution. The sampling intervals are chosen with respect to the specific intention of the experiment design. To simulate the amplitude parameters ${A}_{i}$, a dominant amplitude ${A}_{1}$ is sampled. The remaining amplitudes are calculated using an exponential decay with a fixed rate ${dec}$:

$$
{A}_{i} = {A}_{1}\exp\left(-i \cdot {dec}\right), \quad i = 1,\ldots, \widetilde{n}.
$$

This makes the first frequency, i.e., ${\omega}_{1}$, the dominant frequency of the Fourier series. Throughout the experiments, all time series were generated with an equal length of 300 time steps, i.e., $T = 300$.

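The generation process above can be condensed into a short script. The following is a minimal NumPy sketch under the parameterization described here; the function and variable names (e.g., `generate_fourier_series`, `dec`, `noise_ratio`) are our own illustration rather than the authors' released code, and the way the noise ratio enters the signal is an assumption.

```python
import numpy as np

def generate_fourier_series(n_sines=10, T=300, freq_range=(np.pi / 300, np.pi / 20),
                            phase_range=(-np.pi / 4, np.pi / 4), dominant_amplitude=1.0,
                            dec=0.3, noise_ratio=0.1, rng=None):
    """Sample latent features (omega_i, phi_i, A_i) and render the series of Equation 1."""
    rng = rng or np.random.default_rng()
    omega = rng.uniform(*freq_range, size=n_sines)       # frequencies (uniform interval)
    phi = rng.uniform(*phase_range, size=n_sines)        # phase shifts (uniform interval)
    i = np.arange(1, n_sines + 1)
    A = dominant_amplitude * np.exp(-i * dec)             # exponentially decaying amplitudes
    t = np.arange(1, T + 1)
    x = (A[:, None] * np.sin(omega[:, None] * t + phi[:, None] + np.pi / 2)).sum(axis=0)
    x += noise_ratio * rng.standard_normal(T)              # additive noise (assumed interpretation of the noise ratio)
    return x, (omega, phi, A)

x, latent_features = generate_fourier_series()
```
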
For assigning class labels to the time series samples, we consider the following two scenarios.

**Scenario 1: Label based on the presence of a shapelet**

For assigning shape-based labels to the time series, a shapelet is inserted at a random or fixed position into all time series $X \in D$ belonging to one class. The shapelet is a second simulated Fourier series of length $l \leq T$, where $l = \text{window-ratio} \cdot T$ for a chosen window ratio. We define the sampling intervals for the latent features of the shapelet to be non-intersecting with the sampling intervals of the latent features of the original time series $X$. The resulting shapelet replaces the original time series in the interval $\left\lbrack {j, j + l}\right\rbrack$, where

$$
j \sim \mathcal{U}\left( {1, T - l}\right).
$$

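A minimal sketch of the shapelet insertion for Scenario 1 is given below; the specific frequency interval chosen for the shapelet is only an illustrative stand-in for a disjoint sampling interval, not the exact values used in the experiments.

```python
import numpy as np

def insert_shapelet(x, window_ratio=0.2, fixed_start=None, rng=None):
    """Scenario 1: replace a window of x with a second Fourier series (the shapelet)."""
    rng = rng or np.random.default_rng()
    T, l = len(x), int(window_ratio * len(x))
    omega = rng.uniform(np.pi / 15, np.pi / 5, size=10)    # assumed disjoint frequency interval
    phi = rng.uniform(-np.pi / 4, np.pi / 4, size=10)
    A = np.exp(-np.arange(1, 11) * 0.3)
    t = np.arange(1, l + 1)
    shapelet = (A[:, None] * np.sin(omega[:, None] * t + phi[:, None] + np.pi / 2)).sum(axis=0)
    j = fixed_start if fixed_start is not None else rng.integers(1, T - l)   # j ~ U(1, T - l)
    x = x.copy()
    x[j:j + l] = shapelet
    return x

# Class 1 samples receive the shapelet; class 0 samples are left unchanged.
```
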
**Scenario 2: Label based on differences in the latent features**

Following the investigation of the effectiveness of explainability methods for latent features, we introduce a second simulation scenario in which the labels depend on a difference in the sampling distribution of the latent features of the time series. This scenario highlights the main focus of this project and represents our novel view of explainability methods for time series. Similar to the first scenario, the time series are sampled as discretized Fourier series with latent variables $\omega, A$ and $\phi$. The latent dependency is induced as follows (a code sketch is given after the list):

1. Two normal distributions with different means (based on Table 3) are selected for classes 0 and 1. For positive parameters, the distributions are log-normal.

2. For each class, $N/2$ Fourier parameters are sampled from the given distributions.

3. The remaining parameters are sampled from the same distribution for both classes.

4. The sampled parameters are passed to the deterministic Fourier series in Equation 1 to generate the temporal samples. Each sample is then labeled with the class whose distribution the informative parameters were drawn from.

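The following sketch illustrates the Scenario 2 labeling mechanism with the frequency as the informative latent feature. For brevity it draws the class-distinctive frequencies from uniform intervals in the spirit of Table 3; a faithful reproduction of step 1 would use the (log-)normal distributions described above instead.

```python
import numpy as np

def make_latent_labeled_dataset(n_samples=2560, n_sines=10, T=300, rng=None):
    """Scenario 2 sketch: the class label is tied to the sampling interval of the frequencies."""
    rng = rng or np.random.default_rng()
    freq_ranges = {0: (np.pi / 300, np.pi / 20), 1: (np.pi / 100, np.pi / 2)}   # class-distinctive (illustrative)
    X, y = [], []
    t = np.arange(1, T + 1)
    A = np.exp(-np.arange(1, n_sines + 1) * 0.3)            # amplitudes shared between classes
    for label, (lo, hi) in freq_ranges.items():
        for _ in range(n_samples // 2):
            omega = rng.uniform(lo, hi, size=n_sines)        # informative latent feature
            phi = rng.uniform(-np.pi / 4, np.pi / 4, size=n_sines)   # shared between classes
            x = (A[:, None] * np.sin(omega[:, None] * t + phi[:, None] + np.pi / 2)).sum(axis=0)
            X.append(x)
            y.append(label)
    return np.stack(X), np.array(y)
```
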
### A.2 DATA SET DESCRIPTION

Based on the data generation method described above, we design ten different mechanisms for binary classification of univariate time series. Table 2 lists the parameters and algorithms for assigning labels to each sample. Table 3 presents the parameters used for sampling the Fourier series. The complete simulation code base will be published in a GitHub repository upon final publication.

Table 2: Label-making features per experiment. The overlapping ranges refer to the sampling intervals for frequency and phase shift.

<table><tr><td>Experiment</td><td>Label feature</td><td>Description of shapelet</td></tr><tr><td>1</td><td>Shapelet</td><td>Random position, window length of ${0.2} *$ sequence length</td></tr><tr><td>2</td><td>Shapelet</td><td>Fixed position, last ${0.2} *$ sequence length time steps</td></tr><tr><td>3</td><td>Shapelet</td><td>Fixed position, starting at time step ${0.4} *$ sequence length with window length ${0.2} *$ sequence length</td></tr><tr><td>4</td><td>Shapelet</td><td>Fixed position, first ${0.2} *$ sequence length time steps</td></tr><tr><td>5</td><td>Frequency</td><td>Overlapping frequency ranges</td></tr><tr><td>6</td><td>Frequency</td><td>Overlapping frequency ranges</td></tr><tr><td>7</td><td>Phase shift</td><td>Non-overlapping phase shift ranges</td></tr><tr><td>8</td><td>Phase shift</td><td>Non-overlapping phase shift ranges</td></tr><tr><td>9</td><td>Amplitude</td><td>Different dominant amplitude</td></tr><tr><td>10</td><td>Amplitude</td><td>Different dominant amplitude</td></tr></table>
<table><tr><td>$\mathbf{{Exp}.}$</td><td>Number of sines</td><td>Freq. low</td><td>Freq. high</td><td>Phase low</td><td>Phase high</td><td>Dominant amplitude</td><td>Decay rate</td><td>Noise ratio</td></tr><tr><td>1</td><td>10</td><td>$\frac{\pi }{300}$</td><td>$\frac{\pi }{60}$</td><td>$\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}$</td><td>1</td><td>0.3</td><td>0.1</td></tr><tr><td>2</td><td>10</td><td>$\frac{\pi }{300}$</td><td>$\frac{\pi }{20}$</td><td>$\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}$</td><td>1</td><td>0.3</td><td>0.1</td></tr><tr><td>3</td><td>10</td><td>$\frac{\pi }{300}$</td><td>$\frac{\pi }{20}$</td><td>$\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}$</td><td>1</td><td>0.3</td><td>0.1</td></tr><tr><td>4</td><td>10</td><td>$\frac{\pi }{300}$</td><td>$\frac{\pi }{20}$</td><td>$\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}$</td><td>1</td><td>0.3</td><td>0.1</td></tr><tr><td>5</td><td>10/10</td><td>$\frac{\pi }{300}/\frac{\pi }{100}$</td><td>$\frac{\pi }{20}/\frac{\pi }{2}$</td><td>$\frac{-\pi }{4}/\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}/\frac{\pi }{4}$</td><td>$1/1$</td><td>${0.3}/{0.3}$</td><td>${0.1}/{0.1}$</td></tr><tr><td>6</td><td>1/1</td><td>$\frac{\pi }{300}/\frac{\pi }{100}$</td><td>$\frac{\pi }{20}/\frac{\pi }{2}$</td><td>$\frac{-\pi }{4}/\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}/\frac{\pi }{4}$</td><td>$1/1$</td><td>${0.3}/{0.3}$</td><td>0.1 / 0.1</td></tr><tr><td>7</td><td>1/1</td><td>$\frac{\pi }{300}/\frac{\pi }{300}$</td><td>$\frac{\pi }{20}/\frac{\pi }{20}$</td><td>$0/\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}/\frac{\pi }{2}$</td><td>$1/1$</td><td>${0.3}/{0.3}$</td><td>0.1 / 0.1</td></tr><tr><td>8</td><td>10/10</td><td>$\frac{\pi }{300}/\frac{\pi }{300}$</td><td>$\frac{\pi }{20}/\frac{\pi }{20}$</td><td>$0/\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}/\frac{\pi }{2}$</td><td>$1/1$</td><td>${0.3}/{0.3}$</td><td>${0.1}/{0.1}$</td></tr><tr><td>9</td><td>10/10</td><td>$\frac{\pi }{300}/\frac{\pi }{300}$</td><td>$\frac{\pi }{20}/\frac{\pi }{20}$</td><td>$0/\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}/\frac{\pi }{4}$</td><td>$1/3$</td><td>${0.3}/{0.3}$</td><td>${0.1}/{0.1}$</td></tr><tr><td>10</td><td>1/1</td><td>$\frac{\pi }{300}/\frac{\pi }{300}$</td><td>$\frac{\pi }{20}/\frac{\pi }{20}$</td><td>$\frac{-\pi }{4}/\frac{-\pi }{4}$</td><td>$\frac{\pi }{4}/\frac{\pi }{4}$</td><td>1/3</td><td>${0.3}/{0.3}$</td><td>${0.1}/{0.1}$</td></tr></table>
Table 3: Overview of the simulation parameters of the Fourier series. If two entries are present in one cell, the two classes were sampled from different distributions; the first entry in each cell corresponds to the sampling parameter of class 0, the second to class 1.

papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/Tmb13sYJwP/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,156 @@
§ POST-HOC SALIENCY METHODS FAIL TO CAPTURE LATENT FEATURE IMPORTANCE IN TIME SERIES DATA

Anonymous authors

Paper under double-blind review

§ ABSTRACT

Saliency methods provide visual explainability for deep image processing models by highlighting informative regions in the input images based on feature-wise (pixels) importance scores. These methods have been adopted for the time series domain, aiming to highlight important temporal regions in a sequence. This paper identifies, for the first time, the systematic failure of such methods in the time series domain when underlying patterns (e.g., dominant frequency or trend) are based on latent information rather than temporal regions. The latent feature importance postulation is highly relevant for the medical domain as many medical signals, such as EEG signals or sensor data for gait analysis, are commonly assumed to be related to the frequency domain. To the best of our knowledge, no existing post-hoc explainability method can highlight influential latent information for a classification problem. Hence, in this paper, we frame and analyze the problem of latent feature saliency detection. We first assess the explainability quality of multiple state-of-the-art saliency methods (Integrated Gradients, DeepLift, Kernel SHAP, Lime) on top of various classification methods (LSTM, CNN, LSTM and CNN trained via saliency guided training) using simulated time series data with underlying temporal or latent space patterns. In conclusion, we identify that Integrated Gradients and DeepLift, if redesigned, could be potential candidates for latent saliency scores.

§ 1 INTRODUCTION
Saliency methods aim to explain the predictions of deep learning models by highlighting important input features. These methods often assign scores to individual inputs (Guidotti et al., 2018; Ismail et al., 2020), collectively resulting in the detection of class-distinctive patterns. For image data this means assigning scores to positional information, namely pixels. Such a strategy suits image data, as the label is often associated with specific input regions. Recently, image saliency methods have been adopted for time series data Loeffler et al. (2022); Schlegel et al. (2020). They similarly assign importance scores to the pixel counterparts, namely "time points". These methods suit the time series problem when a temporal pattern is indicative of the class. In some time series problems, however, the label may depend on the latent features such as dominant frequency, state-space model parameters, or the overall trend of a non-stationary time series. In these cases, even though the classifier might successfully capture the latent space, the positional scores extracted from the classifier will not directly explain the importance of the underlying latent features. Hence, the generated saliency maps will not be directly interpretable and thus fail to fulfill their purpose.
The goal of this paper is to introduce, formulate and analyze the problem of latent feature saliency in deep time series classification problems, focusing on the fundamental Fourier series latent model. By extension, our study is replicable for other latent models. We summarize our main contributions below:
1. We draw attention to the problem of latent feature saliency detection in time series data. We formulate the shapelet- vs. latent-based pattern in time series classification and propose a definition for an ideal latent feature saliency method (Section 2).
2. We provide a comprehensive study of popular time series saliency methods including Integrated Gradients, DeepLift, Kernel SHAP and Lime (Section 3, Section 4) on top of multiple classification methods (LSTM, CNN, LSTM and CNN trained via saliency guided training).
3. We identify effective methods that can be extended to potentially tackle the problem of latent space saliency (Section 5).
§ 2 PROBLEM FORMULATION
Let $D = \left( {X,Y}\right)$ with a univariate time series $X \in \mathcal{X}$ and the binary label $Y \in \{ 0,1\}$ formulate a time series classification data set. Furthermore, let the mapping ${f}_{XY} : \mathcal{X} \mapsto \{ 0,1\}$ represent a deep learning-based classifier. In latent-representation learning, we assume a latent space $\mathcal{Z}$, a mapping from feature to latent space ${f}_{XZ} : \mathcal{X} \mapsto \mathcal{Z}$, and a latent-space-to-label mapping ${f}_{ZY} : \mathcal{Z} \mapsto \{ 0,1\}$, such that the classifier ${f}_{XY}$ can be learned via the feature-to-latent and latent-to-label mappings. This view has been adopted by several time series classifiers such as hidden Markov models (HMM) and recurrent neural networks (RNN). The learned latent representation exhibits properties shown to be significant in terms of explainability Mikolov et al. (2013); Charte et al. (2020). Instead of estimating ${f}_{XZ}$ as a black-box model, a parametric latent model (such as Fourier series models, state space models, linear and switching dynamical systems, or additive and multiplicative models) can be estimated via a neural network. These models are motivated by prior knowledge about the underlying data generation mechanism; thus, their parameters are often interpretable. A saliency method applied to this solution assigns scores to latent features in the $\mathcal{Z}$ space. In contrast, methods used for the black-box models usually lack explainability for the latent features.

The latent space assumption is relevant in many time series problems. Sound signals are often differentiated by amplitude and frequency; thus, the decision process behind audio classification is likely to be better explained by the Fourier latent space than by spatial importance scores. Vibration signal classification, as in earthquake or production line failure prediction, is likely to depend on frequency or amplitude as well. Financial time series classification often revolves around modeling trend and seasonality of the time series. Many signals in the medical domain, such as EEG signals, sensor data from wearable technologies for gait analysis for neurological disease progression, pain recognition, or medication level adjustment, are further strongly related to amplitude and frequency. These examples show that achieving time series explainability is heavily related to latent space assumptions.
§ 2.1 LATENT FEATURES VS. SHAPELETS
Ye & Keogh (2009) define shapelets as variable-length subsequences of time series which are maximally representative of a class. We define a feature-to-shapelet mapping ${f}_{XS} : \mathcal{X} \mapsto {\left\lbrack 0,1\right\rbrack }^{k}$ . Samples in $\mathcal{S}$ are normalized score vectors, determining which shapelet appears in a sample. Subsequently, shapelet-based classifiers predict the label based on an existing pattern in the time domain. These models are coordinated with saliency methods, which in this case are visually explainable since time points are directly expressive of both saliency scores and shapelets. The presence of informative shapelets does not contradict the assumption of a latent model. On the contrary, shapelets may appear as a proxy for latent information (see Figure 2). Nevertheless, from the explainability point of view, there is a notable difference between latent features and shapelets. As an example, a label correlated with the damping ratio of a vibration signal can be potentially predicted by shapelet-based classifiers; however, a conventional saliency method, applied to this problem, will only highlight a proxy of the informative latent feature, namely the existing fluctuations and oscillations of the time series. In conclusion, time series classification problems may be characterized by class differences in features which belong to the time domain as shapelets or to a latent domain. Current saliency methods can provide explainability for shapelets, but not directly for latent models.
§ 2.2 DEFINING A DESIRABLE SALIENCY METHOD FOR TIME SERIES
Figure 1 illustrates the setup of a time series classification problem with multiple possible intermediate latent spaces, enumerated with $i$ , and denoted as ${\mathcal{Z}}^{\left( i\right) }$ . A time series $X \in \mathcal{X}$ can be mapped to ${\mathcal{Z}}^{\left( i\right) }$ by the $i$ -th chosen latent model ${f}_{XZ}^{\left( i\right) }$ . Without loss of generality, we assume that there is only one latent feature ${Z}^{ * }$ which provides the best explanation for the classification task. The latent space that contains ${Z}^{ * }$ is denoted as ${\mathcal{Z}}^{\left( *\right) }$ .
Figure 1: Time series classification schematic over the space $\mathcal{X} \times \mathcal{Y}$ with latent space representations ${Z}^{\left( i\right) }$ , associated with saliency function $m\left( {Z}^{\left( i\right) }\right)$ and resulting saliency map ${M}^{\left( i\right) }$ . Current methods ${m}_{T}$ measure saliency of the feature space, yielding the map ${M}_{T}$ .
We define a saliency method as "reliable" if it assigns the highest score to ${Z}^{ * }$ above all other features throughout all latent spaces. To formulate the reliability definition, we consider a latent-aware saliency method $m : {\mathcal{Z}}^{\left( i\right) } \mapsto {\mathbb{R}}_{ + }^{\left| {Z}^{\left( i\right) }\right| }$ which produces a saliency map ${M}^{\left( i\right) }$ for ${\mathcal{Z}}^{\left( i\right) }$ . The reliability condition is then formulated as
$$
\forall i \neq *, \quad \max {M}^{\left( *\right) } > \max {M}^{\left( i\right) }.
$$

Note that during implementation, we have to define the possible set of latent models manually.
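As an illustration of the reliability condition, the sketch below compares the maximal saliency score of a designated latent space against all other manually defined candidate spaces. The dictionary of saliency maps and the space names are hypothetical placeholders, not part of the proposed framework's code.

```python
import numpy as np

def is_reliable(saliency_maps: dict, target_space: str) -> bool:
    """Reliability condition: the top score in the informative latent space Z^(*)
    must exceed the top score in every other candidate latent space Z^(i)."""
    top_target = np.max(saliency_maps[target_space])
    return all(top_target > np.max(m)
               for space, m in saliency_maps.items() if space != target_space)

# Hypothetical example: one saliency vector per manually chosen latent space.
maps = {"fourier": np.array([0.9, 0.2, 0.1]),   # assumed informative space Z^(*)
        "trend":   np.array([0.3, 0.1]),
        "time":    0.5 * np.random.rand(300)}
print(is_reliable(maps, target_space="fourier"))
```
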
The fundamental problem of existing saliency methods is that they only estimate the saliency map for the time domain and therefore lack appropriate output for features in other domains. Hence, none of the existing saliency methods meet the criteria for reliability. However, we argue that there might exist some promising failing methods, which require only minor adjustments to serve as desired saliency methods for time series. We define a saliency method ${m}_{T} : \mathcal{X} \mapsto {\mathbb{R}}_{ + }^{\left| X\right| }$ as promising if the produced map ${M}_{T} \in {\mathbb{R}}_{ + }^{T}$ bears enough information to infer ${M}^{\left( i\right) },\forall i$ (possibly via a simple mapping function, depicted as a purple arrow in Figure 1). In other words, ${m}_{T}$ can capture information about latent saliency, even though it cannot directly explain it. In this case, an extension of the promising method, representing the mapping from ${M}_{T}$ to ${M}^{\left( *\right) }$, establishes a desired latent saliency method.

Figure 2: Toy examples of multiple label-making scenarios. Influential time steps (regions with high saliency scores) are shaded in grey for frequency (peaks), amplitude (highest peaks), trend (a window enough for inferring about the trend), and shapelet (presence of the informative pattern).
Figure 2 schematically depicts the output of a good failing method when the label is associated with either the frequency or amplitude of a Fourier model, the trend of an additive model, or shapelets. In particular, highlighted regions are sufficient to infer the latent parameter (or equally shapelet). Putting the experiment into practice, Figure 3 presents heat maps of importance scores resulting from two exemplary failing methods.
Figure 3: Examples of well-performing explainability methods (top row) providing explanations that are interpretable to some extent, and completely uninterpretable saliency results (bottom row), shown for frequency, amplitude, shapelet, and phase shift experiments.

§ 3 EXPERIMENTAL FRAMEWORK
Before presenting the results of the empirical study, this section introduces the examined time series saliency methods, the data sets, and the implementation details.

Our study focuses on post-hoc saliency methods designed to explain single classification instances of trained models. Here, we investigate the following state-of-the-art saliency methods and group them into four families.

(1) Gradient-based feature attribution (FA) methods infer input feature importance based on the magnitude of the gradient of the output with respect to the input features. The attribution method Saliency Simonyan et al. (2014) directly employs gradients to generate saliency maps. Extensions of this basic method are Gradient $\times$ Input Shrikumar et al. (2016), DeconvNet Zeiler & Fergus (2014), Guided Backpropagation Springenberg et al. (2015) and SmoothGrad Smilkov et al. (2017). DeepLift Shrikumar et al. (2017) utilizes a neuron attribution-based difference-from-reference approach to assign scores. Integrated Gradients (IG) Sundararajan et al. (2017) calculates the path integral from a non-informative baseline input to the respective input feature, tackling the problem of gradient saturation Bastings & Filippova (2020). Relevance-based methods, e.g., Layer-wise Relevance Propagation (LRP) Bach et al. (2015) and Deep Taylor Decomposition Montavon et al. (2017), calculate attribution scores by propagating relevance scores from the output back through the network via designed propagation rules.

(2) Model-agnostic FA methods can be applied to any black-box classifier without access to the models' parameters Carrillo et al. (2021); Petsiuk et al. (2018). Methods such as Occlusion Zeiler & Fergus (2014), Meaningful Perturbations Fong & Vedaldi (2017) and RISE Petsiuk et al. (2018) assign saliency scores relative to the change in output when the respective feature is perturbed. LIME Ribeiro et al. (2016) fits local interpretable surrogate models to the classifier in the neighborhood of the target sample and calculates the saliency based on these models' parameters. Other methods are inspired by theorems from the field of game theory Datta et al. (2016); Lipovetsky & Conklin (2001); Strumbelj & Kononenko (2014). In particular, the application of the Shapley Value Shapley (1953) has achieved great popularity. Lundberg & Lee (2017) introduce the SHAP values method to measure feature importance by the Shapley value of a conditional expectation function of the to-be-explained model. (3) A different class of post-hoc methods generates counterfactual explanations (CF) as LASTS Guidotti et al. (2020), time series tweaking Karlsson et al. (2020), LatentCF++ Wang et al. (2021), CoMTE Ates et al. (2021) and Native Guide Delaney et al. (2021). These methods identify counter-samples to provide explainability by estimating the required variation in individual input features to change the classification outcome. Since our experiments focus on saliency maps, we exclude CF methods from our investigations in this paper.
For our study, we selected four candidate methods from different classes of post-hoc methods: Integrated Gradients (IG), DeepLift (DL), LIME and Kernel SHAP (SHAP). As for the classifiers, we utilize long short-term memory networks (LSTM) Hochreiter & Schmidhuber (1997) and convolutional neural networks (CNNs) Le Cun et al. (1989). Since the experiments focus on saliency detection, we also train the LSTM and CNN networks via a saliency-guided training procedure (SGT) Ismail et al. (2021). This procedure allows networks to produce more consistent saliency scores, as the saliency feedback is used for training the network.

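Since the attribution scores in this study are computed with the Captum library (see Section 3.2), the following is a minimal sketch of how IG and DeepLift attributions can be obtained for a time series classifier. The toy one-layer CNN shown here is only an illustrative stand-in, not the exact architecture used in the experiments.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients, DeepLift

# Illustrative one-layer CNN for univariate series of length 300 with two classes.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)
model.eval()

x = torch.randn(1, 1, 300)            # one time series sample (batch, channel, time)
baseline = torch.zeros_like(x)        # non-informative baseline input

ig_scores = IntegratedGradients(model).attribute(x, baselines=baseline, target=1)
dl_scores = DeepLift(model).attribute(x, baselines=baseline, target=1)
# Both return per-time-step attributions with the same shape as the input,
# which can then be rendered as the heat maps discussed in Section 4.
```
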
§ 3.1 DATA SET GENERATION
To demonstrate our findings, we designed a simulation study in which time-series data is generated based on the Fourier series model. The Fourier series is a well-known latent model for many natural scenarios Geweke & Singleton (1981); Bracewell (2000) and it is proven that any given univariate time series can be reconstructed from its Fourier latent space using a Fourier transformation function. The Fourier latent space can be defined as a matrix with three rows representing frequencies, amplitudes and phase shifts. In our experiments, the Fourier latent space is a matrix of $3 \times {10}$ parameters.
We generated a total of ten experiments to understand the response of different saliency methods to different patterns. Our ten experiments include four experiments with temporal shapelet patterns, two with latent amplitude patterns, two with latent frequency patterns, and two with latent phase shift patterns. In each experiment, we build a data set containing 2560 time series of equal length divided into two equally sized classes. For the shapelet experiments, each sample in the data set is generated by first randomly sampling from the latent space and then applying a Fourier transformation to reconstruct its temporal signal from the latent space matrix. Afterwards the time series samples in class 1 were superimposed with a dominant shapelet pattern positioned either at a random location (experiment 1), the end (experiment 2), middle (experiment 3) or start (experiment 4) in the temporal time series. For the latent feature experiments, the latent space matrices for class 0 were sampled from a latent space different from the latent space for class 1 . The difference was defined in terms of sampling intervals for frequency, amplitude or phase shift. A detailed description of the sampling distributions per experiment is presented in Table 3 in Appendix A.2. For each experiment, the training, validating and testing sets were generated by random sampling without replacement with a ratio of ${80}\% ,{10}\%$ and ${10}\%$ , respectively.
For assigning the labels to the data samples, we induced a simple linear relation between the latent or temporal patterns and the class labels. In the latent scenarios, two classes are distinguishable using a single decision boundary defined as ${Z}^{ * } =$ const., meaning that only one latent feature is class-distinctive. Likewise, in shapelet-related scenarios, the presence or absence of a specific shapelet decides the label of the data. This allows us to study the latent features individually and in a controlled manner. In such settings, potential poor results can be confidently attributed to the intrinsic weakness of the saliency methods, rather than inappropriate classifiers. The data generation mechanism and the resulting data sets are presented and described in detail in Appendix A.1 and A.2, respectively.
§ 3.2 IMPLEMENTATION DETAILS
In this paper, we investigate the performance of both the classifiers and the saliency methods, with a special focus on the interpretability of the saliency methods. To ensure comparable capacity across all classifiers, they were designed as simple one-layer networks with no dropout or other forms of additional regularization. The performance of saliency methods is strongly correlated with the classification performance, which is typically increased through more sophisticated and deeper networks. Therefore, by keeping the architecture simple, we intended to objectively evaluate and compare the explainability methods without the influence of optional variations, preventing overfitting or performance boosting.

All algorithms were implemented in the Python programming language. The classifiers were implemented using the deep learning library PyTorch Paszke et al. (2019) with the help of the wrapper PyTorch Lightning Falcon (2019). Hyper-parameter optimization was performed through the library Optuna Akiba et al. (2019). For the feature attribution techniques, the implementations from the PyTorch based model interpretability library Captum Kokhlikyan et al. (2020) were employed.
§ 4 RESULTS
Table 1 reports the average accuracy and F1 score of the chosen classifiers across our ten data sets grouped by the type of the experiments. The results show that overall the CNN trained via saliency-guided training achieves the highest classification performance.
Table 1: Average classification performance on test data across all synthetic data sets.

| Classifier | Shapelet Acc. | Shapelet F1 | Frequency Acc. | Frequency F1 | Phase shift Acc. | Phase shift F1 | Amplitude Acc. | Amplitude F1 |
|---|---|---|---|---|---|---|---|---|
| LSTM | 0.8535 | 0.8466 | 0.9749 | 0.9470 | 0.5157 | 0.4914 | 0.9981 | 0.9981 |
| LSTM + SGT | 0.8242 | 0.8417 | 0.9082 | 0.9117 | 0.5352 | 0.4145 | 0.9160 | 0.9230 |
| CNN | 0.6221 | 0.7439 | 0.9610 | 0.9633 | 0.9629 | 0.9625 | 0.9981 | 0.9981 |
| CNN + SGT | 0.8721 | 0.9138 | 0.9610 | 0.9633 | 0.9649 | 0.9634 | 1.0000 | 1.0000 |

It appears that the LSTM classifier is seriously challenged during phase-shift experiments. This could be due to the vanishing gradient problem of LSTMs, which hinders proper classification if informative patterns are placed in the early time points. Surprisingly, LSTM with saliency-guided training procedure performs slightly worse than LSTM. Unlike the LSTM, the CNN largely benefits from the saliency-guided training procedure, especially in the shapelet experiment.
Figure 4: Saliency maps by IG, DL, Lime and SHAP for the CNN+SGT on a frequency, amplitude, phase shift and shapelet experiment, respectively. Explanations by IG and DL clearly focus on aspects related to the latent feature (peaks and valleys for amplitude and frequency, beginning of time sequence for phase shift) and the shapelet. Maps of Lime and SHAP are visually uninterpretable.
Next, to investigate the explainability of the saliency methods, we visualize their output via color-coded heat maps and overlay them onto the original time series (Figure 4). This allows us to assess the relevance of the saliency scores and the positional information directly. In the shapelet experiments, we expect the maps to highlight the shapelet itself. In the amplitude and frequency experiments, we expect an oscillating heat map with a focus on the peaks (or valleys) and extreme values of the time series, respectively. Finally, in the phase shift experiments, we expect an emphasis on the beginning of the time series. Figure 4 compares the saliency maps of the post-hoc saliency methods (IG, DL, LIME, and SHAP) plotted for one sample per experiment group (shapelet- and latent- experiments). Visual explanations provided by IG and DL align with our expectations for all experiments and are comparatively easy to interpret. For example, in the amplitude experiments, IG and DL highlight the peaks whose values are the direct proxies for the latent feature. On the other hand, the heat maps of SHAP and LIME do not yield the expected visual patterns.
Figure 5: Saliency heat map of IG, DL, LIME and SHAP across all samples of the positive class (occurrence of a shapelet) in the test data set. The IG and DL heat maps show a clear saliency pattern in the middle of the time series, in which the shapelet occurred. The SHAP and LIME heat maps, however, resemble a random saliency assignment.
We expected that the four saliency methods perform reasonably at least for the shapelet experiments. To investigate this further on the entire data set, we generated Figure 5. These heat maps depict aggregated scores of the saliency methods for the middle-positioned shapelet experiments. In these maps, each row represents a test sample and each column a time point. Figure 5 shows that both SHAP and LIME fail to discover the shapelet pattern across the entire data set. The other two methods IG and DL, however, performed successfully: their aggregated heat map clearly highlights the position of the middle shapelet.
§ 5 DISCUSSION
Promised effectiveness of saliency methods for shapelet-related classification. The goal of this paper was to demonstrate the fundamental problem of adopted saliency methods for time series data in latent-related classification problems. The methods were expected to be effective in case of the presence of positional information, i.e., shapelets. However, experiments show that some of the methods performed poorly even in simple shapelet scenarios. In particular, explanations provided by different methods mostly did not align. This finding is in accordance with Neely et al. (2021). Our observation raises caution regarding the use of saliency methods for time series data, previously pointed out by Loeffler et al. (2022); Parvatharaju et al. (2021); Schlegel & Keim (2021). In our findings, IG and DL showed reliable performances throughout the experiments when paired with effective classifiers. Nevertheless, we encourage using various explainability methods as multiple explanations can coexist Wiegreffe & Pinter (2019).

Need for latent feature saliency methods for time series classification. We emphasize the need for developing latent feature saliency methods for time series classification. Adopted image saliency methods are unable to parse explainable and meaningful saliency scores for time series data with class-distinctive latent patterns. As discussed in Section 2.2, we proposed a definition for "promising failing" methods as ones which produce positional scores associated with informative latent parameters. In the case of Fourier series models, this corresponds to highlighting peaks or valleys, highest peaks, or early time points in case of frequency-, amplitude- and phase-shift-related classification problems, respectively. Not all SoA methods could exhibit such behavior. We hypothesize that this was caused by the independence assumption between neighboring data points, which is made by the tested approaches. Under this assumption, the model neglects the relative temporal ordering of input features, leading to the inability to detect temporal dependencies. This finding is also reported by Lim et al. (2021).

We observed that the IG and DL methods consistently performed well for shapelet-related problems and produced useful saliency maps for latent-related problems. Note that despite calling these methods "promising", the need for directly scoring the latent parameters remains. We expect this problem to exacerbate for latent-related settings whose features contain less legible associations with the positional information, e.g., rates of changes in state-space models.
Future work. To extend the empirical investigations, we suggest considering other time series latent models. We further encourage the development of methods that can incorporate multiple feature spaces into the saliency analysis. In this regard, there is potential for extracting latent saliency scores directly from positional saliency maps, given that the target latent model is known. Our findings show that the outputs of IG and DL are associated with the Fourier latent model. This approach (i.e., mapping positional scores to latent scores) serves well as a baseline method.

Throughout our study, the evaluation of saliency maps was performed by visual inspection only, since the primary purpose of this paper is to formulate the latent feature saliency problem and motivate further investigation of this topic through a simple experimental framework. For future work, we encourage using quantitative evaluation metrics to assess the performance of different saliency methods objectively. Furthermore, we motivate the extension of our experiments to more complex real-world data sets.

Our analyses were done on the sample level, i.e., we studied individual saliency maps to infer the underlying classification mechanism. Intra-class studies of variability and variance of saliency maps might uncover further information regarding the classification.
§ 6 CONCLUSION
Explainability of time series models is an emerging field of research. To build trust in AI, interpretation and explanation of black-box classifiers are crucial. Various image saliency methods have been introduced to time series problems. They focus on positional information of the input features, providing spatial explanations. In time series data, however, the class label may depend on a latent model instead of positional information. To the best of our knowledge, the performance and behavior of saliency methods in such settings have not been explored, nor has a saliency model accounting for latent features been developed. We demonstrated this problem by empirically showing that if the class label is associated with latent features of the time series instead of the presence of a specific shape, common saliency methods do not provide accurate or interpretable explanations. Finally, we presented an outline for future research to develop extensions for existing saliency methods providing latent saliency results based on time-step-wise importance scores. Our work highlights the need for research on latent saliency detection for deep time series classification.

§ ACKNOWLEDGMENTS AND FUNDING
To ensure the integrity of the double-blind review process, the acknowledgement and funding statements will be added after the review process.

§ CONFLICTS OF INTEREST
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
§ REPRODUCIBILITY STATEMENT
The synthetic data generation algorithm is described in Appendix A.1. The specific data sets employed are stated in terms of the sampling intervals of the latent features in Appendix A.2. Implementation details such as the employed libraries are provided in Section 3.2. A GitHub repository containing the complete code base will be published upon final paper publication.

papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/bHbf5-nE8N/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,249 @@
# CGXPLAIN: RULE-BASED DEEP NEURAL NETWORK EXPLANATIONS USING DUAL LINEAR PROGRAMS

Anonymous authors

Paper under double-blind review

## Abstract

Rule-based surrogate models are an effective and interpretable way to approximate a Deep Neural Network's (DNN) decision boundaries, allowing humans to easily understand deep learning models. Current state-of-the-art decompositional methods, which are those that consider the DNN's latent space to extract more exact rule sets, manage to derive rule sets at high accuracy. However, they a) do not guarantee that the surrogate model has learned from the same variables as the DNN (alignment), b) only allow optimising for a single objective, such as accuracy, which can result in excessively large rule sets (complexity), and c) use decision tree algorithms as intermediate models, which can result in different explanations for the same DNN (stability). This paper introduces Column Generation eXplainer to address these limitations - a decompositional method using dual linear programming to extract rules from the hidden representations of the DNN. This approach allows optimising for any number of objectives and empowers users to tweak the explanation model to their needs. We evaluate our results on a wide variety of tasks and show that CGX meets all three criteria, by having exact reproducibility of the explanation model that guarantees stability and reduces the rule set size by $> {80}\%$ (complexity) at improved accuracy and fidelity across tasks (alignment).
## 1 INTRODUCTION
In spite of state-of-the-art performance, the opaqueness and lack of explainability of DNNs has impeded their wide adoption in safety-critical domains such as healthcare or clinical decision-making. A promising solution in eXplainable Artificial Intelligence (XAI) research is presented by global rule-based surrogate models, that approximate the decision boundaries of a DNN and represent these boundaries in simple IF-THEN-ELSE rules that make it intuitive for humans to interact with (Zilke et al., 2016; Shams et al., 2021). Surrogate models often use decompositional approaches, which inspect the latent space of a DNN (e.g., its gradients) to improve performance, while pedagogical approaches only utilise the inputs and outputs of the DNN.
In pursuit of the most accurate surrogate models, recent literature has primarily focussed on improving the fidelity between the DNN and the surrogate model, which refers to the accuracy of the surrogate model when predicting the DNN’s outputs $\widehat{y}$ instead of the true labels $y$ . While state-of-the-art methods achieve high fidelity (Contreras et al., 2022; Espinosa et al., 2021), there are several qualitative problems with these explanations that hinder their usability in practice and have been mostly neglected in previous studies. First, if features are not fully independent, there is no guarantee that a surrogate model has learned from the same variables as the DNN, meaning that the surrogate model may provide misleading explanations that do not reflect the model's behaviour (alignment). Second, most rule extraction models optimise for the accuracy of the resulting rule set as a single objective, which can result in excessively large rule sets containing thousands of rules, making them impractical to use (complexity). Third, existing decompositional methods use tree induction to extract rules, which tends to be unstable and can result in different explanations for the same DNN, sometimes leading to more confusion than clarification (stability).
This paper introduces CGX (Figure 1) - a flexible rule-based decompositional method to explain DNNs at high alignment and stability, requiring only a fraction of the rules compared to current state-of-the-art methods. We combine and extend two recent innovations of decompositional explanations (i.e., using information from the hidden layers of the DNN) (Espinosa et al., 2021) and rule induction literature (i.e., generating boolean rule sets for classification) (Dash et al., 2018).

Figure 1: Overview of the decompositional CGX algorithm, showing the process to get from the DNN as starting point (1) to the explanation model (4) that approximates the DNN's decision boundaries. We 1(a) extract the rule set ${R}_{x \mapsto {\widehat{y}}_{D}}$ by training an intermediate model on the DNN’s predictions, and 1(b) on the error of that initial ruleset for each hidden layer. Our intermediate extraction through Column generation (2) allows optimising for multiple objectives to extract short and concise rule sets. The substitution step (3) rewrites the intermediate rules ${I}_{{h}_{j} \mapsto {\widehat{y}}_{D}}$ in terms of the input variables ${I}_{x \mapsto {\widehat{y}}_{D}}$ and adds them to the surrogate model (4) if they increase its fidelity.
First, we suggest a paradigm shift for rule-based surrogate explanations that goes beyond optimising for accuracy as a single objective, allowing users to tailor the explanation to their needs. Concretely, we formulate the objective function of the intermediate model to penalise the predictive loss as well as the number of rules and terms as a joint objective. Additionally, CGX makes it easy to introduce further objectives. Second, we use a column generation approach as the intermediate model, which has proven to be more accurate and stable than tree induction and other rule mining methods. Third, our algorithm introduces intermediate error prediction, where the information of the DNN's hidden layers is used to predict the error of the pedagogical solution (Equation 1). Fourth, we reduce the noise created by adding all rules from the DNN's latent representation by a) conducting direct layer-wise substitution, which reduces error propagation of the recursive substitution step used in prior methods, and b) dismissing rules that do not improve the performance of the explanation model. This also reduces the need to choose between decompositional and pedagogical methods, since CGX converges to the pedagogical solution in its worst-case performance.

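To make the joint objective concrete, the sketch below scores a candidate DNF rule set by its fidelity loss against the DNN's predictions plus penalties on the number of rules and terms. It only illustrates the multi-objective trade-off described above; the actual intermediate model optimises such a trade-off with a column generation linear program, and the penalty weights and rule encoding shown here are placeholders.

```python
from typing import List, Tuple
import numpy as np

Rule = List[Tuple[str, str, float]]   # conjunction of (feature, operator, threshold) terms

def rule_set_objective(rules: List[Rule], X: dict, y_dnn: np.ndarray,
                       lambda_rules: float = 0.1, lambda_terms: float = 0.01) -> float:
    """Fidelity loss against the DNN labels plus complexity penalties (illustrative weights)."""
    def satisfied(rule: Rule, i: int) -> bool:
        return all((X[f][i] <= t) if op == "<=" else (X[f][i] > t) for f, op, t in rule)

    n = len(y_dnn)
    y_rules = np.array([any(satisfied(r, i) for r in rules) for i in range(n)], dtype=int)
    loss = float(np.mean(y_rules != y_dnn))                       # disagreement with DNN predictions
    complexity = lambda_rules * len(rules) + lambda_terms * sum(len(r) for r in rules)
    return loss + complexity
```
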
## CONTRIBUTIONS
- Quality metrics: We formalise three metrics (alignment, complexity, stability) that surrogate explanations need to achieve to be feasibly applied as an explanation model across datasets.
- Alignment: We improve alignment between the original and surrogate models, achieving 1-2% higher fidelity of the rule-based predictions and 10-20% higher Ranked Biased Overlap (RBO) of ranked feature importance representations.
- Complexity: We reduce the size of the rule sets used to explain the DNN, achieving rule sets with >80% less terms compared to state-of-the-art decompositional baselines.
- Stability: Our explanations are guaranteed to produce identical explanations for the same underlying model.
- Decompositional value: We demonstrate that decompositional methods are particularly useful for harder tasks, while pedagogical methods are sufficient for simple tasks.
## 2 RELATED WORK
XAI & Rule-based explanations. XAI research has the objective of understanding why a machine learning model makes a prediction, as well as how the process behind the prediction works (Arrieta et al., 2020). This helps to increase trustworthiness (Floridi, 2019) and identify causality (Murdoch et al., 2019), as well as to establish confidence (Theodorou et al., 2017), fairness (Theodorou et al., 2017), and accessibility (Adadi & Berrada, 2018) in model predictions. Global explainability methods attempt to learn a representation that applies to every sample in the data, instead of only individual samples or features (local), and then provide a set of generalisable principles, commonly referred to as a surrogate model (Arrieta et al., 2020). Surrogate models can be either pedagogical or decompositional (Islam et al., 2021). Pedagogical methods train an explainable model on the predictions of the DNN $\widehat{y}$ instead of the true labels $y$, still treating the DNN as a black box (Confalonieri et al., 2020; Saad & Wunsch II, 2007). Pedagogical methods have a faster runtime since they ignore the latent space of the DNN, but sacrifice predictive performance (Zilke et al., 2016). Decompositional methods inspect the model weights or gradients and can therefore learn a closer representation of how the model makes a prediction at the expense of runtime.

One promising category of global decompositional methods are rule extraction models such as DeepRED (Zilke et al., 2016), REM-D (Shams et al., 2021), ECLAIRE (Espinosa et al., 2021), and DeXIRE (Contreras et al., 2022). These methods learn a set of conjunctive (CNF) or disjunctive normal form (DNF) rules ${\bar{R}}_{x \mapsto \widehat{y}}$ that approximate the neural network’s predictions $\widehat{y}$ (Zilke et al., 2016). Existing decompositional methods often use decision tree algorithms, such as C5.0 (Pandya & Pandya, 2015), for intermediate rule extraction. Thus, they learn rules that represent the relationship between each hidden layer and the DNN predictions ${R}_{{h}_{i} \mapsto \widehat{y}}$ , which are then recursively substituted to be rewritten in terms of the input features as ${R}_{x \mapsto \widehat{y}}$ (Shams et al.,2021). While existing surrogate methods achieve high fidelity, the resulting rule set $R$ is often still too large (thousands of rules) to clarify the model's behaviour in practice. Recent research has attempted to reduce the complexity of rule-based surrogates by running different decision tree algorithms, pruning methods (Shams et al., 2021), or clause-wise substitution (Espinosa et al., 2021). However, existing rule-based surrogate algorithms are heavily dependent on tree-based models used for rule generation. Thus, the performance is significantly sacrificed if the tree depth is too heavily restricted, despite reducing the size of the rule set.
|
| 42 |
+
|
| 43 |
+
Rule induction methods Another approach to explainability is to use explainable-by-design models, one of which are rule-based representations. Many of these methods use rule mining which first produces a set of candidate clauses and then implements a rule selection algorithm which selects or ranks the rules from that search space. The problem with this is that the search space is inherently restricted (Lakkaraju et al., 2016; Wang et al., 2017). Another class of methods, such as RIPPER (Cohen, 1995) construct their rule sets by greedily adding the conjunction that explains most of the remaining data. This approach comes with the problem that the rule sets are not guaranteed to be globally optimal and commonly result in large rule sets. Two popular state-of-the-art rule induction methods that aim to control rule set complexity are Bayesian Rule Sets (BRS) (Wang et al., 2017) and Boolean rules from Column Generation (CG). BRS use probabilistic models with prior parameters to construct small-size DNF rule sets. Column generation uses binarisation and large linear programming techniques to efficiently search over the exponential number of possible clauses, where the rule set size can be restricted with a complexity constraint in the objective function. While all of the above rule induction methods could be used for the rule extraction, we chose CG due to its stability and flexible formulation of the objective function.
|
| 44 |
+
|
| 45 |
+
## 3 METHODOLOGY

### 3.1 QUALITY METRICS

To improve on the shortcomings of existing decompositional methods, we first provide formal definitions to measure alignment, complexity, and stability. We assume an original model $f\left( x\right)$ (DNN) with hidden layers ${h}_{i}$, $i = 1,\ldots ,d$, and a rule-based surrogate model $g\left( {f\left( x\right) }\right)$ consisting of the rule set ${R}_{x \mapsto \widehat{y}}$ that was extracted using an intermediate model $\psi \left( \cdot \right)$.

We define complexity as the size of the explanation rule set $\left| {R}_{x \mapsto \widehat{y}}\right|$, expressed as the total number of clauses over all rules in $R$; a good surrogate keeps this quantity small, i.e., it aims for $\min \left| {R}_{x \mapsto \widehat{y}}\right|$.
We measure alignment between ${f}_{x}$ and ${g}_{x}$ in two different ways. First, we look at performance alignment as fidelity, which measures the predictive accuracy of the surrogate predictions ${\widehat{y}}_{g}$ against the original model predictions ${\widehat{y}}_{f}$ as ${\mu }_{f, g} = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {{\widehat{y}}_{f}^{\left( i\right) } = {\widehat{y}}_{g}^{\left( i\right) }}\right)$. Second, we assess the feature alignment of the resulting explanations. Feature importance is commonly used to understand which variables a model relies on when making predictions, represented as a ranked list. To validate that ${f}_{x}$ and ${g}_{x}$ are well-aligned, we want to ensure that both models rely on the same input features from $X$ in their predictions. Assuming two ranked lists $S$ and $T$, we calculate the Ranked Biased Overlap ${\varphi }_{ST}$ (Webber et al., 2010) as $\varphi \left( {S, T, p}\right) = \left( {1 - p}\right) \mathop{\sum }\limits_{{d = 1}}^{\infty }{p}^{d - 1}{A}_{d}$, where ${A}_{d}$ is the ratio of list overlap at depth $d$ and ${w}_{d} = \left( {1 - p}\right) {p}^{d - 1}$ is the geometric weight used to form the weighted sum over all evaluation depths.
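A minimal sketch of these two alignment measures, assuming DNN and surrogate predictions are given as NumPy arrays and feature rankings as ordered lists of names (the truncated RBO below approximates the infinite sum; the function names are illustrative and not part of CGX):

```python
import numpy as np

def fidelity(y_f: np.ndarray, y_g: np.ndarray) -> float:
    """Fraction of samples on which the surrogate agrees with the DNN."""
    return float(np.mean(y_f == y_g))

def rbo(S: list, T: list, p: float = 0.9, depth: int = 50) -> float:
    """Ranked Biased Overlap of two ranked lists, truncated at `depth`."""
    score = 0.0
    for d in range(1, depth + 1):
        A_d = len(set(S[:d]) & set(T[:d])) / d      # overlap ratio at depth d
        score += (1 - p) * p ** (d - 1) * A_d       # geometric weight w_d
    return score

y_dnn, y_rules = np.array([1, 0, 1, 1]), np.array([1, 0, 0, 1])
print(fidelity(y_dnn, y_rules))                      # 0.75
print(round(rbo(["f3", "f1", "f2"], ["f1", "f3", "f2"]), 3))
```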
Finally, we define stability as the requirement that repeated calls of the explanation method on the same underlying model produce identical rule sets. We run the explanation model ${g}_{x}$ with different seeds $s = \{ 0,1,2,\ldots , j\}$ and require the resulting rule sets to be equivalent, i.e., ${R}_{x \mapsto \widehat{y}}\left( {s}_{1}\right) = {R}_{x \mapsto \widehat{y}}\left( {s}_{2}\right)$ for any two seeds ${s}_{1},{s}_{2}$.
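This check is straightforward to automate. A minimal sketch, where `extract_fn(seed)` is a placeholder for any rule-extraction call on a fixed DNN and dataset (e.g. a CGX run):

```python
def is_stable(extract_fn, seeds=(0, 1, 2)) -> bool:
    """True iff the explainer returns an identical rule set for every seed."""
    rule_sets = [frozenset(extract_fn(seed)) for seed in seeds]
    return all(rs == rule_sets[0] for rs in rule_sets)

# toy check with a deterministic extractor
print(is_stable(lambda seed: {"x1 > 0.5 AND x2 <= 0.5 -> class 1"}))  # True
```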
### 3.2 COLUMN GENERATION AS INTERMEDIATE MODEL

We hypothesise that the majority of the complexity, stability, and alignment issues stem from the choice of the intermediate model $\psi \left( \cdot \right)$ in state-of-the-art decompositional methods. We use an adapted version of the column generation solver outlined in Dash et al. (2018). Instead of using $\psi \left( \cdot \right)$ as a standalone model, we will show that the column generation solver is well-suited as the intermediate model in decompositional methods, replacing the commonly used tree-based algorithms such as C4.5/C5.0 (Zilke et al., 2016; Shams et al., 2021). We start from the restricted master linear program of Dash et al. (2018), which minimises the Hamming loss, i.e., the number of clauses that would have to be removed to classify each incorrectly classified sample correctly, subject to an error and a complexity constraint. We update the negative reduced cost of the pricing subproblem from Dash et al. (2018) to include hyperparameters for the number of clauses $\left( {\lambda }_{0}\right)$ and the number of terms $\left( {\lambda }_{1}\right)$, which are linked to the complexity constraint as a dual variable. This formulation also makes it simple to add further parameters to the complexity constraint and the negative reduced cost (e.g., adding a constraint that penalises rules or clauses for only one particular class).
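One way to read the resulting trade-off is as a penalised objective of the following general shape (an illustrative shorthand in our notation, not the exact master/pricing formulation of Dash et al. (2018)):

$$
\min_{R} \; \mathcal{L}_{\text{Hamming}}\left( R\right) \; + \; {\lambda }_{0}\left| R\right| \; + \; {\lambda }_{1}\mathop{\sum }\limits_{{c \in R}}\left| c\right| ,
$$

where $\left| R\right|$ is the number of clauses and $\left| c\right|$ the number of terms in clause $c$, so larger ${\lambda }_{0}$ and ${\lambda }_{1}$ push the solver towards smaller rule sets.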
### 3.3 CG EXPLAINER

ECLAIRE outperforms other decompositional methods on fidelity, rule set size, and run time. Using column generation instead of tree induction as the intermediate model $\psi \left( \cdot \right)$, we reformulate the ECLAIRE algorithm as shown in Algorithm 1, with the core objective of improving the three quality metrics set out above. We introduce two versions of the column generation explainer - a pedagogical (CGX-ped) and a decompositional implementation (CGX-dec).

CGX-ped extracts rules from the intermediate model to predict the DNN predictions ${\widehat{y}}_{D}$. This method ignores the latent space of the DNN, but can still outperform standalone column generation because it is guided by the DNN's predictions:
$$
{\widehat{y}}_{\text{ped}} = {R}_{x \mapsto {\widehat{y}}_{D}}\left( X\right) = \psi \left( {X,{\widehat{y}}_{D}}\right) \tag{1}
$$
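The pedagogical pattern itself is independent of the rule learner. A minimal sketch of Equation 1 (using a scikit-learn decision tree purely as a stand-in for the CG solver $\psi$, and assuming `dnn` is a trained Keras-style classifier; the names are illustrative):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def cgx_ped(dnn, X: np.ndarray, surrogate=None):
    """Fit a surrogate on the DNN's predictions instead of the true labels."""
    y_dnn = np.argmax(dnn.predict(X), axis=1)        # \hat{y}_D
    surrogate = surrogate or DecisionTreeClassifier(max_depth=3)
    surrogate.fit(X, y_dnn)                          # psi(X, \hat{y}_D)
    return surrogate, y_dnn

# usage (assuming a trained classifier `dnn` and training inputs `X_train`):
# rules, y_dnn = cgx_ped(dnn, X_train)
# print(export_text(rules))                          # human-readable IF-THEN rules
```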
CGX-dec (Algorithm 1) introduces three key innovations over other decompositional methods. First, we do not start with an empty rule set but use the pedagogical solution (Equation 1) as a starting point (line 2). Second, building on the pedagogical rule set, the algorithm iterates through the hidden layers. To improve on the pedagogical solution at each layer, we run intermediate error prediction: we apply the intermediate model $\psi \left( \cdot \right)$ to each hidden layer to extract rules that predict the prediction error $\widehat{e}$ of the current best rule set (line 5). That is, we specifically learn rules that discriminate between correct and false predictions of the current best rule set, and therefore obtain rules that can improve this solution. The final update is the substitution method - previous approaches recursively replace the terms (Shams et al., 2021) or clauses (Espinosa et al., 2021) of the hidden layer ${h}_{j + 1}$ with the terms for each output class from the previous layer ${h}_{j}$ until all hidden rules can be rewritten in terms of the input features $X$. Since not every hidden layer can be perfectly represented in terms of the input, the substitution step always contains an error, which propagates down the layers as the same method is applied recursively. Instead, we use the direct rule substitution step outlined in Algorithm 2.
Algorithm 1 CGX-dec

---

Input: DNN ${f}_{\theta }$ with layers $\left\{ {{h}_{0},{h}_{1},\ldots ,{h}_{d + 1}}\right\}$

Input: Labelled training data $X = \left\{ {{x}^{\left( 1\right) },\ldots ,{x}^{\left( N\right) }}\right\} ; Y = \left\{ {{y}^{\left( 1\right) },\ldots ,{y}^{\left( N\right) }}\right\}$

Output: Rule set ${R}_{x \mapsto \widehat{y}}$

${\widehat{y}}^{\left( 1\right) },\ldots ,{\widehat{y}}^{\left( N\right) } \leftarrow \arg \max \left( {{h}_{d + 1}\left( {x}^{\left( 1\right) }\right) }\right) ,\ldots ,\arg \max \left( {{h}_{d + 1}\left( {x}^{\left( N\right) }\right) }\right)$

${R}_{x \mapsto \widehat{y}} \leftarrow \psi \left( {X,\widehat{y}}\right)$

for hidden layer $i = 1,\ldots , d$ do

${x}^{\prime \left( 1\right) },\ldots ,{x}^{\prime \left( N\right) } \leftarrow {h}_{i}\left( {x}^{\left( 1\right) }\right) ,\ldots ,{h}_{i}\left( {x}^{\left( N\right) }\right)$

$\widehat{e} \leftarrow \left( {{R}_{x \mapsto \widehat{y}}\left( X\right) \neq \widehat{y}}\right)$

${R}_{{h}_{i} \mapsto \widehat{e}} \leftarrow \psi \left( \left\{ {\left( {{x}^{\prime \left( 1\right) },{\widehat{e}}_{1}}\right) ,\ldots ,\left( {{x}^{\prime \left( N\right) },{\widehat{e}}_{N}}\right) }\right\} \right)$

for rule $r \in {R}_{{h}_{i} \mapsto \widehat{e}}$ do

$s \leftarrow$ substitute(r)

${I}_{x \mapsto \widehat{y}} \leftarrow s \cup {R}_{x \mapsto \widehat{y}}$

if $\operatorname{fid}\left( {{\widehat{y}}_{I},\widehat{y}}\right) > \operatorname{fid}\left( {{\widehat{y}}_{R},\widehat{y}}\right)$ then

${R}_{x \mapsto \widehat{y}} \leftarrow {I}_{x \mapsto \widehat{y}} \cup {R}_{x \mapsto \widehat{y}}$

end if

end for

end for

return ${R}_{x \mapsto \widehat{y}}$

---
Similar to the CG solver, we first binarise the input features into rule thresholds (Algorithm 2, line 1). After computing the conjunctions of the candidate rules, we calculate the error of each candidate and select the set of candidate clauses with the lowest error compared to the hidden-layer rule's predictions ${\widehat{y}}_{{h}_{ij}}$ (Algorithm 2, line 3). Knowing that the substitution step still contains an error, some rules contribute more to the performance than others (rules with high errors are likely to decrease predictive performance). Therefore, the last update in Algorithm 1 is that rules resulting from the substitution step are only added to the rule set if they improve on the pedagogical solution (lines 9 & 10).
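Read as code, the decompositional loop has the following shape. This is a schematic sketch only: `psi`, `substitute_rule`, and the entries of `hidden_layers` stand in for the CG solver, the direct substitution of Algorithm 2, and the DNN's hidden-layer activation functions, respectively.

```python
import numpy as np

def fidelity(a, b):
    return float(np.mean(np.asarray(a) == np.asarray(b)))

def cgx_dec(X, y_dnn, hidden_layers, psi, substitute_rule, predict_rules):
    """Schematic CGX-dec loop: pedagogical start, then layer-wise error rules.

    psi(features, targets)   -> rule set extracted by the intermediate model
    substitute_rule(r, X)    -> hidden-layer rule rewritten over the input features
    predict_rules(rules, X)  -> class predictions of a rule set
    hidden_layers            -> callables mapping X to the activations h_i(X)
    """
    rules = psi(X, y_dnn)                                   # pedagogical solution
    for h_i in hidden_layers:
        acts = h_i(X)                                       # latent representation
        err = predict_rules(rules, X) != y_dnn              # error of the current rule set
        for r in psi(acts, err):                            # rules that predict that error
            candidate = rules | {substitute_rule(r, X)}     # direct substitution (Alg. 2)
            if fidelity(predict_rules(candidate, X), y_dnn) > \
               fidelity(predict_rules(rules, X), y_dnn):
                rules = candidate                           # keep only if fidelity improves
    return rules
```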
## 4 EXPERIMENTS

Given the alignment, complexity, and stability shortcomings of existing methods, we design computational experiments to answer the following research questions:

- Q1.1 Performance alignment: Does the proven higher performance of column generation rule sets lead to higher fidelity with the DNN?

- Q1.2 Feature alignment: How well do aggregate measures such as feature importance from the rule set align with local explanation methods of the DNN?

- Q2 Complexity: Can we control the trade-off between explainability (i.e., low complexity) and accuracy by optimising for a joint objective?

- Q3 Stability: Do multiple runs of our method produce the same rule set for the same underlying model?

- Q4 Decompositional value: Is the performance gain of decompositional methods worth the higher time complexity compared to pedagogical methods?
Algorithm 2 Direct rule substitution

---

Input: rule ${r}_{{h}_{ij} \mapsto \widehat{y}}$

Input: Training data $X = \left\{ {{x}^{\left( 1\right) },\ldots ,{x}^{\left( N\right) }}\right\}$

Hyperparameter: # of rule candidate combinations $k$

Output: substituted rule(s) ${r}_{x \mapsto \widehat{y}}$

${X}_{\text{bin}} \leftarrow$ BinarizeFeatures(X, bins)

${r}_{\text{cand}} \leftarrow$ ComputeConjunctions $\left( {k,{X}_{\text{bin}}}\right)$

${Errors}_{{r}_{\text{cand}}} \leftarrow 1 - \frac{1}{N}\mathop{\sum }\limits_{{n = 1}}^{N}\left( {{\widehat{y}}_{{h}_{ij}}^{\left( n\right) } = {\widehat{y}}_{{r}_{\text{cand}}}^{\left( n\right) }}\right)$

${r}_{x \mapsto \widehat{y}} \leftarrow \arg \mathop{\min }\limits_{{{r}_{\text{cand}}}}\left( {{Errors}_{{r}_{\text{cand}}}}\right)$

return ${r}_{x \mapsto \widehat{y}}$

---
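A schematic rendering of the substitution step (illustrative only: CGX binarises with the same thresholds as the CG solver, whereas quantile thresholds and brute-force enumeration are used here for brevity):

```python
import numpy as np
from itertools import combinations

def direct_substitution(X, y_hidden_rule, bins=4, k=2):
    """Find the conjunction of input thresholds that best mimics one hidden-layer rule.

    y_hidden_rule: boolean vector with the hidden-layer rule's firing pattern on X.
    Returns the best conjunction as ((feature_index, threshold), ...) and its error.
    """
    n, d = X.shape
    # 1. binarise: candidate literals of the form x_j > t for quantile thresholds t
    literals = []
    for j in range(d):
        for t in np.quantile(X[:, j], np.linspace(0, 1, bins + 2)[1:-1]):
            literals.append(((j, float(t)), X[:, j] > t))
    # 2. evaluate conjunctions of up to k literals and keep the lowest-error candidate
    best, best_err = None, np.inf
    for size in range(1, k + 1):
        for combo in combinations(literals, size):
            fires = np.logical_and.reduce([mask for _, mask in combo])
            err = float(np.mean(fires != y_hidden_rule))   # disagreement with the hidden rule
            if err < best_err:
                best, best_err = tuple(lit for lit, _ in combo), err
    return best, best_err
```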
### 4.1 BASELINES & SETUP

We use both pedagogical and decompositional explanation baselines in our experiments. For pedagogical baselines, we re-purpose state-of-the-art rule induction and decision tree methods to be trained on the DNN predictions $\widehat{y}$ instead of the true labels $y$. Concretely, we use the C5.0 decision tree algorithm (Pandya & Pandya, 2015), Bayesian Rule Sets (Wang et al., 2017), and RIPPER (Cohen, 1995). As a decompositional baseline, we use the ECLAIRE algorithm as implemented in Espinosa et al. (2021), which has been shown to outperform other decompositional methods in both speed and accuracy. Additionally, we benchmark against the standalone column generation method (Dash et al., 2018) trained on the true labels $y$ to show the benefit of applying it as an intermediate model in both pedagogical and decompositional settings. We run all baselines and our models on five different real-world and synthetic classification datasets, showing scalability and adaptability to different numbers of samples, features, and class imbalances (Appendix A.1).

We run all experiments on five different random folds, which initialise the train-test splits of the data, the random initialisation of the DNN, and the random inputs of the baselines. All experiments were run on a 2020 MacBook Pro with a 2 GHz Intel i5 processor and 16 GB of RAM. For the baselines, we use the open-source implementations published in conjunction with RIPPER, BRS, and ECLAIRE, running hyperparameter search for best results as set out in the respective papers. For comparability, we use the same DNN topology (number and depth of layers) as used in the experiments of Espinosa et al. (2021). For hyperparameter optimisation of the DNN, we use the Keras implementation of the Hyperband algorithm (Li et al., 2018) to search for the optimal learning rate, hidden and output layer activations, batch normalisation, dropout, and L2 regularisation. The CGX implementation uses the MOSEK solver (Andersen & Andersen, 2000) as its cvxpy backend. All the code required to reproduce the experimental results will be made available on GitHub after review.
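For reference, this kind of search can be set up with the Keras Tuner Hyperband interface roughly as follows (a sketch only; it does not reproduce the paper's exact search space or topology):

```python
import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(
        64, activation=hp.Choice("hidden_activation", ["relu", "tanh"])))
    if hp.Boolean("dropout"):
        model.add(tf.keras.layers.Dropout(0.3))
    model.add(tf.keras.layers.Dense(2, activation="softmax"))
    model.compile(
        optimizer=tf.keras.optimizers.Adam(
            hp.Float("learning_rate", 1e-4, 1e-2, sampling="log")),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"])
    return model

tuner = kt.Hyperband(build_model, objective="val_accuracy", max_epochs=30)
# tuner.search(X_train, y_train, validation_split=0.2)   # X_train, y_train: your data
```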
## 5 RESULTS

Performance alignment (Q1.1) The primary objective of performance alignment is the fidelity between the predictions of the rule set ${\widehat{y}}_{R}$ and the model predictions ${\widehat{y}}_{DNN}$, since we want an explanation model that mimics the DNN's behaviour as closely as possible. The results in Table 1 show that CGX-ped has an approximately 1-2% higher fidelity than the baseline methods on most datasets whilst having significantly fewer rules. While RIPPER has a slightly higher fidelity on the MAGIC dataset, both CGX-ped and CGX-dec achieve competitive performance whilst only requiring $5\%$ of the rules. The table also shows that a high fidelity does not guarantee a high accuracy on the overall task, which is visible on the FICO dataset: while CGX achieves a very high fidelity on this task, the overall accuracy is relatively low because the underlying DNN itself struggles to perform well. Notably, the performance of CGX-dec and CGX-ped is equivalent on the XOR dataset, indicating that there were no rules to add from the intermediate layers. This is because XOR is a relatively simple synthetic dataset, where the pedagogical version already identifies nearly the exact thresholds that were used to generate the target (see Figure 2(c)).

Feature alignment (Q1.2) Going beyond fidelity, the feature alignment score in Figure 2(a) shows the mean RBO $\varphi$ between the feature importance derived from the CGX rule set and the aggregated importance of local methods (SHAP and LIME) applied to the original DNN. A higher score indicates that the two ranked lists are more aligned and, as such, that the DNN and the rule-based surrogate model rely on the same features for their explanations. Figure 2(a) compares the decompositional CGX-dec method to the best-performing decompositional baseline (ECLAIRE) and shows that CGX-dec achieves a higher feature alignment across all datasets.

Complexity (Q2) Table 1 shows that both the pedagogical and decompositional methods achieve highly competitive results with only a fraction of the rules required. Compared to pedagogical baselines, CGX-ped outperforms on the majority of the tasks. While the pedagogical BRS baseline produces fewer rules for some datasets (FICO and MB-HIST), its total number of clauses is more than double that of CGX across all datasets, due to the longer chained rules produced by this method. Additionally, the BRS fidelity is not competitive with CGX-ped or CGX-dec. Looking at ECLAIRE as our decompositional baseline, the results show that CGX-dec only requires a fraction of the clauses compared to ECLAIRE. In the case of the MAGIC dataset, ECLAIRE required $> {100}\mathrm{x}$ more rules than our method, while for other datasets the multiple ranges from 10-20x.

Table 1: Overview of CGX-ped and CGX-dec performance alignment (fidelity) and complexity (# clauses) compared to the baselines across datasets. CGX-ped outperforms all baselines across the majority of tasks. While RIPPER has a slightly higher fidelity on the MAGIC dataset, CGX only requires $\sim 5\%$ of the clauses.
<table><tr><td>DATASET</td><td>Model</td><td>RULE FID.</td><td>RULE ACC.</td><td>#RULES</td><td>#CLAUSES</td></tr><tr><td rowspan="7">XOR</td><td>CG (STANDALONE)</td><td>${78.0} \pm {16.8}$</td><td>${81.1} \pm {18.5}$</td><td>${5.2} \pm {1.9}$</td><td>${21.6} \pm {12.7}$</td></tr><tr><td>Ripper (PED)</td><td>${53.5} \pm {3.9}$</td><td>${53.8} \pm {4.0}$</td><td>${7.4} \pm {3.6}$</td><td>${14.4} \pm {7.5}$</td></tr><tr><td>BRS (PED)</td><td>${91.3} \pm {2.0}$</td><td>${95.5} \pm {1.3}$</td><td>${9.0} \pm {0.3}$</td><td>${80.9} \pm {3.0}$</td></tr><tr><td>C5 (PED)</td><td>${53.0} \pm {0.2}$</td><td>${52.6} \pm {0.2}$</td><td>$1 \pm 0$</td><td>$1 \pm 0$</td></tr><tr><td>ECLAIRE (DEC)</td><td>${91.4} \pm {2.4}$</td><td>${91.8} \pm {2.6}$</td><td>${87} \pm {16.2}$</td><td>${263} \pm {49.1}$</td></tr><tr><td>CGX-PED (OURS)</td><td>${92.4} \pm {1.1}$</td><td>$\mathbf{{96.7} \pm {1.7}}$</td><td>3.6 ± 1.8</td><td>${10.4} \pm {7.2}$</td></tr><tr><td>CGX-DEC (OURS)</td><td>$\mathbf{{92.4} \pm {1.1}}$</td><td>$\mathbf{{96.7} \pm {1.7}}$</td><td>${3.6} \pm {1.8}$</td><td>${10.4} \pm {7.2}$</td></tr><tr><td rowspan="7">MAGIC</td><td>CG (STANDALONE)</td><td>${85.7} \pm {2.5}$</td><td>${82.7} \pm {0.3}$</td><td>${5.2} \pm {0.8}$</td><td>${13.0} \pm {2.4}$</td></tr><tr><td>RIPPER (PED)</td><td>$\mathbf{{91.9} \pm {0.9}}$</td><td>${81.7} \pm {0.5}$</td><td>${152.2} \pm {14.6}$</td><td>${462.8} \pm {53.5}$</td></tr><tr><td>BRS (PED)</td><td>${84.6} \pm {2.1}$</td><td>${79.3} \pm {1.3}$</td><td>${5.8} \pm {0.3}$</td><td>${24.1} \pm {4.8}$</td></tr><tr><td>C5 (PED)</td><td>${85.4} \pm {2.5}$</td><td>${82.8} \pm {0.9}$</td><td>${57.8} \pm {4.5}$</td><td>${208.7} \pm {37.6}$</td></tr><tr><td>ECLAIRE (DEC)</td><td>${87.4} \pm {1.2}$</td><td>${84.6} \pm {0.5}$</td><td>${392.2} \pm {73.9}$</td><td>${1513.4} \pm {317.8}$</td></tr><tr><td>CGX-PED (OURS)</td><td>90.4 ± 1.7</td><td>${80.6} \pm {0.6}$</td><td>$\mathbf{{5.0} \pm {0.7}}$</td><td>${11.6} \pm {1.9}$</td></tr><tr><td>CGX-DEC (OURS)</td><td>${91.5} \pm {1.3}$</td><td>${84.4} \pm {0.8}$</td><td>${7.4} \pm {0.8}$</td><td>${11.6} \pm {1.9}$</td></tr><tr><td rowspan="7">MB-ER</td><td>CG (STANDALONE)</td><td>${92.1} \pm {1.1}$</td><td>${92.0} \pm {1.1}$</td><td>${5.0} \pm {0.7}$</td><td>${15.4} \pm {2.2}$</td></tr><tr><td>Ripper (PED)</td><td>${86.5} \pm {2.2}$</td><td>${85.2} \pm {3.0}$</td><td>${22.0} \pm {9.2}$</td><td>${30.2} \pm {21.6}$</td></tr><tr><td>BRS (PED)</td><td>90.9 ± 1.2</td><td>${88.4} \pm {0.9}$</td><td>${8.9} \pm {1.1}$</td><td>${57.6} \pm {18.5}$</td></tr><tr><td>C5 (PED)</td><td>${89.3} \pm 1$</td><td>${92.7} \pm {0.9}$</td><td>${21.8} \pm 3$</td><td>${72.4} \pm {14.5}$</td></tr><tr><td>ECLAIRE (DEC)</td><td>$\mathbf{{94.7} \pm {0.2}}$</td><td>$\mathbf{{94.1} \pm {1.6}}$</td><td>${48.3} \pm {15.3}$</td><td>137.6 ± 24.7</td></tr><tr><td>CGX-PED (OURS)</td><td>${93.7} \pm {1.1}$</td><td>${92.0} \pm {0.9}$</td><td>${4.2} \pm {0.4}$</td><td>$\mathbf{{17.0} \pm {1.9}}$</td></tr><tr><td>CGX-DEC (OURS)</td><td>$\mathbf{{94.7} \pm {0.9}}$</td><td>${92.4} \pm {0.7}$</td><td>${5.9} \pm {1.1}$</td><td>${21.8} \pm {3.4}$</td></tr><tr><td rowspan="7">MB-HIST</td><td>CG (STANDALONE)</td><td>${88.5} \pm {2.3}$</td><td>${91.1} \pm {1.4}$</td><td>${4.0} \pm {0.7}$</td><td>${19.4} \pm {2.4}$</td></tr><tr><td>RIPPER (PED)</td><td>${86.7} \pm {3.7}$</td><td>${88.1} \pm {3.3}$</td><td>${13.8} \pm {3.4}$</td><td>${35.0} \pm {11.6}$</td></tr><tr><td>BRS (PED)</td><td>${81.7} \pm {2.1}$</td><td>${79.9} \pm {2.5}$</td><td>${5.1} \pm {0.2}$</td><td>${40.3} \pm 
{5.8}$</td></tr><tr><td>C5 (PED)</td><td>${89.3} \pm 1$</td><td>${87.9} \pm {0.9}$</td><td>${12.8} \pm {3.1}$</td><td>${35.2} \pm {11.3}$</td></tr><tr><td>ECLAIRE (DEC)</td><td>${89.4} \pm {1.8}$</td><td>${88.9} \pm {2.3}$</td><td>${30} \pm {12.4}$</td><td>${74.7} \pm {15.7}$</td></tr><tr><td>CGX-PED (OURS)</td><td>${89.1} \pm {3.6}$</td><td>${89.4} \pm {2.5}$</td><td>${5.2} \pm {1.9}$</td><td>$\mathbf{{27.8} \pm {7.6}}$</td></tr><tr><td>CGX-DEC (OURS)</td><td>$\mathbf{{89.6} \pm {3.6}}$</td><td>$\mathbf{{90.2} \pm {2.5}}$</td><td>${6.8} \pm {2.0}$</td><td>${32.2} \pm {8.3}$</td></tr><tr><td rowspan="7">FICO</td><td>CG (STANDALONE)</td><td>${86.4} \pm {2.8}$</td><td>70.6 ± 0.4</td><td>${3.3} \pm {1.1}$</td><td>${8.6} \pm {3.6}$</td></tr><tr><td>Ripper (PED)</td><td>${88.8} \pm {2.8}$</td><td>${70.2} \pm {1.0}$</td><td>99.2 ± 14.5</td><td>307.4 ± 41.6</td></tr><tr><td>BRS (PED)</td><td>${84.8} \pm {2.3}$</td><td>${65.4} \pm {2.1}$</td><td>${3.1} \pm {0.2}$</td><td>${18} \pm {3.2}$</td></tr><tr><td>C5 (PED)</td><td>${72.7} \pm {2.1}$</td><td>${81.8} \pm {1.6}$</td><td>${34.8} \pm {4.1}$</td><td>${125.6} \pm {35.2}$</td></tr><tr><td>ECLAIRE (DEC)</td><td>${66.5} \pm {2.5}$</td><td>$\mathbf{{84.9} \pm {1.7}}$</td><td>${161.0} \pm {12.3}$</td><td>${298.0} \pm {21.2}$</td></tr><tr><td>CGX-PED (OURS)</td><td>${91.1} \pm {0.1}$</td><td>${70.5} \pm {0.8}$</td><td>${3.6} \pm {1.1}$</td><td>${9.6} \pm {3.6}$</td></tr><tr><td>CGX-DEC (OURS)</td><td>$\mathbf{{92.4} \pm {0.2}}$</td><td>${71.4} \pm 1$</td><td>${5.1} \pm {1.3}$</td><td>${13.4} \pm {2.1}$</td></tr></table>
Stability (Q3) Figure 2(c) shows that CGX (both versions) produces identical explanations when only the explainability method is re-run with a different random seed, keeping the data folds and the random seed of the DNN fixed. We observe that CGX produces the exact same rule set on repeated runs, while our decompositional baseline produces different explanations, which can be confusing to users. Note that this stability is different from the standard deviation shown in Table 1, where we would expect variation from different splits of the data and random initialisations of the DNN.

Value of decomposition (Q4) We acknowledge that the time complexity of decompositional methods scales linearly with the number of layers, which makes the pedagogical CGX-ped implementation an attractive alternative for very deep network topologies. To help decide whether to use pedagogical or decompositional methods, we looked at how much the information from the DNN's latent space (lines 3-10 in Algorithm 1) improves the pedagogical solution (line 2 in Algorithm 1). Figure 2(b) shows that the performance gained from the information of the hidden layers is related to the difficulty of the task. For "easy" tasks (i.e., those where the DNN has a high accuracy/AUC, such as the XOR task), CGX-ped and CGX-dec converge to the same solution, since no rules from the hidden layers increase the fidelity. As the task becomes more difficult, the performance difference increases: for the FICO task, where the DNN accuracy is only just over 70%, the surrogate model gains the most information from the hidden layers.

Figure 2: Overview of CGX performance with respect to alignment (a), task difficulty (b), and stability (c). Subfigure (a) shows the mean Ranked Biased Overlap of CGX compared to ECLAIRE, indicating that CGX's rule sets have higher feature alignment. Subfigure (b) compares task difficulty (DNN prediction error, x-axis) with the incremental fidelity improvement (y-axis) of CGX-dec over CGX-ped; as tasks get more difficult, CGX-dec adds relatively more fidelity compared to CGX-ped. Subfigure (c) shows that CGX has exact reproducibility for the same underlying model.
## 6 DISCUSSION

This paper introduces a global decompositional method that uses column generation as its intermediate model. We improve rule-based explanations through intermediate error predictions from the latent space of a DNN, coupled with layer-wise substitution to reduce error propagation. CGX enables researchers and practitioners to customise surrogate explanations for different end users by parameterising the accuracy-explainability trade-off. First, we introduced a quantitative measure to analyse the feature alignment between the surrogate model and local explanations of the DNN, and showed that our surrogate model explanations are more closely aligned with other local explanation methods of the original model. Second, the design of the objective function allows assigning a higher cost to surrogate model complexity (i.e., the number of clauses) using an extra hyperparameter. We demonstrate that this achieves significantly lower complexity and enables users to control the accuracy-interpretability trade-off by setting higher or lower penalties on the number of rules. Third, the results show that CGX is independent of its initialisation (the solution to the master linear program), which leads to improved stability compared to methods using tree induction for rule extraction. Additionally, CGX requires fewer hyperparameters than tree-based algorithms such as C5.0, and hence requires less fine-tuning to achieve competitive results. While CGX introduces the $\lambda$ hyperparameters to let users control the length of the resulting rule set, it is also possible to run the solver unconstrained. Beyond these benefits, rule-based surrogate models enable intervenability: end users can amend the rule set to encode further domain knowledge.

The key limitation of CGX, and of decompositional methods more generally, is that the runtime is highly dependent on the number of hidden DNN layers and the number of features (columns) in $X$. We mitigate this problem by showing that CGX-ped is a highly competitive alternative, especially for simple tasks. For more difficult tasks, however, the decompositional method still delivers better explanations with higher fidelity. The implementation will be open-sourced as a pip-installable Python package.
## ACKNOWLEDGMENTS

Acknowledgements here in camera-ready version

## REFERENCES
Amina Adadi and Mohammed Berrada. Peeking inside the black-box: a survey on explainable artificial intelligence (xai). IEEE Access, 6:52138-52160, 2018.

Erling D Andersen and Knud D Andersen. The mosek interior point optimizer for linear programming: an implementation of the homogeneous algorithm. In High Performance Optimization, pp. 197-232. Springer, 2000.

Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, et al. Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai. Information Fusion, 58:82-115, 2020.

RK Bock, A Chilingarian, M Gaug, F Hakl, Th Hengstebeck, M Jiřina, J Klaschka, E Kotrč, P Savický, S Towers, et al. Methods for multidimensional event classification: a case study using images from a cherenkov gamma-ray telescope. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 516(2-3):511-528, 2004.

William W. Cohen. Fast effective rule induction. Machine Learning Proceedings 1995, pp. 115-123, 1995. doi: 10.1016/B978-1-55860-377-6.50023-2.

Roberto Confalonieri, Tillman Weyde, Tarek R Besold, and Fermín Moscoso del Prado Martín. Trepan reloaded: A knowledge-driven approach to explaining artificial neural networks. 2020.

Victor Contreras, Niccolo Marini, Lora Fanda, Gaetano Manzo, Yazan Mualla, Jean Paul Calbimonte, Michael Schumacher, and Davide Calvaresi. A DEXiRE for extracting propositional rules from neural networks via binarization. Electronics (Switzerland), 11, 2022. ISSN 20799292. doi: 10.3390/ELECTRONICS11244171.

Sanjeeb Dash, Oktay Gunluk, and Dennis Wei. Boolean decision rules via column generation. Advances in Neural Information Processing Systems, 31, 2018.

Mateo Espinosa, Zohreh Shams, and Mateja Jamnik. Efficient decompositional rule extraction for deep neural networks. In XAI Debugging Workshop @ NEURIPS 2021, 2021.

Luciano Floridi. Establishing the rules for building trustworthy ai. Nature Machine Intelligence, 1(6):261-262, 2019.

Sheikh Rabiul Islam, William Eberle, Sheikh Khaled Ghafoor, and Mohiuddin Ahmed. Explainable artificial intelligence approaches: A survey. arXiv preprint arXiv:2101.09429, 2021.

Himabindu Lakkaraju, Stephen H Bach, and Jure Leskovec. Interpretable decision sets: A joint framework for description and prediction. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1675-1684, 2016.

Lisha Li, Kevin Jamieson, Afshin Rostamizadeh, and Ameet Talwalkar. Hyperband: A novel bandit-based approach to hyperparameter optimization. Journal of Machine Learning Research, 18:1-52, 2018. URL http://jmlr.org/papers/v18/16-558.html.

W James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, and Bin Yu. Interpretable machine learning: definitions, methods, and applications. arXiv preprint arXiv:1901.04592, 2019.

Rutvija Pandya and Jayati Pandya. C5. 0 algorithm to improved decision tree with feature selection and reduced error pruning. International Journal of Computer Applications, 117(16):18-21, 2015.

Bernard Pereira, Suet-Feung Chin, Oscar M Rueda, Hans-Kristian Moen Vollan, Elena Provenzano, Helen A Bardwell, Michelle Pugh, Linda Jones, Roslin Russell, Stephen-John Sammut, et al. The somatic mutation profiles of 2,433 breast cancers refine their genomic and transcriptomic landscapes. Nature Communications, 7(1):1-16, 2016.

Emad W Saad and Donald C Wunsch II. Neural network explanation using inversion. Neural Networks, 20(1):78-93, 2007.

Zohreh Shams, Botty Dimanov, Sumaiyah Kola, Nikola Simidjievski, Helena Andres Terre, Paul Scherer, Urska Matjasec, Jean Abraham, Mateja Jamnik, and Pietro Lio. REM: An integrative rule extraction methodology for explainable data analysis in healthcare. bioRxiv, 2021.

Andreas Theodorou, Robert H Wortham, and Joanna J Bryson. Designing and implementing transparency for real time inspection of autonomous robots. Connection Science, 29(3):230-241, 2017.

Tong Wang, Cynthia Rudin, Finale Doshi-Velez, Yimin Liu, Erica Klampfl, and Perry MacNeille. A bayesian framework for learning rule sets for interpretable classification. The Journal of Machine Learning Research, 18(1):2357-2393, 2017.

William Webber, Alistair Moffat, and Justin Zobel. A similarity measure for indefinite rankings. ACM Transactions on Information Systems (TOIS), 28(4):1-38, 2010.

Jan Ruben Zilke, Eneldo Loza Mencía, and Frederik Janssen. DeepRED - rule extraction from deep neural networks. In International Conference on Discovery Science, pp. 457-473. Springer, 2016.
## A DATASETS

### A.1 DATASETS

MAGIC is a particle physics dataset used to simulate the registration of either high-energy gamma particles or background hadron cosmic radiation based on imaging signals from a ground-based atmospheric telescope. It has ${19}\mathrm{k}$ samples with $\sim {35}\%$ in the minority class and 10 features derived from the "shower image" of the pulses left by the incoming photons (Bock et al., 2004).

Metabric-ER predicts the immunohistochemical subtypes of 1980 patients using 1000 features including tumour characteristics, clinical traits, gene expression patterns, and survival rates. In this dataset, $\sim {24}\%$ of patients are Estrogen-Receptor-positive (ER), meaning that these tumours have ERs that allow the tumour to grow.

Metabric-Hist contains 1004 mRNA expressions of 1694 patients to predict two of the most common histological subtypes of breast cancer tumours - Invasive Lobular Carcinoma (ILC) or Invasive Ductal Carcinoma (IDC) - where positive diagnoses make up 8.7% of the samples (Pereira et al., 2016).

XOR is a synthetic dataset used as a common baseline to evaluate the performance of rule extractors. It consists of a supervised dataset with 10 features of the form ${\left( {x}^{\left( i\right) },{y}_{i}\right) }_{i = 1}^{1000}$, where every data point ${x}^{\left( i\right) } \in {\left\lbrack 0,1\right\rbrack }^{10}$ is independently sampled from a uniform distribution. The binary labels ${y}_{i}$ are assigned by XOR-ing the rounded first two dimensions: ${y}_{i} = \operatorname{round}\left( {x}_{1}^{\left( i\right) }\right) \oplus \operatorname{round}\left( {x}_{2}^{\left( i\right) }\right)$.
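The XOR generator can be reproduced in a few lines (a re-implementation of the description above, not the authors' code):

```python
import numpy as np

def make_xor(n=1000, d=10, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n, d))                        # x^(i) ~ U[0, 1]^10
    y = np.round(X[:, 0]).astype(int) ^ np.round(X[:, 1]).astype(int)
    return X, y

X, y = make_xor()
print(X.shape, y.mean())   # (1000, 10) and a roughly balanced label distribution
```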
FICO is a finance dataset originally designed for an explainable ML challenge containing home equity line of credit (HELOC) applications by homeowners with the task of predicting whether the applicant repays their HELOC credit within 2 years. The dataset contains 10,459 applicants with 24 features on the spending habits of each applicant as well as an external risk estimate of each.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/bHbf5-nE8N/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,290 @@
| 1 |
+
§ CGXPLAIN: RULE-BASED DEEP NEURAL NETWORK EXPLANATIONS USING DUAL LINEAR PROGRAMS
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Rule-based surrogate models are an effective and interpretable way to approximate a Deep Neural Network's (DNN) decision boundaries, allowing humans to easily understand deep learning models. Current state-of-the-art decompositional methods, which are those that consider the DNN's latent space to extract more exact rule sets, manage to derive rule sets at high accuracy. However, they a) do not guarantee that the surrogate model has learned from the same variables as the DNN (alignment), b) only allow optimising for a single objective, such as accuracy, which can result in excessively large rule sets (complexity), and c) use decision tree algorithms as intermediate models, which can result in different explanations for the same DNN (stability). This paper introduces Column Generation eXplainer to address these limitations - a decompositional method using dual linear programming to extract rules from the hidden representations of the DNN. This approach allows optimising for any number of objectives and empowers users to tweak the explanation model to their needs. We evaluate our results on a wide variety of tasks and show that CGX meets all three criteria, by having exact reproducibility of the explanation model that guarantees stability and reduces the rule set size by $> {80}\%$ (complexity) at improved accuracy and fidelity across tasks (alignment).
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
In spite of state-of-the-art performance, the opaqueness and lack of explainability of DNNs has impeded their wide adoption in safety-critical domains such as healthcare or clinical decision-making. A promising solution in eXplainable Artificial Intelligence (XAI) research is presented by global rule-based surrogate models, that approximate the decision boundaries of a DNN and represent these boundaries in simple IF-THEN-ELSE rules that make it intuitive for humans to interact with (Zilke et al., 2016; Shams et al., 2021). Surrogate models often use decompositional approaches, which inspect the latent space of a DNN (e.g., its gradients) to improve performance, while pedagogical approaches only utilise the inputs and outputs of the DNN.
|
| 14 |
+
|
| 15 |
+
In pursuit of the most accurate surrogate models, recent literature has primarily focussed on improving the fidelity between the DNN and the surrogate model, which refers to the accuracy of the surrogate model when predicting the DNN’s outputs $\widehat{y}$ instead of the true labels $y$ . While state-of-the-art methods achieve high fidelity (Contreras et al., 2022; Espinosa et al., 2021), there are several qualitative problems with these explanations that hinder their usability in practice and have been mostly neglected in previous studies. First, if features are not fully independent, there is no guarantee that a surrogate model has learned from the same variables as the DNN, meaning that the surrogate model may provide misleading explanations that do not reflect the model's behaviour (alignment). Second, most rule extraction models optimise for the accuracy of the resulting rule set as a single objective, which can result in excessively large rule sets containing thousands of rules, making them impractical to use (complexity). Third, existing decompositional methods use tree induction to extract rules, which tends to be unstable and can result in different explanations for the same DNN, sometimes leading to more confusion than clarification (stability).
|
| 16 |
+
|
| 17 |
+
This paper introduces CGX (Figure 1) - a flexible rule-based decompositional method to explain DNNs at high alignment and stability, requiring only a fraction of the rules compared to current state-of-the-art methods. We combine and extend two recent innovations of decompositional explanations (i.e., using information from the hidden layers of the DNN) (Espinosa et al., 2021) and rule induction literature (i.e., generating boolean rule sets for classification) (Dash et al., 2018).
|
| 18 |
+
|
| 19 |
+
< g r a p h i c s >
|
| 20 |
+
|
| 21 |
+
Figure 1: Overview of the decompositional CGX algorithm, showing the process to get from the DNN as starting point (1) to the explanation model (4) that approximates the DNN's decision boundaries. We 1(a) extract the rule set ${R}_{x \mapsto {\widehat{y}}_{D}}$ by training an intermediate model on the DNN’s predictions, and 1(b) on the error of that initial ruleset for each hidden layer. Our intermediate extraction through Column generation (2) allows optimising for multiple objectives to extract short and concise rule sets. The substitution step (3) rewrites the intermediate rules ${I}_{{h}_{j} \mapsto {\widehat{y}}_{D}}$ in terms of the input variables ${I}_{x \mapsto {\widehat{y}}_{D}}$ and adds them to the surrogate model (4) if they increase its fidelity.
|
| 22 |
+
|
| 23 |
+
First, we suggest a paradigm shift for rule-based surrogate explanations that goes beyond optimising for accuracy as a single objective, allowing users to tailor the explanation to their needs. Concretely, we formulate the objective function of the intermediate model penalises the predictive loss as well as the number of rules and terms as a joint objective. Additionally, CGX allows to easily introduce further objectives. Second, we use a column generation approach as intermediate models, which have proven to be more accurate and stable than tree induction and other rule mining methods. Third, our algorithm introduces intermediate error prediction, where the information of the DNN's hidden layers is used to predict the error of the pedagogical solution (Equation 1). Fourth, we reduce the noise created by adding all rules from the DNN's latent representation by a) conducting direct layer-wise substitution, which reduces error propagation of the recursive substitution step used in prior methods and b) dismisses rules that do not improve the performance of the explanation model. This also reduces the need to choose between decompositional and pedagogical methods, since CGX converges to the pedagogical solution in its worst case performance.
|
| 24 |
+
|
| 25 |
+
§ CONTRIBUTIONS
|
| 26 |
+
|
| 27 |
+
* Quality metrics: We formalise three metrics (alignment, complexity, stability) that surrogate explanations need to achieve to be feasibly applied as an explanation model across datasets.
|
| 28 |
+
|
| 29 |
+
* Alignment: We improve alignment between the original and surrogate models, achieving 1-2% higher fidelity of the rule-based predictions and 10-20% higher Ranked Biased Overlap (RBO) of ranked feature importance representations.
|
| 30 |
+
|
| 31 |
+
* Complexity: We reduce the size of the rule sets used to explain the DNN, achieving rule sets with >80% less terms compared to state-of-the-art decompositional baselines.
|
| 32 |
+
|
| 33 |
+
* Stability: Our explanations are guaranteed to produce identical explanations for the same underlying model.
|
| 34 |
+
|
| 35 |
+
* Decompositional value: We demonstrate that decompositional methods are particularly useful for harder tasks, while pedagogical methods are sufficient for simple tasks.
|
| 36 |
+
|
| 37 |
+
§ 2 RELATED WORK
|
| 38 |
+
|
| 39 |
+
XAI & Rule-based explanations XAI research has the objective of understanding why a machine learning model makes a prediction, as well as how the process behind the prediction works (Arrieta et al., 2020). This helps to increase trustworthiness (Floridi, 2019), identifying causality (Murdoch et al., 2019), as well as establishing confidence (Theodorou et al., 2017), fairness (Theodorou et al., 2017), and accessibility (Adadi & Berrada, 2018) in model predictions. Global explainability methods attempt to learn a representation that applies to every sample in the data, instead of only individual samples or features (local), and then provide a set of generalisable principles, commonly referred to as a surrogate model (Arrieta et al., 2020). Surrogate models can be either pedagogical or decompositional (Islam et al., 2021). Pedagogical methods train an explainable model on the predictions of the DNN $\widehat{y}$ instead of the true labels $y$ , still treating keep treating the DNN as a black-box (Confalonieri et al., 2020; Saad & Wunsch II, 2007). Pedagogical methods have a faster runtime since they ignore the latent space of the DNN, but sacrifice predictive performance (Zilke et al. 2016). Decompositional methods inspect the model weights or gradients and can therefore learn a closer representation of how the model makes a prediction at the expense of runtime.
|
| 40 |
+
|
| 41 |
+
One promising category of global decompositional methods are rule extraction models such as DeepRED (Zilke et al., 2016), REM-D (Shams et al., 2021), ECLAIRE (Espinosa et al., 2021), and DeXIRE (Contreras et al., 2022). These methods learn a set of conjunctive (CNF) or disjunctive normal form (DNF) rules ${\bar{R}}_{x \mapsto \widehat{y}}$ that approximate the neural network’s predictions $\widehat{y}$ (Zilke et al., 2016). Existing decompositional methods often use decision tree algorithms, such as C5.0 (Pandya & Pandya, 2015), for intermediate rule extraction. Thus, they learn rules that represent the relationship between each hidden layer and the DNN predictions ${R}_{{h}_{i} \mapsto \widehat{y}}$ , which are then recursively substituted to be rewritten in terms of the input features as ${R}_{x \mapsto \widehat{y}}$ (Shams et al.,2021). While existing surrogate methods achieve high fidelity, the resulting rule set $R$ is often still too large (thousands of rules) to clarify the model's behaviour in practice. Recent research has attempted to reduce the complexity of rule-based surrogates by running different decision tree algorithms, pruning methods (Shams et al., 2021), or clause-wise substitution (Espinosa et al., 2021). However, existing rule-based surrogate algorithms are heavily dependent on tree-based models used for rule generation. Thus, the performance is significantly sacrificed if the tree depth is too heavily restricted, despite reducing the size of the rule set.
|
| 42 |
+
|
| 43 |
+
Rule induction methods Another approach to explainability is to use explainable-by-design models, one of which are rule-based representations. Many of these methods use rule mining which first produces a set of candidate clauses and then implements a rule selection algorithm which selects or ranks the rules from that search space. The problem with this is that the search space is inherently restricted (Lakkaraju et al., 2016; Wang et al., 2017). Another class of methods, such as RIPPER (Cohen, 1995) construct their rule sets by greedily adding the conjunction that explains most of the remaining data. This approach comes with the problem that the rule sets are not guaranteed to be globally optimal and commonly result in large rule sets. Two popular state-of-the-art rule induction methods that aim to control rule set complexity are Bayesian Rule Sets (BRS) (Wang et al., 2017) and Boolean rules from Column Generation (CG). BRS use probabilistic models with prior parameters to construct small-size DNF rule sets. Column generation uses binarisation and large linear programming techniques to efficiently search over the exponential number of possible clauses, where the rule set size can be restricted with a complexity constraint in the objective function. While all of the above rule induction methods could be used for the rule extraction, we chose CG due to its stability and flexible formulation of the objective function.
|
| 44 |
+
|
| 45 |
+
§ 3 METHODOLOGY
|
| 46 |
+
|
| 47 |
+
§ 3.1 QUALITY METRICS
|
| 48 |
+
|
| 49 |
+
To improve on the shortcomings of existing decompositional methods, we first provide formal definitions to measure alignment, complexity, and stability. We assume an original model $f\left( x\right)$ (DNN) with $i$ hidden layers ${h}_{i}$ and the rule-based surrogate model $g\left( {f\left( x\right) }\right)$ consisting of the rule set ${R}_{x \mapsto \widehat{y}}$ that was extracted using an intermediate model $\psi \left( \cdot \right)$ .
|
| 50 |
+
|
| 51 |
+
We define complexity as the size of the explanation rule set $\left| {R}_{x \mapsto \widehat{y}}\right|$ , expressed as the sum of the number of clauses of all rules in $R$ , i.e., $\min \left| {R}_{x \mapsto \widehat{y}}\right|$ .
|
| 52 |
+
|
| 53 |
+
We measure alignment between ${f}_{x}$ and ${g}_{x}$ in two different ways. First, we look at the performance alignment as fidelity, which measures the predictive accuracy of the model predictions ${\widehat{y}}_{g}$ agains the original model predictions ${\widehat{y}}_{f}$ as ${\mu }_{f,g} = \frac{1}{n}\mathop{\sum }\limits_{1}^{{n - 1}}\left( {{\widehat{y}}_{f} = {\widehat{y}}_{g}}\right)$ . Second, we assess the feature alignment of the resulting explanations. Feature importance is a commonly used to understand which variables a model relies on when making predictions, represented as a ranked list. To validate that ${f}_{x}$ and ${g}_{x}$ are well-aligned, we want to ensure that both models rely on the same input features from $X$ in their predictions. Assuming two ranked lists $S$ and $T$ , we calculate the Ranked Biased Overlap ${\varphi }_{ST}$ (Webber et al.,2010) as $\max \varphi \left( {S,T,p}\right) = \max \left( {1 - p}\right) \mathop{\sum }\limits_{{d = 1}}{p}^{d - 1}{A}_{d}$ , where ${A}_{d}$ is the ratio of list overlap size at depth $d$ and ${w}_{d}$ is the geometric progression ${w}_{d} = \left( {1 - p}\right) {p}^{d - 1}$ , a weight vectors used to calculate the weighted sum of all evaluation depths.
|
| 54 |
+
|
| 55 |
+
Finally, we define stability as rule sets that are identical on repeated calls of the explanation methods with the same underlying model. We run the explanation model ${g}_{x}$ on different seeds $s =$ $\{ 0,1,2\ldots ,j\}$ , where we want to ensure that the rule sets are equivalent as ${R}_{x \mapsto \widehat{y}}\left( {s}_{1}\right) = {R}_{x \mapsto \widehat{y}}\left( {s}_{2}\right)$ .
|
| 56 |
+
|
| 57 |
+
§ 3.2 COLUMN GENERATION AS INTERMEDIATE MODEL
|
| 58 |
+
|
| 59 |
+
We hypothesise that the majority of the complexity, stability, and alignment issues stem from the choice of the intermediate model $\psi \left( \cdot \right)$ in state-of-the-art decompositional methods. We use an adapted version of the column generation solver outlined in Dash et al. (2018). Instead of using $\psi \left( \cdot \right)$ as a standalone model, we will show that the column generation solver is well-suited as intermediate model in decompositional methods instead of commonly used tree-based algorithms such as C4.5/C5.0 Zilke et al. (2016); Shams et al. (2021). We start with the original restricted Master Linear Program which formulates from Dash et al. (2018) the Hamming loss, which counts the number of clauses that have to be removed to classify the incorrect sample correctly. The Hamming loss is bound by an error and complexity constraint. We update the negative reduced cost of the pricing subproblem from (Dash et al.,2018) to include the hyperparameters for the number of clauses $\left( {\lambda }_{0}\right)$ and the number of terms $\left( {\lambda }_{1}\right)$ , which are linked to the complexity constraint as a dual variable. This formulation also makes it simple to add further parameters to the complexity constraint and negative reduced cost (e.g, adding a constraint that penalises rules or clauses for only one particular class).
|
| 60 |
+
|
| 61 |
+
§ 3.3 CG EXPLAINER
|
| 62 |
+
|
| 63 |
+
ECLAIRE outperforms other decompositional methods on fidelity, rule set size, and run time. Using column generation instead of tree induction as the intermediate model $\psi \left( \cdot \right)$ , we reformulate the ECLAIRE algorithm as shown in Algorithm 1 with the core objective of improving the three quality metrics we set out. We introduce two versions of the column generation explainer - a pedagogical (CGX-ped) and a decompositional implementation (CGX-dec).
|
| 64 |
+
|
| 65 |
+
CGX-ped extracts rules from the intermediate model to predict the DNN predictions ${\widehat{y}}_{D}$ . This method ignores the latent space of the DNN, but can still outperform standalone column generation by guidance of the DNN's predictions:
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
{\widehat{y}}_{\text{ ped }} = {R}_{x \mapsto {\widehat{y}}_{D}}\left( X\right) = \psi \left( {X,{\widehat{y}}_{D}}\right) \tag{1}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
CGX-dec (Algorithm 1) introduces three key innovations over other decompositional methods. First, we do not start with an empty rule set, but uses the pedagogical solution (Equation 1) as a starting point (line 2). Second, building on the pedagogical rule set, the algorithm iterates through the hidden layers. To improve on the pedagogical solution at each layer, we run intermediate error prediction by extracting rules by applying the intermediate model $\psi \left( \cdot \right)$ to predict the prediction error of the pedagogical solution $\widehat{e}$ from each hidden layer (line 5). That is, we specifically learn rules that discriminate between false and correct prediction of the current best rule set, therefore resulting in rules that would improve this solution. The final update is the substitution method - previous approaches recursively replace the terms (Shams et al., 2021) or clauses (Espinosa et al., 2021) of the hidden layer ${h}_{j + 1}$ with the terms for each output class from the previous layer ${h}_{j}$ until all hidden rules can be rewritten in terms of the input features $X$ . Since not every hidden layer can be perfectly represented in terms of the input, the substitution step always contains an error which propagates down the layers as the same method is applied recursively. Instead, we use the direct rule substitution step outlined in Algorithm 2. Similar to the CG solver, we first binarise our input features as rule thresholds (line 1). After computing the conjunctions of the candidate rules, we calculate the error
|
| 72 |
+
|
| 73 |
+
Algorithm 1 CGX-dec

Input: DNN ${f}_{\theta }$ with layers $\left\{ {{h}_{0},{h}_{1},\ldots ,{h}_{d + 1}}\right\}$
Input: Labelled training data $X = \left\{ {{x}^{\left( 1\right) },\ldots ,{x}^{\left( N\right) }}\right\}$; $Y = \left\{ {{y}^{\left( 1\right) },\ldots ,{y}^{\left( N\right) }}\right\}$
Output: Rule set ${R}_{x \mapsto \widehat{y}}$

${\widehat{y}}^{\left( 1\right) },\ldots ,{\widehat{y}}^{\left( N\right) } \leftarrow \arg \max \left( {h}_{d + 1}\left( {x}^{\left( 1\right) }\right) \right) ,\ldots ,\arg \max \left( {h}_{d + 1}\left( {x}^{\left( N\right) }\right) \right)$
${R}_{x \mapsto \widehat{y}} \leftarrow \psi \left( {X,\widehat{y}}\right)$
for hidden layer $i = 1,\ldots ,d$ do
&nbsp;&nbsp;${x}^{\prime \left( 1\right) },\ldots ,{x}^{\prime \left( N\right) } \leftarrow {h}_{i}\left( {x}^{\left( 1\right) }\right) ,\ldots ,{h}_{i}\left( {x}^{\left( N\right) }\right)$
&nbsp;&nbsp;$\widehat{e} \leftarrow \left( {\widehat{y}}_{R} \neq y\right)$
&nbsp;&nbsp;${R}_{{h}_{i} \mapsto \widehat{e}} \leftarrow \psi \left( \left\{ \left( {x}^{\prime \left( 1\right) },{\widehat{e}}_{1}\right) ,\ldots ,\left( {x}^{\prime \left( N\right) },{\widehat{e}}_{N}\right) \right\} \right)$
&nbsp;&nbsp;for rule $r \in {R}_{{h}_{i} \mapsto \widehat{e}}$ do
&nbsp;&nbsp;&nbsp;&nbsp;$s \leftarrow$ substitute($r$)
&nbsp;&nbsp;&nbsp;&nbsp;${I}_{x \mapsto \widehat{y}} \leftarrow s \cup {R}_{x \mapsto \widehat{y}}$
&nbsp;&nbsp;&nbsp;&nbsp;if $\widetilde{fid}\left( {{\widehat{y}}_{I},\widehat{y}}\right) > \widetilde{fid}\left( {{\widehat{y}}_{R},\widehat{y}}\right)$ then
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;${R}_{x \mapsto \widehat{y}} \leftarrow {I}_{x \mapsto \widehat{y}}$
&nbsp;&nbsp;&nbsp;&nbsp;end if
&nbsp;&nbsp;end for
end for
return ${R}_{x \mapsto \widehat{y}}$

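To make the control flow of Algorithm 1 concrete, the following Python sketch mirrors it at a high level. The helpers `fit_rules` (the CG solver $\psi$), `substitute` (Algorithm 2), and `get_activations` are hypothetical callables supplied by the caller, and rule sets are assumed to expose `predict`, `union`, and iteration over their rules:

```python
import numpy as np


def fidelity(rule_set, X, y_dnn):
    """Fraction of samples on which the rule set agrees with the DNN."""
    return float(np.mean(rule_set.predict(X) == y_dnn))


def cgx_dec(model, get_activations, fit_rules, substitute, X, y, n_hidden):
    """Sketch of the CGX-dec loop under the assumptions stated above."""
    y_dnn = np.argmax(model.predict(X), axis=1)        # DNN labels ŷ
    rules = fit_rules(X, y_dnn)                        # pedagogical starting point
    for i in range(1, n_hidden + 1):
        X_h = get_activations(model, i, X)             # h_i(x) for all samples
        e_hat = (rules.predict(X) != y).astype(int)    # errors of the current rules
        for r in fit_rules(X_h, e_hat):                # rules over the latent space
            candidate = rules.union(substitute(r, X))  # rewrite r over the inputs
            if fidelity(candidate, X, y_dnn) > fidelity(rules, X, y_dnn):
                rules = candidate                      # keep only if fidelity improves
    return rules
```
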
After computing the conjunctions of the candidate rules, we calculate the error for each candidate and select the candidate clauses with the lowest error relative to the hidden-layer rule's predictions ${\widehat{y}}_{{h}_{ij}}$ (Algorithm 2, line 3). Since the substitution step still incurs an error, some rules contribute more to the performance than others (rules with high errors are likely to decrease predictive performance). Therefore, the last update in Algorithm 1 is that rules resulting from the substitution step are only added to the rule set if they improve the current best solution (lines 9 & 10).

§ 4 EXPERIMENTS
Given the alignment, complexity, and stability shortcomings of existing methods, we design computational experiments to answer the following research questions:
* Q1.1 Performance alignment: Does the proven higher performance of column generation rule sets lead to higher fidelity with the DNN?

* Q1.2 Feature alignment: How well do aggregate measures such as feature importance from the rule set align with local explanation methods of the DNN?

* Q2 Complexity: Can we control the trade-off between explainability (i.e., low complexity) and accuracy by optimising for a joint objective?

* Q3 Stability: Do multiple runs of our method produce the same rule set for the same underlying model?

* Q4 Decompositional value: Is the performance gain of decompositional methods worth the higher time complexity compared to pedagogical methods?

Algorithm 2 Direct rule substitution

Input: rule ${r}_{{h}_{ij} \mapsto \widehat{y}}$
Input: Training data $X = \left\{ {{x}^{\left( 1\right) },\ldots ,{x}^{\left( N\right) }}\right\}$
Hyperparameter: number of rule candidate combinations $k$
Output: substituted rule(s) ${r}_{x \mapsto \widehat{y}}$

${X}_{\text{bin}} \leftarrow$ BinarizeFeatures$\left( X, \text{bins}\right)$
${r}_{\text{cand}} \leftarrow$ ComputeConjunctions$\left( {k,{X}_{\text{bin}}}\right)$
$\mathit{Errors}_{{r}_{\text{cand}}} \leftarrow 1 - \frac{1}{N}\sum_{n = 1}^{N}\mathbb{1}\left( {\widehat{y}}_{{h}_{ij}}^{\left( n\right) } = {\widehat{y}}_{{r}_{\text{cand}}}^{\left( n\right) }\right)$
${r}_{x \mapsto \widehat{y}} \leftarrow \arg \min \left( \mathit{Errors}_{{r}_{\text{cand}}}\right)$
return ${r}_{x \mapsto \widehat{y}}$

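A brute-force version of this substitution step is easy to sketch. Assuming the input features have already been binarised into a boolean matrix `X_bin` (one column per threshold) and `y_rule` holds the truth values of the hidden-layer rule on the training set, the following hypothetical helper enumerates conjunctions of up to `k` thresholds and returns the one that disagrees with the hidden rule least often:

```python
import numpy as np
from itertools import combinations


def direct_substitution(y_rule, X_bin, k=2):
    """Pick the conjunction of up to k binarised input features whose
    predictions best match y_rule (illustrative sketch of Algorithm 2)."""
    n_samples, n_features = X_bin.shape
    best_cols, best_err = None, 1.0
    for size in range(1, k + 1):
        for cols in combinations(range(n_features), size):
            pred = X_bin[:, list(cols)].all(axis=1)  # conjunction of thresholds
            err = 1.0 - np.mean(pred == y_rule)      # 1 - (1/N) Σ 1(ŷ_h = ŷ_cand)
            if err < best_err:
                best_cols, best_err = cols, err
    return best_cols, best_err
```

In practice one would prune candidates rather than enumerate every combination, but the error criterion is exactly the one in line 3.
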
§ 4.1 BASELINES & SETUP
We use both pedagogical and decompositional explanation baselines in our experiments. For pedagogical baselines, we re-purpose state-of-the-art rule induction and decision tree methods to be trained on the DNN predictions $\widehat{y}$ instead of the true labels $y$. Concretely, we use the C5.0 decision tree algorithm (Pandya & Pandya, 2015), Bayesian Rule Sets (Wang et al., 2017), and RIPPER (Cohen, 1995). As a decompositional baseline, we implement the ECLAIRE algorithm as described in Espinosa et al. (2021), which has been shown to outperform other decompositional methods in both speed and accuracy. Additionally, we benchmark against the standalone column generation method (Dash et al., 2018) trained on the true labels $y$ to show the benefit of applying it as an intermediate model in both pedagogical and decompositional settings. We run all baselines and our models on five different real-world and synthetic classification datasets, showing scalability and adaptability to different numbers of samples, features, and class imbalances (Appendix A.1).

We run all experiments on five different random folds to initialise the train-test splits of the data, the random initialisation of the DNN, and the random inputs of the baselines. All experiments were run on a 2020 MacBook Pro with a 2 GHz Intel i5 processor and 16 GB of RAM. For the baselines, we use the open-source implementations published in conjunction with RIPPER, BRS, and ECLAIRE, running hyperparameter search for best results as set out in the respective papers. For comparability, we use the same DNN topology (number and depth of layers) as used in the experiments of Espinosa et al. (2021). For hyperparameter optimisation of the DNN, we use the keras implementation of the Hyperband algorithm (Li et al., 2018) to search for the optimal learning rate, hidden and output layer activations, batch normalisation, dropout, and L2 regularisation. The CGX implementation uses the MOSEK solver (Andersen & Andersen, 2000) as its cvxpy backend. All the code required to reproduce the experimental results will be made available on GitHub after review.

§ 5 RESULTS
Performance alignment (Q1.1) The primary objective of performance alignment is the fidelity between the predictions of the rule set ${\widehat{y}}_{R}$ and the model predictions ${\widehat{y}}_{DNN}$, since we want an explanation model that mimics the DNN's behaviour as closely as possible. The results in Table 1 show that CGX-ped has a higher fidelity than the baseline methods on most datasets, by approximately 1-2%, whilst having significantly fewer rules. While RIPPER has a slightly higher fidelity on the MAGIC dataset, both CGX-ped and CGX-dec achieve competitive performance whilst only requiring $5\%$ of the rules. The table also shows that a high fidelity does not guarantee a high accuracy on the overall task, which is visible on the FICO dataset: while CGX achieves a very high fidelity, the overall accuracy is relatively low because the underlying DNN itself struggles on this task. Notably, the performance of CGX-dec and CGX-ped is equivalent on the XOR dataset, indicating that there were no rules to add from the intermediate layers. This is because XOR is a relatively simple synthetic dataset, where the pedagogical version already identifies nearly the exact thresholds that were used to generate the target (see Figure 2(c)).

Feature alignment (Q1.2) Going beyond fidelity, Figure 2(a) shows the mean Rank-Biased Overlap (RBO) between the feature importance derived from the CGX rule set and the aggregated importance of local methods (SHAP and LIME) applied to the original DNN. A higher score means the two ranked lists are more aligned, i.e., the DNN and the rule-based surrogate model rely on the same features for their explanations. Figure 2(a) compares the decompositional CGX-dec method to the best-performing decompositional baseline (ECLAIRE) and shows that CGX-dec achieves a higher feature alignment across all datasets.

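For reference, one common truncated form of RBO (Webber et al., 2010) can be computed as below; this sketch is our reading of the metric rather than code from the paper, and it is normalised so that identical rankings score 1:

```python
def rbo(ranking_a, ranking_b, p=0.9):
    """Truncated Rank-Biased Overlap between two ranked feature lists;
    p < 1 weights the top of the lists more heavily."""
    depth = min(len(ranking_a), len(ranking_b))
    seen_a, seen_b, score = set(), set(), 0.0
    for d in range(1, depth + 1):
        seen_a.add(ranking_a[d - 1])
        seen_b.add(ranking_b[d - 1])
        score += p ** (d - 1) * len(seen_a & seen_b) / d  # agreement at depth d
    return score * (1 - p) / (1 - p ** depth)


# rbo(a, a) == 1.0 for any ranking a; reversing a ranking lowers the score.
```
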
Complexity (Q2) Table 1 shows that both the pedagogical and the decompositional method achieve highly competitive results with only a fraction of the rules required. Compared to pedagogical baselines, CGX-ped outperforms on the majority of the tasks. While the pedagogical BRS baseline produces fewer rules for some datasets (FICO and MB-HIST), its total number of clauses is more than double that of CGX across all datasets, since this method produces longer chained rules. Additionally, the BRS fidelity is not competitive with CGX-ped or CGX-dec. Looking at ECLAIRE as our decompositional baseline, the results show that CGX-dec only requires a fraction of the clauses. In the case of the MAGIC dataset, ECLAIRE required more than 100x as many rules as our method, while for the other datasets the multiple ranges from 10-20x.

Table 1: Overview of CGX-ped and CGX-dec performance alignment (fidelity) and complexity (# clauses) compared to the baselines across datasets. CGX-ped outperforms all baselines across the majority of tasks. While RIPPER has a slightly higher fidelity on the MAGIC dataset, CGX only requires $\sim 5\%$ of the clauses. Bold marks the best value per dataset.

| Dataset | Model | Rule fid. | Rule acc. | # Rules | # Clauses |
|---|---|---|---|---|---|
| XOR | CG (standalone) | 78.0 ± 16.8 | 81.1 ± 18.5 | 5.2 ± 1.9 | 21.6 ± 12.7 |
| | RIPPER (ped) | 53.5 ± 3.9 | 53.8 ± 4.0 | 7.4 ± 3.6 | 14.4 ± 7.5 |
| | BRS (ped) | 91.3 ± 2.0 | 95.5 ± 1.3 | 9.0 ± 0.3 | 80.9 ± 3.0 |
| | C5 (ped) | 53.0 ± 0.2 | 52.6 ± 0.2 | 1.0 ± 0.0 | 1.0 ± 0.0 |
| | ECLAIRE (dec) | 91.4 ± 2.4 | 91.8 ± 2.6 | 87.0 ± 16.2 | 263.0 ± 49.1 |
| | CGX-ped (ours) | 92.4 ± 1.1 | **96.7 ± 1.7** | 3.6 ± 1.8 | 10.4 ± 7.2 |
| | CGX-dec (ours) | **92.4 ± 1.1** | **96.7 ± 1.7** | 3.6 ± 1.8 | 10.4 ± 7.2 |
| MAGIC | CG (standalone) | 85.7 ± 2.5 | 82.7 ± 0.3 | 5.2 ± 0.8 | 13.0 ± 2.4 |
| | RIPPER (ped) | **91.9 ± 0.9** | 81.7 ± 0.5 | 152.2 ± 14.6 | 462.8 ± 53.5 |
| | BRS (ped) | 84.6 ± 2.1 | 79.3 ± 1.3 | 5.8 ± 0.3 | 24.1 ± 4.8 |
| | C5 (ped) | 85.4 ± 2.5 | 82.8 ± 0.9 | 57.8 ± 4.5 | 208.7 ± 37.6 |
| | ECLAIRE (dec) | 87.4 ± 1.2 | 84.6 ± 0.5 | 392.2 ± 73.9 | 1513.4 ± 317.8 |
| | CGX-ped (ours) | 90.4 ± 1.7 | 80.6 ± 0.6 | **5.0 ± 0.7** | 11.6 ± 1.9 |
| | CGX-dec (ours) | 91.5 ± 1.3 | 84.4 ± 0.8 | 7.4 ± 0.8 | 11.6 ± 1.9 |
| MB-ER | CG (standalone) | 92.1 ± 1.1 | 92.0 ± 1.1 | 5.0 ± 0.7 | 15.4 ± 2.2 |
| | RIPPER (ped) | 86.5 ± 2.2 | 85.2 ± 3.0 | 22.0 ± 9.2 | 30.2 ± 21.6 |
| | BRS (ped) | 90.9 ± 1.2 | 88.4 ± 0.9 | 8.9 ± 1.1 | 57.6 ± 18.5 |
| | C5 (ped) | 89.3 ± 1.0 | 92.7 ± 0.9 | 21.8 ± 3.0 | 72.4 ± 14.5 |
| | ECLAIRE (dec) | **94.7 ± 0.2** | **94.1 ± 1.6** | 48.3 ± 15.3 | 137.6 ± 24.7 |
| | CGX-ped (ours) | 93.7 ± 1.1 | 92.0 ± 0.9 | 4.2 ± 0.4 | **17.0 ± 1.9** |
| | CGX-dec (ours) | **94.7 ± 0.9** | 92.4 ± 0.7 | 5.9 ± 1.1 | 21.8 ± 3.4 |
| MB-HIST | CG (standalone) | 88.5 ± 2.3 | 91.1 ± 1.4 | 4.0 ± 0.7 | 19.4 ± 2.4 |
| | RIPPER (ped) | 86.7 ± 3.7 | 88.1 ± 3.3 | 13.8 ± 3.4 | 35.0 ± 11.6 |
| | BRS (ped) | 81.7 ± 2.1 | 79.9 ± 2.5 | 5.1 ± 0.2 | 40.3 ± 5.8 |
| | C5 (ped) | 89.3 ± 1.0 | 87.9 ± 0.9 | 12.8 ± 3.1 | 35.2 ± 11.3 |
| | ECLAIRE (dec) | 89.4 ± 1.8 | 88.9 ± 2.3 | 30.0 ± 12.4 | 74.7 ± 15.7 |
| | CGX-ped (ours) | 89.1 ± 3.6 | 89.4 ± 2.5 | 5.2 ± 1.9 | **27.8 ± 7.6** |
| | CGX-dec (ours) | **89.6 ± 3.6** | **90.2 ± 2.5** | 6.8 ± 2.0 | 32.2 ± 8.3 |
| FICO | CG (standalone) | 86.4 ± 2.8 | 70.6 ± 0.4 | 3.3 ± 1.1 | 8.6 ± 3.6 |
| | RIPPER (ped) | 88.8 ± 2.8 | 70.2 ± 1.0 | 99.2 ± 14.5 | 307.4 ± 41.6 |
| | BRS (ped) | 84.8 ± 2.3 | 65.4 ± 2.1 | 3.1 ± 0.2 | 18.0 ± 3.2 |
| | C5 (ped) | 72.7 ± 2.1 | 81.8 ± 1.6 | 34.8 ± 4.1 | 125.6 ± 35.2 |
| | ECLAIRE (dec) | 66.5 ± 2.5 | **84.9 ± 1.7** | 161.0 ± 12.3 | 298.0 ± 21.2 |
| | CGX-ped (ours) | 91.1 ± 0.1 | 70.5 ± 0.8 | 3.6 ± 1.1 | 9.6 ± 3.6 |
| | CGX-dec (ours) | **92.4 ± 0.2** | 71.4 ± 1.0 | 5.1 ± 1.3 | 13.4 ± 2.1 |

Stability (Q3) Figure 2(c) shows that CGX (both versions) produces identical explanations when re-running only the explainability method with a different random seed, keeping the data folds and the random seed of the DNN fixed. CGX produces the exact same rule set on repeated runs, while our decompositional baseline produces different explanations, which can be confusing to users. Note that this stability is different from the standard deviation shown in Table 1, where we would expect variation from different splits of the data and random initialisations of the DNN.

Value of decomposition (Q4) We acknowledge that the time complexity of decompositional methods scales linearly with the number of layers, which makes the pedagogical CGX-ped implementation an attractive alternative for very deep network topologies. To help decide between pedagogical and decompositional methods, we examined how much the information from the DNN's latent space (lines 3-10 in Algorithm 1) improves the pedagogical solution (line 2 in Algorithm 1). Figure 2(b) shows that the performance gained from the hidden layers is related to the difficulty of the task. For "easy" tasks (i.e., those where the DNN has a high accuracy/AUC, such as XOR), CGX-ped and CGX-dec converge to the same solution, since no rules from the hidden layers increase the fidelity. As tasks get more difficult, the performance difference increases: on the FICO task, where the DNN accuracy is only just over 70%, the surrogate model gains the most information from the hidden layers.

Figure 2: Overview of CGX performance with respect to alignment (a), task difficulty (b), and stability (c). Subfigure (a) shows the mean Rank-Biased Overlap of CGX compared to ECLAIRE, indicating that CGX's rule sets have higher feature alignment. Subfigure (b) compares task difficulty (DNN prediction error, x-axis) with the incremental fidelity improvement (y-axis) of CGX-dec over CGX-ped: as tasks get more difficult, CGX-dec adds relatively more fidelity. Subfigure (c) shows that CGX reproduces exactly the same rule set for the same underlying model.

§ 6 DISCUSSION
This paper introduces a global decompositional method that uses column generation as its intermediate model. We improve rule-based explanations through intermediate error predictions from the latent space of a DNN, coupled with layer-wise substitution to reduce error propagation. CGX enables research and industry to customise surrogate explanations for different end users by parameterising the accuracy-explainability trade-off. First, we introduced a quantitative measure to analyse the feature alignment between the surrogate model and local explanations of the DNN, and showed that our surrogate model explanations align more closely with local explanation methods of the original model. Second, the design of the objective function allows assigning a higher cost to surrogate model complexity (i.e., the number of clauses) via an extra hyperparameter. We demonstrate that this achieves significantly lower complexity and enables users to control the accuracy-interpretability trade-off by setting higher or lower penalties on the number of rules. Third, the results show that CGX is independent of its initialisation (the solution to the Master Linear Program), which leads to improved stability compared to methods using tree induction for rule extraction. Additionally, CGX requires fewer hyperparameters than tree-based algorithms such as C5, hence requiring less fine-tuning to achieve competitive results. While CGX introduces the $\lambda$ parameter to let users control the length of the resulting rule set, it is also possible to run the solver unconstrained. Beyond these benefits, rule-based surrogate models enable intervenability: end users can amend the rule set to encode further domain knowledge.

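Schematically, the penalised objective behind this trade-off has the following form; this is a sketch implied by the text above, with the exact formulation given by the column generation literature (Dash et al., 2018) rather than stated here:

$$
\min_{R} \; \frac{1}{N}\sum_{n = 1}^{N} \mathbb{1}\!\left[ R\!\left( x^{\left( n\right) }\right) \neq \widehat{y}^{\left( n\right) } \right] \; + \; \lambda \sum_{c \in R} \lvert c \rvert
$$

where the first term is the disagreement (infidelity) of the rule set $R$ with the DNN labels $\widehat{y}$, and the second term charges each clause $c$ for its length, so a larger $\lambda$ buys a simpler rule set at the cost of fidelity.
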
The key limitation of CGX, and of decompositional methods more generally, is that the runtime depends strongly on the number of hidden DNN layers and the number of columns in $X$. We mitigate this problem by showing that CGX-ped is a highly competitive alternative, especially for simple tasks. For more difficult tasks, however, the decompositional method still delivers better explanations with higher fidelity. The implementation will be open-sourced as a pip-installable Python package.

§ ACKNOWLEDGMENTS
Acknowledgements here in camera-ready version
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/flfJ1OwD-FD/Initial_manuscript_md/Initial_manuscript.md
# DO TISSUE SOURCE SITES LEAVE IDENTIFIABLE SIGNATURES IN WHOLE SLIDE IMAGES BEYOND STAINING?

Anonymous authors

Paper under double-blind review

## Abstract

Why can deep learning predictors trained on Whole Slide Images fail to generalize? It is a common theme in Computational Pathology (CPath) to see a high-performing model developed in a research setting experience a large drop in performance when it is eventually deployed to a new clinical environment. One of the major reasons for this is the batch effect that is introduced during the creation of whole slide images, resulting in a domain shift. CPath pipelines try to reduce this effect via stain normalization techniques. However, in this paper, we provide empirical evidence that stain normalization methods do not result in any significant reduction of the batch effect. This is done via clustering analysis of the dataset as well as training weakly-supervised models to predict source sites. This study aims to open up avenues for further research into the effective handling of batch effects, to improve the trustworthiness and generalization of predictive modelling in the Computational Pathology domain.

## 1 INTRODUCTION
Computational Pathology (CPath) is an emerging field which aims to leverage the ever increasing amount of health data to solve complex and clinically relevant problems through the application of machine learning Abels et al. (2019). Areas of interest include, but are not limited to, predicting diagnostic abnormalities associated with cancer, gene mutations Ferrer-Costa et al. (2004); Coudray et al. (2018) or cancer grade Graham et al. (2021), as well as survival prediction of patients Chen et al. (2022). In recent years, deep learning methods that utilise digitised tissue slide images, also known as Whole Slide Images (WSIs), have grown increasingly popular in CPath Yao et al. (2020); Di et al. (2020). Such models have been very successful, with many studies reporting very high performance metrics on a wide range of datasets Sokolova et al. (2006). Nevertheless, when some of these models are applied in a clinical setting they can fail to generalize Foote et al. (2022).

In this work we argue that this is partially a consequence of reliance on stain normalization methods in the WSI pre-processing pipeline, which are not able to truly remove the variability present in images sourced from different hospitals or laboratories. This variability arises during the creation of these WSIs. As cells are transparent, tissue samples must be stained before they are digitised or observed under a microscope so that they can be interpreted visually. The staining reagent and process, as well as the scanner used, can vary across source sites, which often results in inconsistent staining characteristics across WSIs. This site-specific signature results in a batch effect which can be exploited by a deep learning model to produce inflated accuracy values but poor generalization in cases where the stain characteristics are different. Stain normalization approaches aim to remove this variability by reducing the colour and intensity variations present in these images, normalizing them to a standard or base image. However, as stain normalisation usually works in a low-dimensional space, we hypothesise that it fails to remove higher-order site-specific signatures, which can still lead to exploitation of the batch effect and generalization failure under domain shift. This means that stain normalised images look normalised to the human eye when in reality hidden factors, such as those that result from different laboratory protocols, are still present. These factors can skew the learning process by acting as confounding variables, which could lead to an overestimation of a model's true performance on a given task and subsequently be the cause of poor generalisation in clinical deployment.

Figure 1: Proposed workflow of the experiment. A dataset of WSIs is first pre-processed and non-overlapping patches are obtained. Patches are stain normalised with either Macenko or Reinhard stain normalisation. Both sets of patches are then fed into a ShuffleNet pretrained on ImageNet to extract 1024-dimensional features. The features are used to train a CLAM CNN as well as a kernelized SVM with a precomputed MMD kernel in order to predict source site information.

The technical contribution of this paper is the demonstration of empirical evidence of the presence of these hidden factors in a dataset regardless of what stain normalization technique is applied to it. This is achieved through a carefully designed experiment using different stain normalization schemes as well as two fundamentally different types of predictors. This aspect of the design of computational pathology pipelines is often ignored and, to the best of our knowledge, has not been previously explored with a carefully controlled experiment. The findings in this paper have significant bearing on improving trustworthiness and generalization of machine learning applications in the rapidly emerging area of computational pathology.
## 2 MATERIALS AND METHODS
### 2.1 EXPERIMENT DESIGN STRATEGY
To illustrate that stain normalization methods are unable to effectively remove center-specific batch effects, we designed a simple experiment in which we predict the laboratory of origin (centre) of a whole slide image both before and after stain normalization. We first predict the centre of origin of a whole slide image by modelling this task as a weakly-supervised binary classification problem with the target label being the centre of origin. We then develop a separate weakly-supervised predictor to predict the centre of origin with stain normalized whole slide images as input. The fundamental principle behind this experiment is that if stain normalization is an effective strategy for removing any identifiable signatures of the centre of origin or the underlying batch effect, we should see a substantial decrease in the accuracy of predicting the center after stain normalization. For this purpose, we use multi-centric breast cancer whole slide images (WSIs) from The Cancer Genome Atlas (TCGA) (The Cancer Genome Atlas Network, 2012). To show that the results of our analysis are not specific to a certain type of stain normalization, we utilize two different commonly used stain normalization schemes (Reinhard Reinhard et al. (2001) and Macenko Macenko et al. (2009)). In order to marginalize the effect of the choice of weakly supervised method used for the prediction of the centre of origin, we use two fundamentally different types of predictors (CLAM Lu et al. (2021) and MMD-Kernels Keller et al. (2023)). Below, each component of the experiment design is explained in further detail.

### 2.2 DATASET
1,113 publicly available WSIs of Formalin-Fixed Paraffin-Embedded (FFPE) Hematoxylin and Eosin (H&E) stained tissue sections from 1,084 breast carcinoma patients were collected from The Cancer Genome Atlas (TCGA-BRCA) Hoadley et al. (2018); The Cancer Genome Atlas Network (2012). For some patients multiple WSIs were available, and only the ones with the best visual quality were used. Additionally, WSIs with missing baseline resolution information were ignored. After filtering, 1,051 WSIs remained and were used for the analysis. These WSIs belong to 49 source sites.

### 2.3 PRE-PROCESSING OF WSIS
The quality of WSIs can be negatively affected by artefacts (tissue folds, pen-marking, etc.) originating from histology laboratories. To ensure that models do not exploit these tissue artefacts, the tissue regions of WSIs are segmented using a tissue segmentation method, so that only informative tissue regions remain and artefacts are removed. Since an entire WSI at full resolution can be very large $\left( {{100},{000} \times {100},{000}\text{ pixels}}\right)$ and cannot fit into GPU memory, each WSI is tiled into patches of size ${512} \times {512}$ at a spatial resolution of 0.50 microns-per-pixel (MPP). Tiles that capture less than ${40}\%$ informative tissue area (pixels with mean intensity greater than 200 are treated as non-informative background) are filtered out.

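A minimal sketch of this tiling and filtering step is shown below; the 40% threshold and the intensity-200 background cut-off follow the text, while the function names and the in-memory `region` array are illustrative assumptions:

```python
import numpy as np


def informative_fraction(tile):
    """Fraction of pixels that look like tissue; pixels brighter than 200
    (on the 0-255 scale) are treated as non-informative background."""
    gray = tile.mean(axis=-1)  # tile: (512, 512, 3) uint8 RGB patch
    return float((gray <= 200).mean())


def tile_region(region, size=512, keep_threshold=0.40):
    """Split a (H, W, 3) tissue region into non-overlapping size x size
    tiles and keep those with at least 40% informative tissue."""
    height, width = region.shape[:2]
    tiles = []
    for y in range(0, height - size + 1, size):
        for x in range(0, width - size + 1, size):
            tile = region[y:y + size, x:x + size]
            if informative_fraction(tile) >= keep_threshold:
                tiles.append(tile)
    return tiles
```
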
### 2.4 TISSUE STAINING AND STAIN NORMALIZATION METHODS
Histology images are acquired by staining a tissue specimen with a dye that shows variable affinities to different tissue components. In routine Hematoxylin and Eosin (H&E) staining, nuclei are stained with Hematoxylin and are highlighted in a bluish color, while cytoplasm and extracellular matrix are stained with Eosin and appear pinkish Fischer et al. (2008). However, variations in staining protocols, the characteristics of the dye, the duration for which the dyes are applied, tissue type and thickness, scanner characteristics, and a number of other factors can impact the stain characteristics of the tissue, resulting in center-specific confounding factors which are not at all related to any underlying pathology. These constitute a batch effect that can leave a centre-specific signature in the tissue image and affect the generalization performance of any machine learning method.

One way of addressing such variations is stain normalization, using methods such as the ones proposed by Reinhard Reinhard et al. (2001) and Macenko Macenko et al. (2009). These stain normalization methods map the color style of a source image to a target image Khan et al. (2014) while preserving the cellular and morphometric information contained in the images.

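As an illustration of how such colour mapping works, the following sketch matches per-channel means and standard deviations in a perceptual colour space, in the spirit of Reinhard et al. (2001); note that it uses CIELAB as a stand-in for the lαβ space of the original paper:

```python
import numpy as np
from skimage import color


def reinhard_normalize(src_rgb, tgt_rgb):
    """Map the colour statistics of src_rgb onto those of tgt_rgb.
    Both inputs are float RGB arrays with values in [0, 1]."""
    src = color.rgb2lab(src_rgb)
    tgt = color.rgb2lab(tgt_rgb)
    out = np.empty_like(src)
    for c in range(3):  # match mean and std per channel
        s_mu, s_sd = src[..., c].mean(), src[..., c].std()
        t_mu, t_sd = tgt[..., c].mean(), tgt[..., c].std()
        out[..., c] = (src[..., c] - s_mu) / (s_sd + 1e-8) * t_sd + t_mu
    return np.clip(color.lab2rgb(out), 0.0, 1.0)
```
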
### 2.5 SOURCE SITE PREDICTION
We hypothesise that most stain normalization methods make the images look similar, but that even after stain normalization the histology laboratory from which a tissue specimen originates can still be predicted. More specifically, we argue that stain normalization methods are not likely to make a computational pathology algorithm generalize under domain shift, as these methods cannot completely eliminate the stain-specific information of the source site. To illustrate this, we used stain-normalized and non-stain-normalized images and tried to predict the tissue source site as the target variable. The hypothesis is that if stain normalization removes the site-specific information, then the tissue source site should be predictable with far lower accuracy from stain-normalized images than from non-stain-normalized images.

We demonstrate the predictability of the tissue source site from stain-normalized and non-stain-normalized images using a multiple instance learning method as well as a kernel-based method. As the multiple instance learning method, we used Clustering-constrained Attention Multiple Instance Learning (CLAM), a weakly-supervised method that has shown promising performance in several computational pathology tasks Lu et al. (2021). CLAM considers each WSI as a bag of patches and uses an attention-based pooling function to obtain a slide-level representation from the patch-level representations. As the second predictive model, we used a recently published support vector machine (SVM) based classification method that constructs a whole-slide-image-level kernel matrix using the Maximum Mean Discrepancy (MMD) over ShuffleNet-derived feature representations of the patches in whole slide images Keller et al. (2023). In recent work, this method has been shown to have strong predictive power for TP53 mutation prediction and survival analysis from whole slide images. Note that the two methods have fundamentally different principles of operation, so that any subsequent findings can be understood in a broad context, independent of the specific nature of the predictive model being used. As it is not the goal of this work to present these specific predictors, the interested reader is referred to their original publications for further details.

We evaluated the performance of both methods in predicting the tissue source site using both stain-normalized and non-stain-normalized data. The experiments were performed using stratified five-fold cross validation. For each source site we train a separate model using a one-vs-rest approach, in which all tissue images of patients originating from a given source site $L$ are labelled as 1, while the rest are labelled as 0. We then train the predictive model to predict the source site of each WSI. In order to make meaningful comparisons, we restricted our analysis to the prediction of 8 source sites, each with 50 or more images.

Performance evaluation was done via 5-fold cross validation stratified with respect to $L$. Hyper-parameters were selected using a validation set (${30}\%$ of each training fold). The average Area Under the Receiver Operating Characteristic curve (AUC-ROC) across the 5 folds, along with its standard deviation, was used as the performance metric.

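For the kernel-based predictor, this evaluation loop can be sketched as follows; the `clf_factory` callable and the precomputed kernel `K` are illustrative assumptions (an `SVC(kernel='precomputed')` would fit the description):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score


def site_auc(K, sites, target_site, clf_factory, n_splits=5, seed=0):
    """One-vs-rest AUC-ROC for one source site over stratified k-fold CV,
    given a precomputed (n, n) WSI-level kernel matrix K."""
    y = (np.asarray(sites) == target_site).astype(int)  # 1 for the chosen site
    folds = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in folds.split(K, y):
        clf = clf_factory()
        clf.fit(K[np.ix_(train_idx, train_idx)], y[train_idx])
        scores = clf.decision_function(K[np.ix_(test_idx, train_idx)])
        aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs)), float(np.std(aucs))
```
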
### 2.6 SIMILARITY KERNEL AND CLUSTERING ANALYSIS
In order to further understand the implications of stain normalization at a dataset level, we performed hierarchical clustering over the whole-slide-image maximum mean discrepancy kernel matrix for the whole dataset. The matrix shows the degree of pairwise similarity between whole slide images. We show the kernel matrices both before and after stain normalization, together with the clustering. If stain normalization were effective at removing information about the center, we would expect that clustering done after stain normalization would no longer group WSIs from the same center into the same cluster. This serves as an additional unsupervised analysis of whether stain normalization is able to remove center-specific information.

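A minimal sketch of such a slide-level MMD kernel is given below; the RBF base kernel and the `gamma`/`sigma` hyperparameters are illustrative assumptions, not the exact construction of Keller et al. (2023):

```python
import numpy as np


def rbf(A, B, gamma=1.0):
    """RBF kernel matrix between patch-feature arrays A (m, d) and B (n, d)."""
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * np.maximum(sq, 0.0))


def mmd2(A, B, gamma=1.0):
    """Squared Maximum Mean Discrepancy (biased estimate) between two bags."""
    return rbf(A, A, gamma).mean() + rbf(B, B, gamma).mean() \
        - 2.0 * rbf(A, B, gamma).mean()


def slide_kernel(bags, gamma=1.0, sigma=1.0):
    """WSI-level similarity: K[i, j] = exp(-MMD^2(bag_i, bag_j) / sigma)."""
    n = len(bags)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = np.exp(-mmd2(bags[i], bags[j], gamma) / sigma)
    return K
```
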
## 3 RESULTS AND DISCUSSION
### 3.1 EFFECT OF STAIN NORMALIZATION
Figure 1 shows visual results of stain normalization over patches from a few example whole slide images. The figure clearly shows that patches belonging to different centers look the same after normalization. Nevertheless, the ROC analysis shows that the tissue source site of almost all centers can be predicted with a high AUC-ROC, which supports our hypothesis that stain normalization methods do not remove the source-site information: the images may look alike, but the underlying footprint is still there. Had stain normalization removed the source-site information, we would see an AUC-ROC of 0.5 (random), which is not the case. From this analysis we conclude that the analyzed stain normalization methods are unlikely to make a model robust against domain shift.

### 3.2 PREDICTIVE POWER OVER ORIGINAL DATA
Figure 1 and Tables 1-2 show the results of predicting the source center from the original whole slide images, i.e., without any stain normalization, using two different predictive pipelines (CLAM in Table 1 and the MMD kernel in Table 2). These results show that it is possible to predict the source of a given whole slide image with very high predictive power, as measured using the area under the receiver operating characteristic curve (AUC-ROC), with both methods. This shows that, as expected, there is a significant signature in a whole slide image specific to its laboratory of origin.

Figure 2: Visualisation of the kernels for non-normalized and stain-normalized WSIs along with their respective dendrograms. Two WSIs from each cluster are also displayed.

### 3.3 PREDICTIVE POWER OVER STAIN NORMALIZED DATA
Figure 1 and Tables 1-2 also show the results of predicting the source center from stain-normalized whole slide images, using the two predictive pipelines (CLAM in Table 1 and the MMD kernel in Table 2) and two different stain normalization methods (Reinhard and Macenko). These results show that it is possible to predict the source of a given whole slide image with very high predictive power, as measured by AUC-ROC, with both predictive pipelines even after stain normalization. There is effectively very little change in predictive power as a consequence of stain normalization. This shows that stain normalization alone is not able to remove the site-specific information contained in a whole slide image, and the batch effect still exists even after stain adjustment.

### 3.4 CLUSTERING ANALYSIS
The hierarchically-clustered heatmaps, along with their respective dendrograms for the kernels, are shown in Figure 2. From this figure one can see that both non-normalized and stain-normalized WSIs have a large proportion of brightly coloured regions in their heatmaps, indicating that many slides share similar characteristics. The dataset splits into 4 main clusters, as can be seen in the dendrograms, and slides within the same cluster regularly originate from the same laboratory; for example, the orange cluster contains many slides from laboratory E2 (Roswell Park). This indicates that some hidden site-identification markers are likely to still be present even after normalisation.

### 3.5 CODE AND DATA AVAILABILITY
The code and data used in this paper will be made available on the institutional GitHub upon acceptance of the paper, but have been omitted at this time in line with double-blind review requirements.

Table 1: Comparison of performance of CLAM trained for source site prediction for various stain normalization protocols. Here + indicates WSIs that originated from the chosen site and - indicates WSIs from one of the remaining source sites.
<table><tr><td>Source Site</td><td>$\left( {+, - }\right)$</td><td>Non-normalized AUCROC $\pm$ std</td><td>Reinhard normalized AUCROC $\pm$ std</td><td>Macenko normalized AUCROC $\pm$ std</td></tr><tr><td>University of Pittsburgh (BH)</td><td>(142,903)</td><td>${0.84} \pm {0.04}$</td><td>${0.82} \pm {0.06}$</td><td>${0.86} \pm {0.03}$</td></tr><tr><td>Walter Reed (A2)</td><td>(100,945)</td><td>${0.82} \pm {0.10}$</td><td>${0.73} \pm {0.07}$</td><td>${0.87} \pm {0.07}$</td></tr><tr><td>Roswell Park (E2)</td><td>(90,955)</td><td>${0.96} \pm {0.01}$</td><td>${0.92} \pm {0.02}$</td><td>${0.96} \pm {0.01}$</td></tr><tr><td>Indivumed (A8)</td><td>(74,971)</td><td>${1.00} \pm {0.00}$</td><td>${0.99} \pm {0.00}$</td><td>${1.00} \pm {0.00}$</td></tr><tr><td>Greater Poland Cancer Center (D8)</td><td>(78,967)</td><td>${0.97} \pm {0.03}$</td><td>${0.94} \pm {0.04}$</td><td>${0.97} \pm {0.02}$</td></tr><tr><td>Mayo (AR)</td><td>(69,976)</td><td>${0.98} \pm {0.02}$</td><td>${0.95} \pm {0.04}$</td><td>${0.97} \pm {0.03}$</td></tr><tr><td>Asterand (E9)</td><td>(62,983)</td><td>${0.98} \pm {0.02}$</td><td>${0.98} \pm {0.01}$</td><td>${0.96} \pm {0.04}$</td></tr><tr><td>Duke (B6)</td><td>(50,995)</td><td>${0.97} \pm {0.03}$</td><td>${0.92} \pm {0.04}$</td><td>${0.94} \pm {0.06}$</td></tr><tr><td>Average AUCROC</td><td/><td>${0.94} \pm {0.08}$</td><td>${0.91} \pm {0.09}$</td><td>${0.94} \pm {0.06}$</td></tr></table>

Table 2: Comparison of performance of an SVM trained for source site prediction for various stain normalization protocols. Here + indicates WSIs that originated from the chosen site and - indicates WSIs from one of the remaining source sites.
<table><tr><td>Source Site</td><td>$\left( {+, - }\right)$</td><td>Non-normalized AUCROC $\pm$ std</td><td>Reinhard normalized AUCROC $\pm$ std</td><td>Macenko normalized AUCROC $\pm$ std</td></tr><tr><td>University of Pittsburgh (BH)</td><td>(142,903)</td><td>${0.95} \pm {0.02}$</td><td>${0.93} \pm {0.02}$</td><td>${0.95} \pm {0.01}$</td></tr><tr><td>Walter Reed (A2)</td><td>(100,945)</td><td>${0.95} \pm {0.03}$</td><td>${0.88} \pm {0.04}$</td><td>${0.96} \pm {0.02}$</td></tr><tr><td>Roswell Park (E2)</td><td>(90,955)</td><td>${0.98} \pm {0.01}$</td><td>${0.98} \pm {0.02}$</td><td>${0.99} \pm {0.01}$</td></tr><tr><td>Indivumed (A8)</td><td>(74,971)</td><td>${1.0} \pm {0.00}$</td><td>${1.0} \pm {0.00}$</td><td>${1.0} \pm {0.00}$</td></tr><tr><td>Greater Poland Cancer Center (D8)</td><td>(78,967)</td><td>${0.99} \pm {0.00}$</td><td>${0.99} \pm {0.01}$</td><td>${0.99} \pm {0.00}$</td></tr><tr><td>Mayo (AR)</td><td>(69,976)</td><td>${0.99} \pm {0.00}$</td><td>${0.98} \pm {0.01}$</td><td>${0.99} \pm {0.01}$</td></tr><tr><td>Asterand (E9)</td><td>(62,983)</td><td>${0.98} \pm {0.01}$</td><td>${0.98} \pm {0.01}$</td><td>${0.98} \pm {0.02}$</td></tr><tr><td>Duke (B6)</td><td>(50,995)</td><td>${0.98} \pm {0.02}$</td><td>${0.97} \pm {0.01}$</td><td>${0.98} \pm {0.01}$</td></tr><tr><td>Average AUCROC</td><td/><td>${0.98} \pm {0.02}$</td><td>${0.96} \pm {0.04}$</td><td>${0.98} \pm {0.02}$</td></tr></table>

## 4 CONCLUSIONS AND FUTURE WORK
We conclude that tissue source sites leave identifiable markers that can be picked up by machine learning models. We show that this may be one of the reasons why many models generalize poorly when used outside a research setting, and we urge computational pathologists to keep this in mind when designing models and datasets. In the future we would like to verify our results on a larger database, explore which factors make a source site so easily distinguishable, and develop strategies to counter such confounding factors.

## ACKNOWLEDGEMENTS
Skipped due to double-blind requirements.

REFERENCES
The Cancer Genome Atlas Network. Comprehensive molecular portraits of human breast tumours. Nature, 490(7418):61-70, 2012.

Esther Abels, Liron Pantanowitz, Famke Aeffner, Mark D Zarella, Jeroen van der Laak, Marilyn M Bui, Venkata NP Vemuri, Anil V Parwani, Jeff Gibbs, Emmanuel Agosto-Arroyo, et al. Computational pathology definitions, best practices, and recommendations for regulatory guidance: a white paper from the digital pathology association. The Journal of Pathology, 249(3):286-294, 2019.

Richard J Chen, Chengkuan Chen, Yicong Li, Tiffany Y Chen, Andrew D Trister, Rahul G Krishnan, and Faisal Mahmood. Scaling vision transformers to gigapixel images via hierarchical self-supervised learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 16144-16155, 2022.

Nicolas Coudray, Paolo Santiago Ocampo, Theodore Sakellaropoulos, Navneet Narula, Matija Snuderl, David Fenyö, Andre L Moreira, Narges Razavian, and Aristotelis Tsirigos. Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning. Nature Medicine, 24(10):1559-1567, 2018.

Donglin Di, Shengrui Li, Jun Zhang, and Yue Gao. Ranking-based survival prediction on histopathological whole-slide images. In Medical Image Computing and Computer Assisted Intervention - MICCAI 2020: 23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part V 23, pp. 428-438. Springer, 2020.

C Ferrer-Costa, M Orozco, and X de La Cruz. Sequence-based prediction of pathological mutations. Proteins: Structure, Function, and Bioinformatics, 57(4):811-819, 2004.

Andrew H Fischer, Kenneth A Jacobson, Jack Rose, and Rolf Zeller. Hematoxylin and eosin staining of tissue and cell sections. Cold Spring Harbor Protocols, 2008(5):pdb-prot4986, 2008.

Alex Foote, Amina Asif, Nasir Rajpoot, and Fayyaz Minhas. REET: robustness evaluation and enhancement toolbox for computational pathology. Bioinformatics, 38(12):3312-3314, 2022.

Simon Graham, Mostafa Jahanifar, Ayesha Azam, Mohammed Nimir, Yee-Wah Tsang, Katherine Dodd, Emily Hero, Harvir Sahota, Atisha Tank, Ksenija Benes, et al. Lizard: A large-scale dataset for colonic nuclear instance segmentation and classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 684-693, 2021.

Katherine A Hoadley, Christina Yau, Toshinori Hinoue, Denise M Wolf, Alexander J Lazar, Esther Drill, Ronglai Shen, Alison M Taylor, Andrew D Cherniack, Vésteinn Thorsson, et al. Cell-of-origin patterns dominate the molecular classification of 10,000 tumors from 33 types of cancer. Cell, 173(2):291-304, 2018.

Piotr Keller, Muhammad Dawood, and Fayyaz ul Amir Afsar Minhas. Maximum mean discrepancy kernels for predictive and prognostic modeling of whole slide images. arXiv preprint arXiv:1111.6285, 2023.

Adnan Mujahid Khan, Nasir Rajpoot, Darren Treanor, and Derek Magee. A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution. IEEE Transactions on Biomedical Engineering, 61(6):1729-1738, 2014.

Ming Y Lu, Drew FK Williamson, Tiffany Y Chen, Richard J Chen, Matteo Barbieri, and Faisal Mahmood. Data-efficient and weakly supervised computational pathology on whole-slide images. Nature Biomedical Engineering, 5(6):555-570, 2021.

Marc Macenko, Marc Niethammer, James S Marron, David Borland, John T Woosley, Xiaojun Guan, Charles Schmitt, and Nancy E Thomas. A method for normalizing histology slides for quantitative analysis. In 2009 IEEE International Symposium on Biomedical Imaging: From Nano to Macro, pp. 1107-1110. IEEE, 2009.

Erik Reinhard, Michael Adhikhmin, Bruce Gooch, and Peter Shirley. Color transfer between images. IEEE Computer Graphics and Applications, 21(5):34-41, 2001.

Marina Sokolova, Nathalie Japkowicz, and Stan Szpakowicz. Beyond accuracy, f-score and roc: a family of discriminant measures for performance evaluation. In AI 2006: Advances in Artificial Intelligence: 19th Australian Joint Conference on Artificial Intelligence, Hobart, Australia, December 4-8, 2006. Proceedings 19, pp. 1015-1021. Springer, 2006.

Jiawen Yao, Xinliang Zhu, Jitendra Jonnagaddala, Nicholas Hawkins, and Junzhou Huang. Whole slide images based cancer survival prediction using attention guided deep multiple instance learning networks. Medical Image Analysis, 65:101789, 2020.
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/flfJ1OwD-FD/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,159 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
§ DO TISSUE SOURCE SITES LEAVE IDENTIFIABLE SIG- NATURES IN WHOLE SLIDE IMAGES BEYOND STAIN- ING?
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Why can deep learning predictors trained on Whole Slide Images fail to generalize? It is a common theme in Computational Pathology (CPath) to see a high performing model developed in a research setting experience a large drop in performance when it is eventually deployed to a new clinical environment. One of the major reasons for this is the batch effect that is introduced during the creation of whole slide images resulting in a domain shift. CPath pipelines try to reduce this effect via stain normalization techniques. However, in this paper, we provide empirical evidence that stain normalization methods do not result in any significant reduction of the batch effect. This is done via clustering analysis of the dataset as well as training weakly-supervised models to predict source sites. This study aims to open up avenues for further research for effective handling of batch effects for improving trustworthiness and generalization of predictive modelling in the Computational Pathology domain.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Computational Pathology (CPath) is an emerging field which aims to leverage the ever increasing amount of health data to solve complex and clinically relevant problems through the application of machine learning Abels et al. (2019). Areas of interest include, but are not limited to, predicting diagnostic abnormalities associated with cancer, gene mutations Ferrer-Costa et al. (2004); Coudray et al. (2018) or cancer grade Graham et al. (2021) as well as survival prediction of patients Chen et al. (2022). In the recent years, there has been a growing popularity for the use of deep learning methods in CPath which utilise digitised slide tissue images also known as Whole Slide Images (WSIs) Yao et al. (2020); Di et al. (2020). Such models have been very successful with many studies reporting a multitude of very high performance metrics on a wide range of datasets Sokolova et al. (2006). Nevertheless, when some of these models are applied in a clinical setting they can fail to generalize Foote et al. (2022).
|
| 14 |
+
|
| 15 |
+
In this work we argue that this is partially a consequence of reliance on stain normalization methods in the WSI pre-processing pipeline which are not able to truly remove the variability present in images sourced from different hospitals or laboratories. This variability occurs in the creation of these WSIs. As cells are transparent, it is necessary to stain tissue samples before they are digitised or observed under a microscope to effectively interpret them visually. The staining reagent and process as well as the scanner used can vary across source sites which often results in inconsistent staining characteristics across WSIs. This site-specific signature results in a batch effect which can be exploited by a deep learning model to produce inflated accuracy values but poor generalization in cases where the stain characteristics are different. Stain normalization approaches aim to remove this variability by reducing colour and intensity variations present in these images by normalizing them to a standard or base image. However, as stain normalisation usually works in a low-dimensional space, we hypothesise that it fails to remove any higher order site-specific signatures which can still lead to exploitation of the batch effect and generalization failure under domain shift. This means that stain normalised images look normalised to the human eye when in reality hidden factors such as those that result from different laboratory protocols are still present. These factors can skew the learning process acting as confounding variables which could lead to an overestimation of a model's true performance on a given task and subsequently be the cause of the poor generalisation in clinical deployment.
|
| 16 |
+
|
| 17 |
+
< g r a p h i c s >
|
| 18 |
+
|
| 19 |
+
Figure 1: Proposed workflow of the experiment. A dataset of WSIs is first pre-processed and nonoverlapping patches are obtained. Patches are stained nromalised with either Macenko or Reinahrd Stain Normalisation. Both sets of patches are the fed into a ShuffleNet pretrained on ImageNet to extract 1024-dimensional features. The features are used to train a CLAM CNN as well as a kernalized SVM with a precomputed MMD kernel in order to predict source site information.
|
| 20 |
+
|
| 21 |
+
The technical contribution of this paper is the demonstration of empirical evidence of the presence of these hidden factors in a dataset regardless of what stain normalization technique is applied to it. This is achieved through a carefully designed experiment using different stain normalization schemes as well as two fundamentally different types of predictors. This aspect of the design of computational pathology pipelines is often ignored and, to the best of our knowledge, has not been previously explored with a carefully controlled experiment. The findings in this paper have significant bearing on improving trustworthiness and generalization of machine learning applications in the rapidly emerging area of computational pathology.
|
| 22 |
+
|
| 23 |
+
§ 2 MATERIALS AND METHODS
|
| 24 |
+
|
| 25 |
+
§ 2.1 EXPERIMENT DESIGN STRATEGY
|
| 26 |
+
|
| 27 |
+
To illustrate that stain normalization methods are unable to effectively remove center specific batch effects, we designed a simple experiment in which we predict the laboratory of origin (centre) of a whole slide image both before and after stain normalization. We first predict the centre of origin of a whole slide image by modelling this task as a weakly-supervised binary classification problem with the target label being the centre of origin. We then develop a separate weakly-supervised predictor to predict the centre of origin with stain normalized whole slide images as input. The fundamental principle behind this experiment is that if stain normalization is an effective strategy to remove any identifiable signatures of the centre of origin or the underlying batch effect, we should get substantial decrease in accuracy of predicting the center after stain normalization. For this purpose, we use multi-centric breast cancer whole slide images (WSIs) from the Cancer Genome Atlas (TCGA) 13 et al. (2012). To show that the results of our analysis are not specific to a certain type of stain normalization, we utilize with two different commonly used stain normalization schemes (Reinhard Reinhard et al. (2001) and Macenko Macenko et al. (2009)). In order to marginalize the effect of the choice of the weakly supervised method being used for the prediction of the centre of origin, we use two fundamentally different types of predictors (CLAM Lu et al. (2021) and MMD-Kernels Keller et al. (2023)). Below each component of the experiment design is explained in further detail.
|
| 28 |
+
|
| 29 |
+
§ 2.2 DATASET
|
| 30 |
+
|
| 31 |
+
1,113 publicly available WSIs of Formalin-Fixed paraffin-Embedded (FFPE) Hematoxylin and Eosin (H&E) stained tissue sections of 1084 breast carcinoma patients were collected from The Cancer Genome Atlas (TCGA-BRCA) Hoadley et al. (2018); 13 et al. (2012). For some patients multiple WSIs were available and thus only the ones with best visual quality were used. Additionally, WSIs with missing baseline resolution information were ignored. After filtering 1,051 WSIs remain which are used for analysis. These WSIs were belonging to 49 sources sites.
|
| 32 |
+
|
| 33 |
+
§ 2.3 PRE-PROCESSING OF WSIS
|
| 34 |
+
|
| 35 |
+
Quality of WSIs can be negatively affected by artefacts (tissue folds, pen-marking, etc) initiating from histology laboratories. To ensure that any models do not exploit these tissue artefacts the tissue regions of WSIs are segmented using a tissue segmentation method. The tissue segmentation means that only information tissue regions remain and artefacts are removed. Since, an entire WSI at full resolution can be very large $\left( {{100},{000} \times {100},{000}\text{ pixels }}\right)$ and cannot be fitted into a GPU memory each WSI is tiled into patches of size ${512} \times {512}$ at a spatial resolution of 0.50 microns-per-pixel (MPP). Tiles that capture less than ${40}\%$ of informative tissue area (mean pixel intensity greater than 200) are filtered out.
|
| 36 |
+
|
| 37 |
+
§ 2.4 TISSUE STAINING AND STAIN NORMALIZATION METHODS
|
| 38 |
+
|
| 39 |
+
Histology images are acquired by staining a tissue specimen with a dye that shows variable affinity for different tissue components. In routine Hematoxylin and Eosin (H&E) staining, nuclei are stained with Hematoxylin and highlighted in a bluish color, while cytoplasm and extracellular matrix are stained with Eosin and appear pinkish Fischer et al. (2008). However, variations in staining protocols, characteristics of the dye, the duration for which the dyes are applied, tissue type and thickness, scanner characteristics, and a number of other factors can impact the stain characteristics of the tissue, resulting in center-specific confounding factors that are not at all related to any underlying pathology. These constitute a batch effect that can leave a centre-specific signature in the tissue image and affect the generalization performance of any machine learning method.
|
| 40 |
+
|
| 41 |
+
One way of addressing such variations is stain normalization using methods such as the ones proposed by Reinhard Reinhard et al. (2001) and Macenko Macenko et al. (2009). These stain normalization methods map the color style of a source image to that of a target image Khan et al. (2014) while preserving the cellular and morphometric information contained in the images.
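For intuition, here is a minimal sketch of the Reinhard-style approach: per-channel mean and standard deviation matching in LAB color space. The function name and library choices (NumPy, scikit-image) are our assumptions; production implementations handle gamut clipping and stain separation more carefully.

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def reinhard_normalize(source_rgb, target_rgb):
    """Map the LAB color statistics of `source_rgb` onto `target_rgb`."""
    src, tgt = rgb2lab(source_rgb), rgb2lab(target_rgb)
    src_mu, src_sd = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mu, tgt_sd = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    # Standardize each source channel, then rescale to the target statistics.
    matched = (src - src_mu) / (src_sd + 1e-8) * tgt_sd + tgt_mu
    return lab2rgb(matched)  # out-of-gamut values are clipped
```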
|
| 42 |
+
|
| 43 |
+
§ 2.5 SOURCE SITE PREDICTION
|
| 44 |
+
|
| 45 |
+
We hypothesise that while most stain normalization methods make the images look similar, the histology laboratory from which a tissue specimen originates can still be predicted even after stain normalization. More specifically, we argue that the use of stain normalization methods is unlikely to make computational pathology algorithms generalize under domain shift, as these methods cannot completely eliminate stain-specific information of the source site. To illustrate this, we used stain-normalized and non-stain-normalized images and tried to predict the tissue source site as the target variable. The hypothesis is that, if stain normalization removes the site-specific information, then the tissue source site should be predicted with markedly lower accuracy from the stain-normalized images compared to the non-stain-normalized images.
|
| 46 |
+
|
| 47 |
+
We demonstrated the predictability of the tissue source site from stain-normalized and non-stain-normalized images using a multiple instance learning method and also a kernel-based method. As a multiple instance learning method, we used Clustering-constrained Attention Multiple Instance Learning (CLAM), a weakly-supervised method that has shown promising performance in several computational pathology tasks Lu et al. (2021). CLAM considers each WSI as a bag of patches and then uses an attention-based pooling function to obtain a slide-level representation from patch-level representations. As a second predictive model, we used a recently published support vector machine (SVM) based classification method that constructs a whole slide image level kernel matrix using Maximum Mean Discrepancy (MMD) over ShuffleNet-derived feature representations of patches in whole slide images Keller et al. (2023). In recent work, this method has been shown to have strong predictive power for TP53 mutation prediction and survival analysis from whole slide images. Note that the two methods have fundamentally different principles of operation, so that any subsequent findings can be understood in a broad context independent of the specific nature of the predictive model being used. As it is not the goal of this work to present these specific predictors, the interested reader is referred to their original publications for further details.
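The following sketch shows one plausible construction of such an MMD-based slide-level kernel over bags of patch features; the RBF kernel choice, the `gamma`/`scale` parameters, and all names are illustrative assumptions rather than the exact formulation of Keller et al. (2023).

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """RBF kernel between two sets of patch feature vectors."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between two bags of patches."""
    return rbf(X, X, gamma).mean() + rbf(Y, Y, gamma).mean() - 2.0 * rbf(X, Y, gamma).mean()

def wsi_kernel(bags, gamma=1.0, scale=1.0):
    """Slide-level similarity: K[i, j] = exp(-scale * MMD^2(bag_i, bag_j))."""
    n = len(bags)
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            K[i, j] = K[j, i] = np.exp(-scale * mmd2(bags[i], bags[j], gamma))
    return K
```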
|
| 48 |
+
|
| 49 |
+
We evaluated the performance of both these methods in predicting the tissue source site using both stain-normalized and non-stain-normalized data. The experiments were performed using stratified five-fold cross validation. For each source site we train a separate model using a one-vs-rest approach, in which all tissue images of patients originating from a given source site $L$ are labelled as 1, while the rest are labelled as 0. We then train the predictive model for predicting the source site of each WSI. In order to make meaningful comparisons, we restricted our analysis to the prediction of the 8 source sites that have 50 or more images each.
|
| 50 |
+
|
| 51 |
+
Performance evaluation was done via 5-fold cross validation stratified with respect to $L$. Hyper-parameters were selected using a validation set (${30}\%$ of each training fold). The average Area Under the Receiver Operating Characteristic curve (AUC-ROC) across the 5 folds, along with its standard deviation, was used as the performance metric.
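A minimal sketch of this evaluation protocol is shown below, assuming precomputed slide-level `features` and a `sites` label array (both placeholders); the scikit-learn classifier here stands in for either predictive pipeline.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def site_auc(features, sites, target_site, n_splits=5):
    """Mean and std of one-vs-rest AUC-ROC for a single source site."""
    y = (sites == target_site).astype(int)  # one-vs-rest labels
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    aucs = []
    for train_idx, test_idx in cv.split(features, y):
        clf = SVC(probability=True).fit(features[train_idx], y[train_idx])
        scores = clf.predict_proba(features[test_idx])[:, 1]
        aucs.append(roc_auc_score(y[test_idx], scores))
    return np.mean(aucs), np.std(aucs)
```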
|
| 52 |
+
|
| 53 |
+
§ 2.6 SIMILARITY KERNEL AND CLUSTERING ANALYSIS
|
| 54 |
+
|
| 55 |
+
In order to further understand the implications of stain normalization at the dataset level, we performed hierarchical clustering over the whole slide image maximum mean discrepancy kernel matrix for the whole dataset. The matrix shows the degree of pairwise similarity between whole slide images. We show the kernel matrices both before and after stain normalization together with the clustering. If stain normalization were effective at removing any information about the center, we would expect clustering performed after stain normalization to no longer group WSIs from the same center into the same cluster. This serves as an additional unsupervised analysis of whether stain normalization is able to remove center-specific information or not.
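A short sketch of this analysis, assuming a precomputed similarity kernel matrix `K` as constructed above: the kernel is converted into a distance matrix and fed to standard hierarchical clustering (the average linkage and the four-cluster cut are our assumptions).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_kernel(K, n_clusters=4):
    """Hierarchically cluster slides from a similarity kernel matrix."""
    # Kernel-induced distance: d(i, j)^2 = K(i, i) + K(j, j) - 2 K(i, j).
    D2 = np.diag(K)[:, None] + np.diag(K)[None, :] - 2.0 * K
    D = np.sqrt(np.maximum(D2, 0.0))
    np.fill_diagonal(D, 0.0)
    Z = linkage(squareform(D, checks=False), method="average")
    return fcluster(Z, t=n_clusters, criterion="maxclust")
```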
|
| 56 |
+
|
| 57 |
+
§ 3 RESULTS AND DISCUSSION
|
| 58 |
+
|
| 59 |
+
§ 3.1 EFFECT OF STAIN NORMALIZATION
|
| 60 |
+
|
| 61 |
+
Figure 1 shows visual results of stain normalization over patches from a few example whole slide images. From the figure it can be clearly seen that patches belonging to different centers look the same after stain normalization. However, from the ROC curves it can be seen that the tissue source site of almost every center can still be predicted with high AUC-ROC, which supports our hypothesis that stain normalization methods do not remove source site information: the images might look the same, but the underlying footprint is still there. If stain normalization methods had removed the source site information, we would see an AUC-ROC of 0.5 (random), but this is not the case. From this analysis we conclude that the analyzed stain normalization methods are unlikely to make models robust against domain shift.
|
| 62 |
+
|
| 63 |
+
§ 3.2 PREDICTIVE POWER OVER ORIGINAL DATA
|
| 64 |
+
|
| 65 |
+
Figure 1 and Tables 1-2 show the results of predicting the source center from original whole slide images, i.e., without any stain normalization, using two different predictive pipelines (CLAM in Table 1 and MMD Kernel in Table 2). These results show that it is possible to predict the source of a given whole slide image with very high predictive power, as measured using the area under the receiver operating characteristic curve (AUC-ROC), using both methods. This shows that, as expected, there is a significant signature in a whole slide image specific to the laboratory of origin.
|
| 66 |
+
|
| 67 |
+
|
| 68 |
+
|
| 69 |
+
Figure 2: The visualisation of the kernels for non-stained and stain normalized WSIs along with their respective dendrogram. Two WSIs from each cluster are also displayed.
|
| 70 |
+
|
| 71 |
+
§ 3.3 PREDICTIVE POWER OVER STAIN NORMALIZED DATA
|
| 72 |
+
|
| 73 |
+
Figure 1 and Tables 1-2 show the results of predicting the source center from stain-normalized whole slide images using two different predictive pipelines (CLAM in Table 1 and MMD Kernel in Table 2) and two different stain normalization methods (Reinhard and Macenko stain normalization). These results show that it is possible to predict the source of a given whole slide image with very high predictive power, as measured using the area under the receiver operating characteristic curve (AUC-ROC), using both predictive pipelines even after stain normalization. There is effectively very little change in predictive power as a consequence of stain normalization. This shows that stain normalization alone is not able to remove the site-specific information contained in a whole slide image, and the batch effect still exists even after stain adjustment.
|
| 74 |
+
|
| 75 |
+
§ 3.4 CLUSTERING ANALYSIS
|
| 76 |
+
|
| 77 |
+
The hierarchically-clustered heatmaps, along with their respective dendrograms for the kernels, are shown in Figure 2. From this figure one can see that both non-normalized and stain-normalized WSIs have a large proportion of brightly coloured regions in their heatmaps, indicating that many slides share similar characteristics. The dataset has been split into 4 main clusters, as can be seen in the dendrograms, where slides within the same cluster frequently originate from the same laboratory; for example, the orange cluster contains many slides from laboratory E2 (Roswell Park). This indicates that some hidden site-identification markers are likely still present even after normalisation.
|
| 78 |
+
|
| 79 |
+
§ 3.5 CODE AND DATA AVAILABILITY
|
| 80 |
+
|
| 81 |
+
The code and data used in this paper will be made available on the institutional GitHub upon acceptance of the paper; they are omitted at this time in line with double-blind review requirements.
|
| 82 |
+
|
| 83 |
+
Table 1: Comparison of performance of CLAM trained for source site prediction for various stain normalization protocols. Here + indicates WSIs that originated from the chosen site and - indicates WSIs from one of the remaining source sites.
|
| 84 |
+
|
| 85 |
+
| Source Site (+, −) | Unstained AUC-ROC ± std | Reinhard Stained AUC-ROC ± std | Macenko Stained AUC-ROC ± std |
|---|---|---|---|
| University of Pittsburgh (BH) (142, 903) | 0.84 ± 0.04 | 0.82 ± 0.06 | 0.86 ± 0.03 |
| Walter Reed (A2) (100, 945) | 0.82 ± 0.10 | 0.73 ± 0.07 | 0.87 ± 0.07 |
| Roswell Park (E2) (90, 955) | 0.96 ± 0.01 | 0.92 ± 0.02 | 0.96 ± 0.01 |
| Indivumed (A8) (74, 971) | 1.00 ± 0.00 | 0.99 ± 0.00 | 1.00 ± 0.00 |
| Greater Poland Cancer Center (D8) (78, 967) | 0.97 ± 0.03 | 0.94 ± 0.04 | 0.97 ± 0.02 |
| Mayo (AR) (69, 976) | 0.98 ± 0.02 | 0.95 ± 0.04 | 0.97 ± 0.03 |
| Asterand (E9) (62, 983) | 0.98 ± 0.02 | 0.98 ± 0.01 | 0.96 ± 0.04 |
| Duke (B6) (50, 995) | 0.97 ± 0.03 | 0.92 ± 0.04 | 0.94 ± 0.06 |
| Average AUC-ROC | 0.94 ± 0.08 | 0.94 ± 0.06 | 0.91 ± 0.09 |
|
| 117 |
+
|
| 118 |
+
Table 2: Comparison of performance of an SVM trained for source site prediction for various stain normalization protocols. Here + indicates WSIs that originated from the chosen site and - indicates WSIs from one of the remaining source sites.
|
| 119 |
+
|
| 120 |
+
| Source Site (+, −) | Unstained AUC-ROC ± std | Reinhard Stained AUC-ROC ± std | Macenko Stained AUC-ROC ± std |
|---|---|---|---|
| University of Pittsburgh (BH) (142, 903) | 0.95 ± 0.02 | 0.93 ± 0.02 | 0.95 ± 0.01 |
| Walter Reed (A2) (100, 945) | 0.95 ± 0.03 | 0.88 ± 0.04 | 0.96 ± 0.02 |
| Roswell Park (E2) (90, 955) | 0.98 ± 0.01 | 0.98 ± 0.02 | 0.99 ± 0.01 |
| Indivumed (A8) (74, 971) | 1.0 ± 0.00 | 1.0 ± 0.00 | 1.0 ± 0.00 |
| Greater Poland Cancer Center (D8) (78, 967) | 0.99 ± 0.00 | 0.99 ± 0.01 | 0.99 ± 0.00 |
| Mayo (AR) (69, 976) | 0.99 ± 0.00 | 0.98 ± 0.01 | 0.99 ± 0.01 |
| Asterand (E9) (62, 983) | 0.98 ± 0.01 | 0.98 ± 0.01 | 0.98 ± 0.02 |
| Duke (B6) (50, 995) | 0.98 ± 0.02 | 0.97 ± 0.01 | 0.98 ± 0.01 |
| Average AUC-ROC | 0.98 ± 0.02 | 0.96 ± 0.04 | 0.98 ± 0.02 |
|
| 152 |
+
|
| 153 |
+
§ 4 CONCLUSIONS AND FUTURE WORK
|
| 154 |
+
|
| 155 |
+
We conclude that tissue source sites leave identifiable markers that can be picked up by machine learning models. We argue that this may be one of the reasons why many models generalize poorly when used outside a research setting, and we urge computational pathologists to keep this in mind when designing models and datasets. In the future we would like to verify our results on a larger database, as well as explore exactly which factors make a source site so easily distinguishable and how strategies can be developed to counter such confounding factors.
|
| 156 |
+
|
| 157 |
+
§ ACKNOWLEDGEMENTS
|
| 158 |
+
|
| 159 |
+
Omitted due to double-blind review requirements.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/rmIJnScwO6/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,284 @@
|
| 1 |
+
# A KERNEL DENSITY ESTIMATION BASED QUALITY METRIC FOR QUALITY ASSESSMENT OF OBSTETRIC ULTRASOUND VIDEO
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
Simplified ultrasound scanning protocols (sweeps) have been developed to reduce the high skill required to perform a regular obstetric ultrasound examination. However, without automated quality assessment of the video, the utility of such protocols in clinical practice is limited. An automated quality assessment algorithm is proposed that applies an object detector to detect fetal anatomies within ultrasound videos. Kernel density estimation is applied to the bounding box annotations to estimate a probability density function of certain bounding box properties, such as the spatial and temporal position during the sweeps. This allows us to quantify how well the spatio-temporal positions of anatomies in a sweep agree with previously seen data, and to use this as a quality metric. The new quality metric is compared to other metrics of quality such as the confidence of the object detector model.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Obstetric ultrasound scanning is a vital part of antenatal care, as it allows us to accurately date a pregnancy, identify pregnancy risk factors, check the health of the fetus and much more Crino et al. (2013); Salomon et al. (2011; 2019). Although ultrasound is considered a cheap and portable imaging modality, there is still a shortage of obstetric ultrasound scans in limited-resource settings Ngoya et al. (2016); Mollura & Lungren (2014); Mollura et al. (2013). However, these are the regions where they are most needed, with ${94}\%$ of pregnancy and childbirth related deaths occurring in developing nations and ${80}\%$ of these occurring in areas of high birthrate and limited access to healthcare Organization et al. (2019). A major limitation on the availability of ultrasound is the lack of trained sonographers and healthcare workers Maru et al. (2010); Darmstadt et al. (2009) [World Health Organization and United Nations Children's Fund. WHO/UNICEF joint database on SDG 3.1.2 Skilled Attendance at Birth.]
|
| 14 |
+
|
| 15 |
+
The acquisition of obstetric ultrasound requires a high level of training and expertise, with the sonographer iteratively repositioning the probe whilst viewing the surrounding anatomy in real time to capture the most informative imaging plane. Additional difficulties include the movement of the fetal head and the variable fetal position in the uterus.
|
| 16 |
+
|
| 17 |
+
Thus, simplified scanning protocols have been published that don't rely on the intuition and experience of the sonographer, but where the probe scans along predefined linear sweeps across the body, outlined solely by external body landmarks of the pregnant woman Abu-Rustum & Ziade (2017); Abuhamad et al. (2015); DeStigter et al. (2011). These scanning protocols (sweeps) allow for acquisition without highly trained sonographers; the acquired videos can then be analysed by clinicians remotely with the advent of tele-medicine Toscano et al. (2021); Marini et al. (2021), or by medical image analysis algorithms.
|
| 18 |
+
|
| 19 |
+
In practice, with these new acquisition protocols we cannot assume the user will be able to differentiate between a usable scan and a low-quality, non-informative one. Ultrasound scans are cheaply retaken during the same appointment; however, retaking a scan after the patient has left is a large burden on the patient. Whilst trained clinicians can tell when to retake a video if it's sub-optimal or not "fit-for-purpose", computers are yet to mimic this human capability. Thus, developing automated algorithms for ultrasound video quality assessment is an important research challenge to aid the adoption of ultrasound where trained sonographers are not always available.
|
| 20 |
+
|
| 21 |
+
This paper seeks to provide a quality assessment method for sweep data. The method utilises bounding box annotations of ultrasound sweep data to train an anatomy detector model and, via kernel density estimation, quantifies how well the spatial and temporal positions of bounding boxes fit in with the expected "typical" data.
|
| 22 |
+
|
| 23 |
+
## 2 RELATED WORK
|
| 24 |
+
|
| 25 |
+
Image quality assessment in signal processing is a well-researched topic with many different metrics proposed, like PSNR and SSIM. These, however, are image-based (not video), fully-referenced methods (they require an undistorted reference image) and mostly focus on compression losses Thung & Raveendran (2009); Hore & Ziou (2010).
|
| 26 |
+
|
| 27 |
+
The no-reference video quality assessment (NRVQA) literature presents models designed to quantify a specific type of image distortion such as blur Marziliano et al. (2002), ringing Feng & Allebach (2006), blockiness Wang et al. (2000), banding Wang et al. (2016); Tu et al. (2020) or noise Amer & Dubois (2005). Current NRVQA metrics such as Li et al. (2019); Mittal et al. (2012) rely on natural scene statistics and models of the human visual system. They are thus specific to natural images and are unlikely to generalise well to ultrasound videos, as these contain many types of distortions that are beyond the normal range for natural images.
|
| 28 |
+
|
| 29 |
+
Quality assessment of ultrasound video must therefore be defined differently; it should be based on clinical usefulness. This field is less researched. Wu et al. (2017) and Lin et al. (2019) propose quality assessment based on anatomy-specific criteria that can be separately assessed by individual networks; Wu et al. (2017) develops deep learning models to check things like whether the fetal stomach bubble appears full and salient with a clear boundary, and Lin et al. (2019) checks that "the lateral sulcus must be clearly visible", that "the skull is in the middle of the ultrasound plane and larger than $2/3$ of overall fan shape area", and other criteria.
|
| 30 |
+
|
| 31 |
+
The most related work is by Komatsu et al. (2021), who developed an object detector network to detect cardiac structures in fetal ultrasound scans and used the detection results to generate an "abnormality score" for the heart. The work uses specific imaging planes of the heart, the three-vessel trachea view and the four-chamber view, where a full set of specific anatomical substructures is expected in every frame of these clips. Thus they simply generated an abnormality score between 0 and 1 that depends linearly on the number of anatomies and number of frames detected.
|
| 32 |
+
|
| 33 |
+
## 3 METHOD
|
| 34 |
+
|
| 35 |
+
This work uses bounding box annotations provided by an expert sonographer for a simplified ultrasound scan. The method first finds the distribution of the spatial and temporal positions of the anatomies for this type of scan from many videos. For a new video, it then evaluates how well the spatio-temporal positions of the anatomies fit this distribution.
|
| 36 |
+
|
| 37 |
+
### 3.1 DATA
|
| 38 |
+
|
| 39 |
+
The data was gathered as part of the Computer Assisted Low-cost Point of Care Ultrasound (CALOPUS) Project (UK Research Ethics Committee 18/WS/0051). A simplified ultrasound scanning protocol proposed by Abuhamad et al. (2015) was refined by Self et al. (2022) to contain 5 steps, which are shown in Fig. 1.
|
| 40 |
+
|
| 41 |
+
The scans were taken by an experienced sonographer, who also annotated the scans frame by frame with a bounding box around each of 11 possible anatomical structures (listed in Fig. 2). Of the five sweeps outlined in Fig. 1, this work only uses the T-shaped sweep (step 1). For this work, only scans with a gestational age between 18-23 weeks and cephalic presentation are used.
|
| 42 |
+
|
| 43 |
+
This was to ensure the scans looked as homogeneous as possible. With different fetal presentations, we expect different locations of anatomies, and anatomies to show up at different times along the scan. The anatomies will also be imaged along different views, which affects the way the anatomy appears, and thus the size and shape of the bounding boxes too. The gestational age cutoffs were chosen for similar reasons, but also because the majority of scans lay within these bounds. This left us with 45 ultrasound videos.
|
| 44 |
+
|
| 45 |
+
[Figure 1 graphic: probe orientations for sweep steps 1, 2.1, 2.2, 3.1 and 3.2, annotated with the superior/inferior and left/right body landmarks.]
|
| 46 |
+
|
| 47 |
+
Figure 1: Left: all the different CALOPUS sweeps in Self et al. (2022) outlined in Section 3.1; right: the T-shaped sweep (step 1 on left) that we used in this work outlined on a participant.
|
| 48 |
+
|
| 49 |
+
Whilst we use only sweep 1 (see Fig. 1), this work could easily extend to any other sweep. We chose the T-shaped sweep because it often showed the most anatomies, and thus bounding boxes. The bounding box annotations covered 11 anatomies, which were distributed as shown in Fig. 2.
|
| 50 |
+
|
| 51 |
+
The CALOPUS project is a collaboration with the Translational Health Science and Technology Institute (THSTI) based in Delhi, India Self et al. (2022). THSTI have also scanned 72 patients using the same protocol, with cephalic presentation and a gestational age of 18-23 weeks; these videos cannot be viewed/accessed. However, the bounding box annotations of these scans, which include the bounding box co-ordinates, frame number and anatomy, can be utilised. This gives an extra 72 videos' worth of bounding box information.
|
| 52 |
+
|
| 53 |
+
|
| 54 |
+
|
| 55 |
+
Figure 2: Number of frames per anatomy in the UK dataset after applying exclusion criteria (see Section 3.1).
|
| 56 |
+
|
| 57 |
+
### 3.2 OVERVIEW
|
| 58 |
+
|
| 59 |
+
A brief overview of our pipeline is: (1) using the bounding box annotated videos, we train an object detector that can produce bounding boxes around fetal anatomies. (2) We estimate the probability distribution of the spatial position and timing of the bounding boxes for a cephalic T-shaped sweep using kernel density estimation. (3) We perform inference on new data with our trained anatomy detector model to get bounding boxes for each anatomy in that video. (4) We compare the spatial and temporal positions of the bounding boxes of the new video against our distribution from the annotated videos. If the new bounding box properties fit in well with our estimated distribution, we propose the video is of high quality, whilst if they don't, we propose it is abnormal or of low quality. Numerical values for 'how well it fits the distribution' can be produced by calculating the probability of the bounding box having a property of a certain value or a less likely value, via integration of the probability density function (PDF). This yields one probability score per bounding box (so frame-level); averaging the probabilities over the entire video clip gives our quality metric.
|
| 60 |
+
|
| 61 |
+
### 3.3 ANATOMY DETECTOR MODEL
|
| 62 |
+
|
| 63 |
+
Our anatomy detector model was based on the RetinaNet architecture with focal loss Lin et al. (2017). RetinaNet is a one-stage detector that uses feature pyramid networks to propagate multi-scale features through the network, with subnets for object classification and bounding box location regression. The backbone used was a ResNet-50 architecture He et al. (2016) pretrained on the ImageNet-1k image classification dataset. The overall network was itself pretrained on the COCO dataset, achieving a bounding-box mean average precision of 36.9 on COCO. As a one-stage detector, it can run inference in real time (30 frames per second).
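As a rough sketch of this setup (our illustrative assumption, not the authors' exact code), a COCO-pretrained RetinaNet from torchvision can be adapted to the anatomy classes by swapping its classification head; the class count is an assumption, while the learning rate and momentum below match the values reported in Section 4.1.

```python
import torch
from torchvision.models.detection import retinanet_resnet50_fpn  # torchvision >= 0.13
from torchvision.models.detection.retinanet import RetinaNetClassificationHead

num_classes = 7  # 6 anatomy classes + background (an assumption)
model = retinanet_resnet50_fpn(weights="DEFAULT")  # COCO-pretrained
# Replace the classification subnet so it predicts our class count.
in_channels = model.backbone.out_channels
num_anchors = model.head.classification_head.num_anchors
model.head.classification_head = RetinaNetClassificationHead(
    in_channels, num_anchors, num_classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.8)
```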
|
| 64 |
+
|
| 65 |
+
The data imbalance seen in Fig. 2 caused large drops in performance: anatomies such as the cerebellum and stomach bubble were never detected, whilst the placenta and amniotic fluid (AF) were over-predicted.
|
| 66 |
+
|
| 67 |
+
Thus we used a subset of only 6 anatomies: the spine, head, femur, pelvis, abdomen and bladder. There is still a considerable data imbalance, but it is not as problematic as with all 11. These anatomies were chosen because they consistently appeared in the majority of the scans, and were not nested within each other - e.g. the stomach bubble lies within the abdomen, and the cerebellum within the head.
|
| 68 |
+
|
| 69 |
+
### 3.4 Defining Bounding Box Properties
|
| 70 |
+
|
| 71 |
+
We chose to use the timing and spatial position of the bounding boxes as discriminators to determine whether a scan was 'typical' or not. Other properties could have been used: the size, the aspect ratio, and properties of the image content inside the bounding box like texture, gradients and more. Size and aspect ratio were found to be uninformative, because the bounding box annotations were not tightly aligned to the exact edge of the anatomical structure, but often loosely drawn to the approximate size and location of the anatomy - thus they seemed uninformative of the actual shape and size of the anatomy visible.
|
| 72 |
+
|
| 73 |
+
### 3.5 Probability Density Function of Bounding Box Properties
|
| 74 |
+
|
| 75 |
+
We use kernel density estimation (KDE) with a Gaussian kernel to estimate the probability density function (PDF) of the timing and position of the bounding boxes. A plot of the PDF and histogram for the timing of the femur is shown in Fig. 3.
|
| 76 |
+
|
| 77 |
+
However, we cannot assume these bounding box properties are independent; rather, it is likely they are highly dependent. For example, due to the shape of the sweep, the timing and the position of the bounding boxes interact: as we pan side to side (the top of the "T" of the sweep), the anatomies first go right, then left, so the $\mathrm{x}$ co-ordinate and the time of the bounding boxes are highly correlated. We account for these dependencies by, instead of having a probability density function for each property, having a joint multi-dimensional probability distribution over all properties. A two-dimensional probability distribution of the $\mathrm{x}$ and $\mathrm{y}$ co-ordinates of the head bounding boxes is shown in Fig. 3.
|
| 78 |
+
|
| 79 |
+
We extend this to a three-dimensional joint probability distribution, with the axes being the x and y co-ordinates and time. As a three-dimensional function it cannot be visualised easily, but plotting it on top of a frame at different time points gives Fig. 4.
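A minimal sketch of fitting this joint density with SciPy is given below; `boxes` is a placeholder array with one row per annotated bounding box (centroid x, centroid y, normalised time t), not the authors' actual variable.

```python
import numpy as np
from scipy.stats import gaussian_kde

boxes = np.random.rand(500, 3)        # stand-in for real (x, y, t) annotations
pdf = gaussian_kde(boxes.T)           # gaussian_kde expects shape (n_dims, n_samples)
density = pdf([[0.4], [0.6], [0.5]])  # joint density at a single (x, y, t) point
```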
|
| 80 |
+
|
| 81 |
+
The bandwidth of the kernel used in the kernel density estimation strongly influences the overall PDF - much more so than the shape of the kernel (Turlach et al., 1993). In this work we use a Gaussian kernel and fine-tuned the bandwidth using contextual knowledge. Both Silverman's rule
|
| 82 |
+
|
| 83 |
+
|
| 84 |
+
|
| 85 |
+
Figure 3: Left: histogram and PDF of the timing for the femur; right: the joint PDF for x, y coordinates of the head bounding boxes.
|
| 86 |
+
|
| 87 |
+

|
| 88 |
+
|
| 89 |
+
Figure 4: The change over time of the contour of the PDF for the spatial position of the head, overlaid on the actual ultrasound frame. From left to right, the time point in the scan increases. More intense green reflects higher PDF values.
|
| 90 |
+
|
| 91 |
+
and Scott's rule gave very similar PDFs; however, the resulting PDF was very peaky and multi-modal, as can be seen in Fig. 5. The rightmost contour plot shows steep decreases in the PDF value within a 20-pixel radius of the mode, and many different peaks. It would be wrong to assume a spine that appears 20 pixels away from this peak indicates a much lower quality scan. We believe there are two general regions within the fan-shaped area of the ultrasound where we expect to find a spine during the scan, so we adjust the bandwidth to reflect this. With too large a bandwidth, we lose spatial resolution. Similar intuition was used for the temporal domain too.
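In SciPy terms, this amounts to overriding the automatic bandwidth rules with a hand-chosen factor; `spine_xy` below is a placeholder for the spine centroid data (shape `(2, n)`), and the factor 0.5 is purely illustrative.

```python
from scipy.stats import gaussian_kde

kde_scott = gaussian_kde(spine_xy, bw_method="scott")         # automatic rule
kde_silverman = gaussian_kde(spine_xy, bw_method="silverman") # automatic rule
kde_manual = gaussian_kde(spine_xy, bw_method=0.5)            # wider, smoother PDF
```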
|
| 92 |
+
|
| 93 |
+
|
| 94 |
+
|
| 95 |
+
Figure 5: Contour plots of the PDF for the spatial position of the spine bounding boxes. The PDF becomes more peaky as the kernel bandwidth decreases from left to right. The rightmost PDF was produced using Scott's Rule for bandwidth selection.
|
| 96 |
+
|
| 97 |
+
### 3.6 INTEGRATION OF THE PROBABILITY DENSITY FUNCTION
|
| 98 |
+
|
| 99 |
+
To get a numerical value of how well the position and timing of a bounding box fit the estimated distribution, we can find the probability of the bounding box having this position and timing or a less likely timing/position (equivalent to a p-value). Thus we integrate the PDF over all regions where the PDF evaluates to a lower probability density than that of the given position and timing. That is,
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
p = \iiint_{V} f(x, y, t) \, dx \, dy \, dt
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
where $f$ is the PDF, $x, y, t$ are the x and y co-ordinates and time of the bounding box respectively, and the region of integration $V$ is the volume in the $(x, y, t)$ domain given by:
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
V = \left\{ (x, y, t) : f(x, y, t) < f(x_{\mathrm{bbox}}, y_{\mathrm{bbox}}, t_{\mathrm{bbox}}) \right\}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
Rather than directly integrating the probability density function to find this probability, we used Monte Carlo sampling to estimate the value of the integral. The exact algorithm is written in pseudo-code in the Appendix.
|
| 112 |
+
|
| 113 |
+
This method relies on many random samples from the PDF to estimate the integral, and thus becomes increasingly accurate and precise with more samples. Although sampling many times takes time, it takes much less time than using the built-in integration routines available in SciPy, as we don't require such exact solutions. With this method, we can control the precision through the number of samples we use. As we require one integration per bounding box, and there is often more than one bounding box per frame of a thirty-second video, we compromise between run time and precision. A plot of precision against number of samples for the same integral is shown in Fig. 6.
|
| 114 |
+
|
| 115 |
+
|
| 116 |
+
|
| 117 |
+
Figure 6: Box plot of a random integral (an arbitrary femur position and time) performed 30 times, but with increasing sample number for the Monte-Carlo method.
|
| 118 |
+
|
| 119 |
+
Whilst the time taken for each integration is directly proportional to the sample size, the error is inversely proportional to the square root of the sample size; we therefore compromised at 1,000 samples per integral.
|
| 120 |
+
|
| 121 |
+
To get from probabilities to a quality score, we took the mean probability over the video. Our quality score is therefore a measure of how similar the bounding boxes of the new scan are to all the annotated scans used to generate the PDF. We thus assume that a "high quality scan" is one which is similar to all the scans we have annotated. Almost all our current annotated scans were seen as highly typical and easily annotated, so this assumption holds.
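In code, the video-level score is then simply the mean of the per-bounding-box probabilities; `box_probabilities` is a placeholder for the Monte Carlo probabilities computed as in the Appendix.

```python
import numpy as np

# One Monte Carlo probability per detected bounding box in the video.
quality_score = float(np.mean(box_probabilities))
```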
|
| 122 |
+
|
| 123 |
+
Our quality metric uses only bounding box properties, without explicitly requiring manual annotation of a quality score by an expert. A large motivation for this method of quality assessment was to leverage our already-annotated dataset rather than requiring new annotations, something we imagine other groups applying computer vision to medical imaging can also make use of. Additionally, basing a quality score on the bounding boxes makes sense, as the bounding boxes contain the clinically important features in the video, and the regions outside them are not the sonographer's focus.
|
| 124 |
+
|
| 125 |
+
## 4 EXPERIMENTS AND RESULTS
|
| 126 |
+
|
| 127 |
+
### 4.1 ANATOMY DETECTOR MODEL
|
| 128 |
+
|
| 129 |
+
To train the anatomy detector model, the data was split at the patient level, with 31:7:7 patients respectively for training, validation and testing. Simple data augmentation was used: random horizontal flips, brightness changes, crops, and slight rotations. The minority classes were not over-sampled or over-augmented, but proportionally sampled. The network was trained for 80 epochs with an initial learning rate of 0.001, which dropped to 0.0001 at 40 epochs. The batch size was 16 and the momentum 0.8. Early stopping was used, so the best validation model was saved. The trained model achieved the results shown in the Appendix.
|
| 130 |
+
|
| 131 |
+
We can assess how well the detector performs for our purpose by comparing it with a "perfect" detector (the ground truth annotations). A scatter plot of the probability for each bounding box against time can be used for this comparison. If the plots look similar and the mean probability values are similar, then the detector is good enough. The comparison is shown in Fig. 7.
|
| 132 |
+
|
| 133 |
+
[Figure 7 graphic: two scatter plots of per-bounding-box probability against time for video P18_step2.mp4, coloured by anatomy (abdomen, bladder, femur, head, pelvis, spine).]
|
| 134 |
+
|
| 135 |
+
Figure 7: The vertical axes show how well the spatial and temporal position of the anatomy fits with our seen data. The higher up on the vertical axes, the better the fit with our distribution from the PDF. Each point in the plot is a bounding box of an anatomy.
|
| 136 |
+
|
| 137 |
+
Table 1: Table of the mean and standard deviation of the probability score for our test set videos for the model vs ground truth.
|
| 138 |
+
|
| 139 |
+
<table><tr><td>$\mathbf{{Video}}$</td><td>Mean (GT)</td><td>Mean (Model)</td><td>$\mathbf{{Std}.{Dev}\left( {GT}\right) }$</td><td>$\mathbf{{Std}.{Dev}\left( {Model}\right) }$</td></tr><tr><td>P144_step1</td><td>0.453</td><td>0.450</td><td>0.279</td><td>0.236</td></tr><tr><td>P163_step1</td><td>0.406</td><td>0.274</td><td>0.310</td><td>0.196</td></tr><tr><td>P166_step1</td><td>0.466</td><td>0.515</td><td>0.284</td><td>0.279</td></tr><tr><td>P115_step1</td><td>0.308</td><td>0.372</td><td>0.298</td><td>0.297</td></tr><tr><td>P18_step1</td><td>0.322</td><td>0.301</td><td>0.313</td><td>0.331</td></tr></table>
|
| 140 |
+
|
| 141 |
+
### 4.2 Probability Model As a Metric for Quality
|
| 142 |
+
|
| 143 |
+
In this work, a high quality scan is one where the anatomies appear spatially and temporally at a similar position along the scan as in the previous scans. The videos have not been annotated with a quality score, since they are all typically of high quality as part of the data gathering process. Therefore we use other types of sweeps (see Fig. 1) and other fetal presentations as our "bad quality" scans. The anatomies should appear at different locations and timings for different fetal presentations/sweeps, so these will serve as our "bad quality" scans, and our method should differentiate between them.
|
| 144 |
+
|
| 145 |
+
Thus we run our method on 7 videos of step 1 with breech presentation, and 3 videos each of steps 2.1, 2.2 and 3.1 (Fig. 1), all in cephalic presentation.
|
| 146 |
+
|
| 147 |
+
We expect the step 1 cephalic sweeps to have the highest mean probability score, and the breech and other steps to have lower ones. The mean probability should be noticeably higher for step 1 cephalic than for any others (similar to one-class classification). A box plot of the probability scores is shown on the left of Fig. 8.
|
| 148 |
+
|
| 149 |
+
[Figure 8 graphic: box plots of the quality probability score from the PDF and of the detection class softmax output by the model, each versus type of sweep and fetal presentation.]
|
| 150 |
+
|
| 151 |
+
Figure 8: The left is a box plot using the ground truth bounding box annotations for the videos, and the right shows the quality score using the model's detected bounding boxes. Step-1-ceph means step 1 in Fig. 1 with a fetus in cephalic presentation.
|
| 152 |
+
|
| 153 |
+
As a comparison, we use the softmax output of the detection model for the class of the object. Intuitively, if the detector sees a head that is very similar to all the previous heads it has been trained on, then the model will confidently predict that structure as a head, and the softmax output for the head class for this bounding box will be almost 1. However, some head bounding boxes will have much lower values, arising from the structure looking less similar to the training samples. With different types of sweeps and different fetal presentations, the ultrasound view slices the anatomy differently, so the anatomies look visibly different; we therefore expect much lower class confidence for these other sweeps and presentations.
|
| 154 |
+
|
| 155 |
+
We can see from Fig. 8 that, using this class confidence method, there is no obvious difference between the mean values of the step 1 cephalic sweeps and any of the others.
|
| 156 |
+
|
| 157 |
+
## 5 CONCLUSION
|
| 158 |
+
|
| 159 |
+
We present a method that uses kernel density estimation to assess whether the spatial and temporal positions of anatomies in a specific scan follow the typical distribution. We show that this is an effective method for discriminating between different sweep steps and fetal presentations. We compare it to using the detector class softmax value for discriminating between the different sweep steps and presentations, and find that our kernel density estimation method works much better.
|
| 160 |
+
|
| 161 |
+
REFERENCES
|
| 162 |
+
|
| 163 |
+
Reem S Abu-Rustum and M Fouad Ziade. The 3-sweep approach: A standardized technique for fetal anatomic assessment in the limited resource setting. Journal of Fetal Medicine, 4(1):25-30, 2017.
|
| 164 |
+
|
| 165 |
+
Alfred Abuhamad, Yili Zhao, Sharon Abuhamad, Elena Sinkovskaya, Rashmi Rao, Camille Kanaan, and Lawrence Platt. Standardized six-step approach to the performance of the focused basic obstetric ultrasound examination. American Journal of Perinatology, 33:90-98, 8 2015. ISSN 10988785. doi: 10.1055/s-0035-1558828.
|
| 166 |
+
|
| 167 |
+
Aishy Amer and Eric Dubois. Fast and reliable structure-oriented video noise estimation. IEEE Transactions on Circuits and Systems for Video Technology, 15(1):113-118, 2005.
|
| 168 |
+
|
| 169 |
+
Jude Crino, Harris J Finberg, Faith Frieden, Jeffrey Kuller, Anthony Odibo, Alfred Robichaux, Marcela Bohm-Velez, Dolores H Pretorius, Sheila Sheth, Teresita L Angtuaco, et al. Aium practice guideline for the performance of obstetric ultrasound examinations. Journal of Ultrasound in Medicine, 32(6):1083-1101, 2013.
|
| 170 |
+
|
| 171 |
+
Gary L Darmstadt, Anne CC Lee, Simon Cousens, Lynn Sibley, Zulfiqar A Bhutta, France Donnay, Dave Osrin, Abhay Bang, Vishwajeet Kumar, Steven N Wall, et al. 60 million non-facility births: who can deliver in community settings to reduce intrapartum-related deaths? International Journal of Gynecology & Obstetrics, 107:S89-S112, 2009.
|
| 172 |
+
|
| 173 |
+
Kristen K DeStigter, G Eli Morey, Brian S Garra, Matthew R Rielly, Martin E Anderson, Michael G Kawooya, Alphonsus Matovu, and Frank R Miele. Low-cost teleradiology for rural ultrasound. In 2011 IEEE Global Humanitarian Technology Conference, pp. 290-295. IEEE, 2011.
|
| 174 |
+
|
| 175 |
+
Xiaojun Feng and Jan P Allebach. Measurement of ringing artifacts in jpeg images. In Digital Publishing, volume 6076, pp. 74-83. SPIE, 2006.
|
| 176 |
+
|
| 177 |
+
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
|
| 178 |
+
|
| 179 |
+
Alain Hore and Djemel Ziou. Image quality metrics: Psnr vs. ssim. In 2010 20th international conference on pattern recognition, pp. 2366-2369. IEEE, 2010.
|
| 180 |
+
|
| 181 |
+
Masaaki Komatsu, Akira Sakai, Reina Komatsu, Ryu Matsuoka, Suguru Yasutomi, Kanto Shozu, Ai Dozen, Hidenori Machino, Hirokazu Hidaka, Tatsuya Arakaki, et al. Detection of cardiac structural abnormalities in fetal ultrasound videos using deep learning. Applied Sciences, 11(1): 371, 2021.
|
| 182 |
+
|
| 183 |
+
Dingquan Li, Tingting Jiang, and Ming Jiang. Quality assessment of in-the-wild videos. In Proceedings of the 27th ACM International Conference on Multimedia, pp. 2351-2359, 2019.
|
| 184 |
+
|
| 185 |
+
Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In Proceedings of the IEEE international conference on computer vision, pp. 2980-2988, 2017.
|
| 186 |
+
|
| 187 |
+
Zehui Lin, Shengli Li, Dong Ni, Yimei Liao, Huaxuan Wen, Jie Du, Siping Chen, Tianfu Wang, and Baiying Lei. Multi-task learning for quality assessment of fetal head ultrasound images. Medical image analysis, 58:101548, 2019.
|
| 188 |
+
|
| 189 |
+
Thomas J Marini, Daniel C Oppenheimer, Timothy M Baran, Deborah J Rubens, Marika Toscano, Kathryn Drennan, Brian Garra, Frank R Miele, Gail Garra, Sylvia Jacobo Noone, et al. New ultrasound telediagnostic system for low-resource areas: Pilot results from peru. Journal of Ultrasound in Medicine, 40(3):583-595, 2021.
|
| 190 |
+
|
| 191 |
+
Duncan Smith-Rohrberg Maru, Ryan Schwarz, Jason Andrews, Sanjay Basu, Aditya Sharma, and Christopher Moore. Turning a blind eye: the mobilization of radiology services in resource-poor regions. Globalization and health, 6(1):1-8, 2010.
|
| 192 |
+
|
| 193 |
+
Pina Marziliano, Frederic Dufaux, Stefan Winkler, and Touradj Ebrahimi. A no-reference perceptual blur metric. In Proceedings. International conference on image processing, volume 3, pp. III-III. IEEE, 2002.
|
| 194 |
+
|
| 195 |
+
Anish Mittal, Anush Krishna Moorthy, and Alan Conrad Bovik. No-reference image quality assessment in the spatial domain. IEEE Transactions on image processing, 21(12):4695-4708, 2012.
|
| 196 |
+
|
| 197 |
+
Daniel Mollura and Matthew P Lungren. Radiology in global health, volume 1. Springer, 2014.
|
| 198 |
+
|
| 199 |
+
Daniel J Mollura, Jonathan Mazal, Kathryn L Everton, and RAD-AID Conference Writing Group. White paper report of the 2012 rad-aid conference on international radiology for developing countries: planning the implementation of global radiology. Journal of the American College of Radiology, 10(8):618-624, 2013.
|
| 200 |
+
|
| 201 |
+
Patrick Sitati Ngoya, Wilbroad Edward Muhogora, and Richard Denys Pitcher. Defining the diagnostic divide: an analysis of registered radiological equipment resources in a low-income african country. The Pan African Medical Journal, 25, 2016.
|
| 202 |
+
|
| 203 |
+
World Health Organization et al. Trends in maternal mortality 2000 to 2017: estimates by who, unicef, unfpa, world bank group and the united nations population division. 2019.
|
| 204 |
+
|
| 205 |
+
L. J. Salomon, Z. Alfirevic, F. Da Silva Costa, R. L. Deter, F. Figueras, T. Ghi, P. Glanc, A. Khalil, W. Lee, R. Napolitano, A. Papageorghiou, A. Sotiradis, J. Stirnemann, A. Toi, and G. Yeo. Isuog practice guidelines: ultrasound assessment of fetal biometry and growth. Ultrasound in Obstetrics and Gynecology, 53:715-723, 6 2019. ISSN 14690705. doi: 10.1002/uog.20272.
|
| 206 |
+
|
| 207 |
+
Laurent Julien Salomon, Z Alfirevic, V Berghella, C Bilardo, E Hernandez-Andrade, SL Johnsen, K Kalache, K-Y Leung, G Malinger, H Munoz, et al. Practice guidelines for performance of the routine mid-trimester fetal ultrasound scan. Ultrasound in Obstetrics & Gynecology, 37(1): 116-126, 2011.
|
| 208 |
+
|
| 209 |
+
A Self, Q Chen, JA Noble, AT Papageorghiou, Bapu Koundinya Desiraju, Sumeet Dhariwal, Alexander Gleed, Divyanshu Mishra, Ramachandran Thiruvengadam, Varun Chandramohan, Rachel Craik, Elizabeth Wilden, and Ashok Khurana. Developing clinical artificial intelligence for obstetric ultrasound to improve access in underserved regions: the computer-assisted low-cost point-of-care ultrasound (calopus) study protocol. Journal of Medical Internet Research, 2022. doi: 10.2196/37374. URL https://www.researchprotocols.org/2022/0/e0/.
|
| 210 |
+
|
| 211 |
+
Kim-Han Thung and Paramesran Raveendran. A survey of image quality measures. In 2009 international conference for technical postgraduates (TECHPOS), pp. 1-4. IEEE, 2009.
|
| 212 |
+
|
| 213 |
+
Marika Toscano, Thomas J. Marini, Kathryn Drennan, Timothy M. Baran, Jonah Kan, Brian Garra, Ann M. Dozier, Rafael L. Ortega, Rosemary A. Quinn, Yu T. Zhao, Miguel S. Egoavil, Lorena Tamayo, Claudia Carlotto, and Benjamin Castaneda. Testing telediagnostic obstetric ultrasound in peru: a new horizon in expanding access to prenatal ultrasound. BMC Pregnancy and Childbirth, 21, 12 2021. ISSN 14712393. doi: 10.1186/s12884-021-03720-w.
|
| 214 |
+
|
| 215 |
+
Zhengzhong Tu, Jessie Lin, Yilin Wang, Balu Adsumilli, and Alan C Bovik. Adaptive debanding filter. IEEE Signal Processing Letters, 27:1715-1719, 2020.
|
| 216 |
+
|
| 217 |
+
Berwin A Turlach et al. Bandwidth selection in kernel density estimation: a review. Technical report, Humboldt Universitaet Berlin, 1993.
|
| 218 |
+
|
| 219 |
+
Yilin Wang, Sang-Uok Kum, Chao Chen, and Anil Kokaram. A perceptual visibility metric for banding artifacts. In 2016 IEEE International Conference on Image Processing (ICIP), pp. 2067- 2071. IEEE, 2016.
|
| 220 |
+
|
| 221 |
+
Zhou Wang, Alan C Bovik, and Brian L Evan. Blind measurement of blocking artifacts in images. In Proceedings 2000 International Conference on Image Processing (Cat. No. 00CH37101), volume 3, pp. 981-984. Ieee, 2000.
|
| 222 |
+
|
| 223 |
+
Lingyun Wu, Jie-Zhi Cheng, Shengli Li, Baiying Lei, Tianfu Wang, and Dong Ni. Fuiqa: fetal ultrasound image quality assessment with deep convolutional networks. IEEE transactions on cybernetics, 47(5):1336-1349, 2017.
|
| 224 |
+
|
| 225 |
+
## A APPENDIX
|
| 226 |
+
|
| 227 |
+
## Pseudo-code of the Monte-Carlo sampling used to estimate the probability
|
| 228 |
+
|
| 229 |
+
---
|
| 230 |
+
|
| 231 |
+
# We want to find the probability mass of the PDF over all (x, y, t) where
# the PDF evaluates to a value lower than the density at our bounding box.
# SciPy's gaussian_kde is one concrete choice of pdf (an assumption here);
# `annotated_xyt`, `x_bbox`, `y_bbox` and `t_bbox` are placeholders.
import numpy as np
from scipy.stats import gaussian_kde

# pdf(points) evaluates the fitted density at points of shape (3, n).
pdf = gaussian_kde(annotated_xyt)  # annotated_xyt has shape (3, n_boxes)

# Evaluate the pdf for the new bounding box (x_bbox, y_bbox, t_bbox):
p_density_of_bbox = pdf(np.array([[x_bbox], [y_bbox], [t_bbox]]))

# Randomly sample from the pdf 10,000 times to get (x, y, t) data:
samples_xyt = pdf.resample(size=10000)  # shape = (3, 10000)

# The probability of a sample evaluating to below p_density_of_bbox equals
# the value of the integral, so we count the samples where
# pdf(sample) < p_density_of_bbox and divide by the number of samples:
lowsamples = pdf(samples_xyt) < p_density_of_bbox  # 10,000-long boolean vector
integral = lowsamples.sum() / 10000
|
| 274 |
+
|
| 275 |
+
---
|
| 276 |
+
|
| 277 |
+
[Figure 9 graphic: normalised confusion matrix over the six anatomy classes (abdomen, bladder, femur, head, pelvis, spine).]
|
| 278 |
+
|
| 279 |
+
Figure 9: Confusion matrix of the object detector model.
|
| 280 |
+
|
| 281 |
+
[Figure 10 graphic: bar chart of mean intersection over union per anatomy category.]
|
| 282 |
+
|
| 283 |
+
Figure 10: The red bars show the intersection over union of the bounding boxes for each anatomy where the no prediction of a box count as 0 intersection. The cyan bars show only the intersection over union of the boxes that have been predicted.
|
| 284 |
+
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/rmIJnScwO6/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,178 @@
|
| 1 |
+
§ A KERNEL DENSITY ESTIMATION BASED QUALITY METRIC FOR QUALITY ASSESSMENT OF OBSTETRIC ULTRASOUND VIDEO
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Simplified ultrasound scanning protocols (sweeps) have been developed to reduce the high skill required to perform a regular obstetric ultrasound examination. However, without automated quality assessment of the video, the utility of such protocols in clinical practice is limited. An automated quality assessment algorithm is proposed that applies an object detector to detect fetal anatomies within ultrasound videos. Kernel density estimation is applied to the bounding box annotations to estimate a probability density function of certain bounding box properties, such as the spatial and temporal position during the sweeps. This allows us to quantify how well the spatio-temporal positions of anatomies in a sweep agree with previously seen data, and to use this as a quality metric. The new quality metric is compared to other metrics of quality such as the confidence of the object detector model.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Obstetric ultrasound scanning is a vital part of antenatal care, as it allows us to accurately date a pregnancy, identify pregnancy risk factors, check the health of the fetus and much more Crino et al. (2013); Salomon et al. (2011; 2019). Although ultrasound is considered a cheap and portable imaging modality, there is still a shortage of obstetric ultrasound scans in limited-resource settings Ngoya et al. (2016); Mollura & Lungren (2014); Mollura et al. (2013). However, these are the regions where they are most needed, with ${94}\%$ of pregnancy and childbirth related deaths occurring in developing nations and ${80}\%$ of these occurring in areas of high birthrate and limited access to healthcare Organization et al. (2019). A major limitation on the availability of ultrasound is the lack of trained sonographers and healthcare workers Maru et al. (2010); Darmstadt et al. (2009) [World Health Organization and United Nations Children's Fund. WHO/UNICEF joint database on SDG 3.1.2 Skilled Attendance at Birth.]
|
| 14 |
+
|
| 15 |
+
The acquisition of obstetric ultrasound requires a high level of training and expertise, with the sonographer iteratively repositioning the probe whilst viewing the surrounding anatomy in real time to capture the most informative imaging plane. Additional difficulties include the movement of the fetal head and the variable fetal position in the uterus.
|
| 16 |
+
|
| 17 |
+
Thus, simplified scanning protocols have been published that don't rely on the intuition and experience of the sonographer, but where the probe scans along predefined linear sweeps across the body, outlined solely by external body landmarks of the pregnant woman Abu-Rustum & Ziade (2017); Abuhamad et al. (2015); DeStigter et al. (2011). These scanning protocols (sweeps) allow for acquisition without highly trained sonographers; the acquired videos can then be analysed by clinicians remotely with the advent of tele-medicine Toscano et al. (2021); Marini et al. (2021), or by medical image analysis algorithms.
|
| 18 |
+
|
| 19 |
+
In practice, with these new acquisition protocols we cannot assume the user will be able to differentiate between a usable scan and a low-quality, non-informative one. Ultrasound scans are cheaply retaken during the same appointment; retaking a scan after the patient has left, however, is a large burden on the patient. Whilst trained clinicians can tell when a video is sub-optimal or not "fit-for-purpose" and should be retaken, computers cannot yet mimic this human capability. Thus, developing automated algorithms for ultrasound video quality assessment is an important research challenge to aid the adoption of ultrasound where trained sonographers are not always available.
|
| 20 |
+
|
| 21 |
+
This paper provides a quality assessment method for sweep data. The method utilises bounding box annotations of ultrasound sweep data to train an anatomy detector model and, via kernel density estimation, quantifies how well the spatial and temporal positions of bounding boxes agree with the expected "typical" data.
|
| 22 |
+
|
| 23 |
+
§ 2 RELATED WORK
|
| 24 |
+
|
| 25 |
+
Image quality assessment in signal processing is a well-researched topic, with many metrics proposed such as PSNR and SSIM. These, however, are image-based (not video), full-reference methods (requiring an undistorted reference image) and mostly focus on compression losses Thung & Raveendran (2009); Hore & Ziou (2010).
|
| 26 |
+
|
| 27 |
+
No-reference video quality assessment (NRVQA) literature presents models designed to quantify a specific type of image distortion such as blur Marziliano et al. (2002), ringing Feng & Allebach (2006), blockiness Wang et al. (2000), banding Wang et al. (2016); Tu et al. (2020) or noise Amer & Dubois (2005). Current NRVQA metrics such as Li et al. (2019); Mittal et al. (2012) rely on natural scene statistics and models of the human visual system. They are therefore specific to natural images and are unlikely to generalise well to ultrasound videos, which contain many types of distortion beyond the normal range for natural images.
|
| 28 |
+
|
| 29 |
+
Quality assessment of ultrasound video must therefore be defined differently: it should be based on clinical usefulness. This area is less well researched. Wu et al. (2017) and Lin et al. (2019) propose quality assessment based on anatomy-specific criteria that can be separately assessed by individual networks; Wu et al. (2017) develop deep learning models to check, for example, whether the fetal stomach bubble appears full and salient with a clear boundary, while Lin et al. (2019) check criteria such as "the lateral sulcus must be clearly visible" and "the skull is in the middle of the ultrasound plane and larger than $2/3$ of overall fan shape area".
|
| 30 |
+
|
| 31 |
+
The most closely related work is by Komatsu et al. (2021), who developed an object detector network to detect cardiac structures in fetal ultrasound scans and used the detection results to generate an "abnormality score" for the heart. Their work uses specific imaging planes of the heart, the three-vessel trachea view and the four-chamber view, in which a full set of specific anatomical substructures is expected in every frame. They thus simply generate an abnormality score between 0 and 1 that depends linearly on the number of anatomies and the number of frames detected.
|
| 32 |
+
|
| 33 |
+
§ 3 METHOD
|
| 34 |
+
|
| 35 |
+
This work uses bounding box annotations provided by an expert sonographer for a simplified ultrasound scan. The method first estimates, from many videos, the distribution of the spatial and temporal positions of the anatomies for this type of scan. For a new video, it then evaluates how well the spatio-temporal positions of the anatomies fit this distribution.
|
| 36 |
+
|
| 37 |
+
§ 3.1 DATA
|
| 38 |
+
|
| 39 |
+
The data was gathered as part of the Computer Assisted Low-cost Point of Care Ultrasound (CALOPUS) Project (UK Research Ethics Committee 18/WS/0051). A simplified ultrasound scanning protocol proposed by Abuhamad et al. (2015) was refined by Self et al. (2022) to contain five steps, which are shown in Fig. 1.
|
| 40 |
+
|
| 41 |
+
The scans were taken by an experienced sonographer, who also annotated the scans frame by frame with a bounding box around each of 11 possible anatomical structures (listed in Fig. 2). Of the five sweeps outlined in Fig. 1, this work uses only the T-shaped sweep (step 1). Only scans with a gestational age between 18 and 23 weeks and a cephalic presentation are used.
|
| 42 |
+
|
| 43 |
+
This was to ensure the scans looked as homogeneous as possible. With different fetal presentations, we expect anatomies to appear at different locations and at different times along the scan. The anatomies will also be imaged from different views, which affects how each anatomy appears and thus the size and shape of its bounding box. The gestational age cutoffs were chosen for similar reasons, and also because the majority of scans lay within these bounds. This left us with 45 ultrasound videos.
|
| 44 |
+
|
| 45 |
+
< g r a p h i c s >
|
| 46 |
+
|
| 47 |
+
Figure 1: Left: all the different CALOPUS sweeps in Self et al. (2022) outlined in Section 3.1; right: the T-shaped sweep (step 1 on left) that we used in this work outlined on a participant.
|
| 48 |
+
|
| 49 |
+
Whilst we use only sweep 1 (see Fig. 1), this work could easily extend to any of the other sweeps. We chose the T-shaped sweep because it typically showed the most anatomies, and thus the most bounding boxes. The bounding box annotations covered 11 anatomies, distributed as shown in Fig. 2.
|
| 50 |
+
|
| 51 |
+
The CALOPUS project is a collaboration with the Translational Health Science and Technology Institute (THSTI) based in Delhi, India Self et al. (2022). THSTI has also scanned 72 patients using the same protocol, with cephalic presentation and a gestational age of 18-23 weeks; these videos cannot be viewed or accessed. However, the bounding box annotations of these scans, which include the bounding box co-ordinates, frame number and anatomy, can be utilised. This provides bounding box information for an extra 72 videos.
|
| 52 |
+
|
| 53 |
+
< g r a p h i c s >
|
| 54 |
+
|
| 55 |
+
Figure 2: Number of frames per anatomy in the UK dataset after applying exclusion criteria (see Section 3.1).
|
| 56 |
+
|
| 57 |
+
§ 3.2 OVERVIEW
|
| 58 |
+
|
| 59 |
+
A brief overview of our pipeline is: (1) using the bounding box annotated videos, we train an object detector that produces bounding boxes around fetal anatomies. (2) We estimate the probability distribution of the spatial position and timing of the bounding boxes for a cephalic T-shaped sweep using kernel density estimation. (3) We perform inference on new data with our trained anatomy detector model to get bounding boxes for each anatomy in the video. (4) We compare the spatial and temporal positions of the bounding boxes of the new video against the distribution estimated from the annotated videos. If the new bounding box properties fit the estimated distribution well, we propose the video is of high quality; if they do not, we propose it is abnormal or of low quality. A numerical value for how well a bounding box fits the distribution is obtained by calculating, via integration of the probability density function (PDF), the probability of the bounding box having a property of that value or a less likely one. This yields one probability score per bounding box (i.e. at frame level); averaging these probabilities over the entire video clip gives our quality metric.
|
| 60 |
+
|
| 61 |
+
§ 3.3 ANATOMY DETECTOR MODEL
|
| 62 |
+
|
| 63 |
+
Our anatomy detector model was based on the RetinaNet architecture with focal loss Lin et al. (2017). This is a one-stage detector that contains feature pyramid networks to propagate features at multiple scales down the network, with subnets for object classification and bounding box location regression. The backbone was a ResNet-50 architecture He et al. (2016) pretrained on the ImageNet-1k image classification dataset. The overall network was then pretrained on the COCO dataset, achieving a bounding-box mean average precision of 36.9 on COCO. As a one-stage detector, it supports real-time inference (30 frames per second).
|
| 64 |
+
|
| 65 |
+
The data imbalance seen in Fig. 2 caused large drops in performance: anatomies such as the cerebellum and stomach bubble were never detected, whilst the placenta and AF were over-predicted.
|
| 66 |
+
|
| 67 |
+
Thus we used a subset of only 6 anatomies: the spine, head, femur, pelvis, abdomen and bladder. There is still a considerable data imbalance, but it is not as problematic as with all 11. These anatomies were chosen because they appeared consistently in the majority of the scans and are not nested within one another - e.g. the stomach bubble lies within the abdomen and the cerebellum within the head.
|
| 68 |
+
|
| 69 |
+
§ 3.4 DEFINING BOUNDING BOX PROPERTIES
|
| 70 |
+
|
| 71 |
+
We chose the timing and spatial position of the bounding boxes as discriminators to determine whether a scan was 'typical' or not. Other properties could have been used: the size, the aspect ratio, and properties of the image content inside the bounding box such as texture and gradients. Size and aspect ratio were found to be uninformative, because the bounding box annotations were not tightly aligned to the exact edges of the anatomical structures but often loosely drawn around the approximate size and location of the anatomy - and thus uninformative of the actual shape and size of the anatomy visible.
|
| 72 |
+
|
| 73 |
+
§ 3.5 PROBABILITY DENSITY FUNCTION OF BOUNDING BOX PROPERTIES
|
| 74 |
+
|
| 75 |
+
We use kernel density estimation (KDE) with a Gaussian kernel to estimate the probability density function (PDF) of the timing and position of the bounding boxes. A plot of the PDF and histogram for the timing of the femur is shown in Fig. 3.
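As a concrete illustration, the following is a minimal sketch (ours, not the paper's code) of such a one-dimensional Gaussian KDE with `scipy.stats.gaussian_kde`, using a synthetic stand-in for the femur timing annotations:

```python
# Sketch: 1-D Gaussian KDE of femur bounding-box timings (cf. Fig. 3, left).
# `femur_times` is synthetic stand-in data; real values come from annotations.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
femur_times = rng.beta(2, 5, size=500)   # normalised sweep times in [0, 1]

kde = gaussian_kde(femur_times)          # Scott's rule bandwidth by default
t_grid = np.linspace(0.0, 1.0, 200)
pdf = kde(t_grid)                        # density values along the sweep
print(f"peak density {pdf.max():.2f} at t = {t_grid[pdf.argmax()]:.2f}")
```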
|
| 76 |
+
|
| 77 |
+
However, we cannot assume these bounding box properties are independent; rather, they are likely highly dependent. For example, due to the shape of the sweep, the timing and the position of the bounding boxes interact: as we pan side to side (the top of the "T" of the sweep), the anatomies first move right, then left, so the $\mathrm{x}$ co-ordinate and the time of the bounding boxes are highly correlated. We account for these dependencies by estimating a joint multi-dimensional probability distribution over all properties, instead of a separate probability density function for each property. A two-dimensional probability distribution of the $\mathrm{x}$ and $\mathrm{y}$ co-ordinates of the head bounding boxes is shown in Fig. 3.
|
| 78 |
+
|
| 79 |
+
We extend this to a three-dimensional joint probability distribution, with the x co-ordinate, y co-ordinate and time as axes. This is a three-dimensional function and so cannot be visualised easily, but plotting it on top of frames at different time points gives Fig. 4.
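A sketch of this joint KDE with synthetic stand-in data is below; `gaussian_kde` takes a `(dims, n)` array, and `bw_method` accepts 'scott', 'silverman' or a scalar factor (relevant to the bandwidth tuning discussed next):

```python
# Sketch: joint 3-D KDE over (x, y, t) for one anatomy's bounding boxes.
# `boxes` is synthetic; rows are x-centre (px), y-centre (px), normalised time.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
boxes = rng.normal([[300.0], [200.0], [0.5]],
                   [[40.0], [30.0], [0.1]], size=(3, 1000))

joint_kde = gaussian_kde(boxes, bw_method=0.5)   # scalar = manual bandwidth
f_new = joint_kde([[310.0], [195.0], [0.52]])    # f(x, y, t) for a new box
```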
|
| 80 |
+
|
| 81 |
+
The bandwidth of the kernel used in the kernel density estimation strongly influences the overall PDF - much more so than the shape of the kernel (Turlach et al., 1993). In this work we use a Gaussian kernel and fine-tune the bandwidth using contextual knowledge. Both Silverman's rule
|
| 82 |
+
|
| 83 |
+
< g r a p h i c s >
|
| 84 |
+
|
| 85 |
+
Figure 3: Left: histogram and PDF of the timing for the femur; right: the joint PDF for x, y coordinates of the head bounding boxes.
|
| 86 |
+
|
| 87 |
+
< g r a p h i c s >
|
| 88 |
+
|
| 89 |
+
Figure 4: The change over time of the contour of the PDF for the spatial position of the head, overlaid on the actual ultrasound frame. From left to right, the time point in the scan increases. Greater intensity of green reflects higher PDF values.
|
| 90 |
+
|
| 91 |
+
and Scott's rule gave very similar PDFs; however, the resulting PDF was very peaky and multi-modal, as can be seen in Fig. 5. The rightmost contour plot shows steep decreases in the PDF value within a 20-pixel radius of the mode, and many different peaks. It would be wrong to assume that a spine appearing 20 pixels away from this peak indicates a much lower quality scan. We believe there are two general regions within the fan-shaped area of the ultrasound where we expect to find a spine during the scan, so we adjust the bandwidth to reflect this. With too large a bandwidth, we lose spatial resolution. Similar intuition was applied in the temporal domain.
|
| 92 |
+
|
| 93 |
+
< g r a p h i c s >
|
| 94 |
+
|
| 95 |
+
Figure 5: Contour plots of the PDF for the spatial position of the spine bounding boxes. The PDF becomes peakier as the kernel bandwidth decreases from left to right. The rightmost PDF was produced using Scott's rule for bandwidth selection.
|
| 96 |
+
|
| 97 |
+
§ 3.6 INTEGRATION OF THE PROBABILITY DENSITY FUNCTION
|
| 98 |
+
|
| 99 |
+
To get a numerical value for how well the position and timing of a bounding box fit the estimated distribution, we can find the probability of the bounding box having this position and timing or a less likely one (equivalent to a p-value). Thus we integrate the PDF over all regions where the PDF evaluates to a lower probability density than at the given position and timing. That is,
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
p = {\iiint }_{V}f\left( {x,y,t}\right) {dxdydt}
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
where $f$ is the PDF; $x,y,t$ are the $\mathrm{x}$ co-ordinate, $\mathrm{y}$ co-ordinate and time of the bounding box respectively; and the integration region $V$ is the volume in the $x,y,t$ domain that encloses:
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
V = V\left( {x,y,t}\right) \;\text{ where: }\;f\left( {x,y,t}\right) < f\left( {{x}_{bbox},{y}_{bbox},{t}_{bbox}}\right)
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
Rather than integrating the probability density function directly, we use Monte Carlo sampling to estimate the value of this integral. The exact algorithm is given in pseudo-code in the Appendix.
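A minimal sketch of this Monte Carlo estimate (ours; the authors' exact algorithm is in their appendix pseudo-code): since $V$ is the region where the density falls below its value at the bounding box, the integral equals the probability that a sample $Z \sim f$ satisfies $f(Z) < f(x_{bbox}, y_{bbox}, t_{bbox})$, which can be estimated by resampling from the fitted KDE:

```python
# Sketch: Monte Carlo estimate of p = P_{Z~f}[ f(Z) < f(bbox) ] for a fitted
# scipy.stats.gaussian_kde over (x, y, t); `joint_kde` is from the sketch above.
import numpy as np

def mc_probability(kde, bbox_xyt, n_samples=1000, seed=0):
    f_at_bbox = kde(np.asarray(bbox_xyt, dtype=float).reshape(3, 1))[0]
    samples = kde.resample(n_samples, seed=seed)    # shape (3, n_samples)
    return float(np.mean(kde(samples) < f_at_bbox))

p = mc_probability(joint_kde, [310.0, 195.0, 0.52])
# Video-level quality score: mean of p over all boxes detected in the clip.
```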
|
| 112 |
+
|
| 113 |
+
This method relies on many random samples from the PDF to estimate the integral, and so becomes increasingly accurate and precise with more samples. Although drawing many samples takes time, it is much faster than the built-in integration routines in SciPy, as we do not require exact solutions. With this method, we can trade precision against the number of samples used. As each bounding box requires one integration, and there is often more than one bounding box per frame of a thirty-second video, we compromise between run time and precision. A plot of precision against the number of samples for the same integral is shown in Fig. 6.
|
| 114 |
+
|
| 115 |
+
< g r a p h i c s >
|
| 116 |
+
|
| 117 |
+
Figure 6: Box plot of a random integral (an arbitrary femur position and time) performed 30 times, but with increasing sample number for the Monte-Carlo method.
|
| 118 |
+
|
| 119 |
+
Whilst the time taken for each integration is directly proportional to the sample size, the error is inversely proportional to the square root of the sample size; we therefore compromised at 1,000 samples per integral.
|
| 120 |
+
|
| 121 |
+
To get from probabilities to a quality score, we take the mean probability over the video. Our quality score is therefore a measure of how similar the bounding boxes of the new scan are to those of all the annotated scans used to generate the PDF. We thus assume that a "high quality scan" is one which is similar to the scans we have annotated. Almost all of our annotated scans were seen as highly typical and easily annotated, so this assumption holds.
|
| 122 |
+
|
| 123 |
+
Our quality metric uses only bounding box properties, without explicitly requiring manual quality-score annotations by an expert. A large motivation for this method of quality assessment was to leverage our already-annotated dataset rather than requiring new annotations, something we imagine other groups applying computer vision to medical imaging can also do. Additionally, basing a quality score on the bounding boxes makes sense, as these boxes contain the clinically important features in the video, while the regions outside them are not the sonographer's focus.
|
| 124 |
+
|
| 125 |
+
§ 4 EXPERIMENTS AND RESULTS
|
| 126 |
+
|
| 127 |
+
§ 4.1 ANATOMY DETECTOR MODEL
|
| 128 |
+
|
| 129 |
+
To train the anatomy detector model, the data was split at the patient level, with 31:7:7 patients for training, validation and testing respectively. Simple data augmentation was used: random horizontal flips, brightness changes, crops, and slight rotations. The minority classes were not over-sampled or over-augmented, but sampled proportionally. The network was trained for 80 epochs with an initial learning rate of 0.001, dropped to 0.0001 at epoch 40. The batch size was 16 and the momentum 0.8. Early stopping was used, with the best model on the validation set saved. The trained model achieved the results shown in the Appendix.
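The paper does not name its training framework; the sketch below shows one way to realise this configuration (assuming torchvision ≥ 0.13), swapping the COCO classification head for one covering the 6 anatomy classes. Dataloaders and the augmentation pipeline are omitted.

```python
# Sketch: COCO-pretrained RetinaNet (ResNet-50 FPN) adapted to 6 anatomies.
import torch
import torchvision
from torchvision.models.detection.retinanet import RetinaNetClassificationHead

num_classes = 7  # 6 anatomies + background, following the torchvision convention
model = torchvision.models.detection.retinanet_resnet50_fpn(weights="COCO_V1")
model.head.classification_head = RetinaNetClassificationHead(
    in_channels=256,                                       # FPN output channels
    num_anchors=model.head.classification_head.num_anchors,
    num_classes=num_classes,
)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.8)
# Learning-rate drop to 1e-4 at epoch 40, as described in the text above.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40, gamma=0.1)
```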
|
| 130 |
+
|
| 131 |
+
We can assess how well the detector performs for our purpose by comparing it with a "perfect" detector (the ground truth annotations). A scatter plot of the probability for each bounding box against time can be used for this comparison: if the plots look similar and the mean probability values are similar, the detector is good enough. The comparison is shown in Fig. 7.
|
| 132 |
+
|
| 133 |
+
< g r a p h i c s >
|
| 134 |
+
|
| 135 |
+
Figure 7: The vertical axes show how well the spatial and temporal position of the anatomy fits with our seen data. The higher up on the vertical axes, the better the fit with our distribution from the PDF. Each point in the plot is a bounding box of an anatomy.
|
| 136 |
+
|
| 137 |
+
Table 1: Table of the mean and standard deviation of the probability score for our test set videos for the model vs ground truth.
|
| 138 |
+
|
| 139 |
+
Video         Mean (GT)   Mean (Model)   Std. Dev (GT)   Std. Dev (Model)
P144_step1    0.453       0.450          0.279           0.236
P163_step1    0.406       0.274          0.310           0.196
P166_step1    0.466       0.515          0.284           0.279
P115_step1    0.308       0.372          0.298           0.297
P18_step1     0.322       0.301          0.313           0.331
|
| 159 |
+
|
| 160 |
+
§ 4.2 PROBABILITY MODEL AS A METRIC FOR QUALITY
|
| 161 |
+
|
| 162 |
+
In this work, a high quality scan is one where the anatomies appear at similar spatial and temporal positions along the scan as in previous scans. The videos have not been annotated with a quality score, since they are all typically of high quality as part of the data gathering process. Therefore we use other types of sweeps (see Fig. 1) and other fetal presentations as our "bad quality" scans. The anatomies should appear at different locations and timings for different fetal presentations and sweeps; these thus serve as our "bad quality" scans, and our method should differentiate between them.
|
| 163 |
+
|
| 164 |
+
We therefore run our method on 7 videos of step 1 with breech presentation, and 3 videos each of steps 2.1, 2.2 and 3.1 (Fig. 1), all in cephalic presentation.
|
| 165 |
+
|
| 166 |
+
We expect the step 1 cephalic sweeps to have the highest mean probability score, and the breech and other steps to have lower ones. The mean probability should be noticeably higher in step 1 cephalic than in any other (similar to one-class classification). A box plot of the probability scores is shown on the left of Fig. 8.
|
| 167 |
+
|
| 168 |
+
< g r a p h i c s >
|
| 169 |
+
|
| 170 |
+
Figure 8: The left panel is a box plot using the ground truth bounding box annotations for the videos; the right panel shows the quality score using the model's detected bounding boxes. Step-1-ceph means step 1 in Fig. 1 with a fetus in cephalic presentation.
|
| 171 |
+
|
| 172 |
+
As a comparison, we use the softmax output of the detection model for the object class. Intuitively, if the detector sees a head that is very similar to the heads it was trained on, the model will confidently predict that structure as a head, and the softmax output for the head class for this bounding box will be close to 1. However, some head bounding boxes will have much lower values, arising from structures that look less similar to the training samples. With different types of sweeps and different fetal presentations, the ultrasound view slices the anatomy differently, so the anatomies look visibly different; we therefore expect much lower class confidence for these other sweeps and presentations.
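A minimal sketch of collecting these per-box class confidences with a torchvision-style detector (the `model` from the earlier sketch; `video_frames` is a hypothetical iterable of CHW float tensors):

```python
# Sketch: mean per-box class score over a clip as a confidence-based baseline.
# torchvision detection models return dicts with 'boxes', 'labels', 'scores'.
import torch

model.eval()
box_scores = []
with torch.no_grad():
    for frame in video_frames:
        (det,) = model([frame])              # one dict per input image
        box_scores.extend(det["scores"].tolist())
confidence_score = sum(box_scores) / max(len(box_scores), 1)
```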
|
| 173 |
+
|
| 174 |
+
We can see from Fig. 8 that, with this class-confidence method, there is no obvious difference between the mean values of the step 1 cephalic sweeps and any of the others.
|
| 175 |
+
|
| 176 |
+
§ 5 CONCLUSION
|
| 177 |
+
|
| 178 |
+
We present a method that uses kernel density estimation to assess whether the spatial and temporal positions of anatomies in a specific scan follow the typical distribution. We show that this is an effective method for discriminating between different sweep steps and fetal presentations. We compare it to using the detector's class softmax value to discriminate between the different sweep steps and presentations, and find that our kernel density estimation method works much better.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/y7XveyWYzIB/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,211 @@
|
| 1 |
+
# CONSIDERATIONS FOR DISTRIBUTION SHIFT ROBUSTNESS IN HEALTH
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
When analyzing robustness of predictive models under distribution shift, many works focus on tackling generalization in the presence of spurious correlations. In this case, one typically makes use of covariates or environment indicators to enforce independencies in learned models to guarantee generalization under various distribution shifts. In this work, we analyze a class of distribution shifts, where such independencies are not desirable, as there is a causal association between covariates and outcomes of interest. This case is common in the health space where covariates can be causally, as opposed to spuriously, related to outcomes of interest. We formalize this setting and relate it to common distribution shift settings from the literature. We theoretically show why standard supervised learning and invariant learning will not yield robust predictors in this case, while including the causal covariates into the prediction model can recover robustness. We demonstrate our theoretical findings in practice in experiments on synthetic data.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
In this work, we motivate how common assumptions in the domain generalization and invariant learning literature (Arjovsky et al., 2019; Ganin et al., 2016; Veitch et al., 2021) are violated in a set of broadly applicable problems, e.g. in healthcare. Invariant learning typically assumes a spurious or confounded association between outcome and covariates or auxiliary information. Hence, building a predictor that is invariant to the covariates or associated environment indicators can be shown to generalize better under distribution shift than standard empirical risk minimization. However, in many applications of interest for machine learning, e.g. in healthcare, there might not be only spurious associations between covariates and outcome, but also causal ones.
|
| 14 |
+
|
| 15 |
+
One illustrative example is body mass index (BMI), which is causally related to a host of conditions, e.g., left ventricular hypertrophy (LVH) (Lorell & Carabello, 2000). BMI is not "spurious" in the sense that it is merely associated with LVH, but can directly cause changes in left ventricular mass, which in turn can lead to LVH (Himeno et al., 1996). However, a shift in the prevalence of elevated BMI can shift the association between a signal - e.g., an electrocardiogram (ECG) - that is influenced by both BMI and LVH.
|
| 16 |
+
|
| 17 |
+
We formalize such a causal setting and show that it leads to a regression in the performance of machine learning models under distribution shift that cannot be mitigated with common invariant learning methods. Our contributions are the following:
|
| 18 |
+
|
| 19 |
+
- We motivate and formalize a class of problems where covariates, such as demographics or other auxiliary data, causally influence the outcome of interest, and explain the difference to the commonly considered confounded or spurious associations.
|
| 20 |
+
|
| 21 |
+
- For this class of problems, we show theoretically and on simulated data how distribution shifts along such causally influencing covariates cause discrepancies in performance that cannot be mitigated with invariant learning methods designed for the commonly considered confounded setting.
|
| 22 |
+
|
| 23 |
+
< g r a p h i c s >
|
| 24 |
+
|
| 25 |
+
Figure 1: Causal graphs considered in this work. (a) The "confounded graph" describes a spurious/confounded association between $Y$ and $V$ , and has been considered in the ML literature (Heinze-Deml & Meinshausen, 2021; Veitch et al., 2021; Makar et al., 2022; Puli et al., 2022). This setting requires that the marginal $P\left( Y\right)$ remains invariant across distribution shifts. (b) In the "direct causal graph" (this work), the shortcut variable $V$ is a direct cause of the outcome $Y$ , shifting the marginal $P\left( Y\right)$ when the intervention variable ${I}_{V}$ shifts the marginal $P\left( V\right)$ .
|
| 26 |
+
|
| 27 |
+
## 2 THEORY
|
| 28 |
+
|
| 29 |
+
Consider predicting outcome $Y$ (e.g., health status) from features $X$ (e.g., an ECG recording) in the presence of an auxiliary covariate $V$ (e.g., age or BMI). One source of model brittleness can be "shortcuts", or features that are predictive in the training distribution, but not predictive under relevant distribution shifts (Arjovsky et al., 2019; Geirhos et al., 2020). To cope with such instability, one may try to remove the shortcuts during learning. One common approach to shortcut removal assumes a non-causal association between the potential shortcut $V$ and the outcome to be predicted $Y$ (Heinze-Deml & Meinshausen,2021; Veitch et al.,2021; Makar et al.,2022; Puli et al.,2022), which, for instance, can arise due to a confounding covariate between $V$ and $Y$ . The goal is then to seek a predictor using only $X$ that performs well across a range of distributions. For example, Makar et al. (2022) develop a risk invariant predictor across a family of related probability distributions motivated by the graph depicted in Figure 1a, that can be simplified for our analysis to
|
| 30 |
+
|
| 31 |
+
$$
|
| 32 |
+
{\mathcal{P}}_{\text{spur }} = \left\{ {{P}_{s}\left( {X \mid Y, V}\right) {P}_{s}\left( Y\right) {P}_{t}\left( {V \mid Y}\right) }\right\} , \tag{1}
|
| 33 |
+
$$
|
| 34 |
+
|
| 35 |
+
for a source distribution denoted by $s$ and shifted target distributions indexed by $t$ . All target distributions in this family thus factor as ${\bar{P}}_{t}\left( {X, Y, V}\right) = {P}_{s}\left( {X \mid Y, V}\right) {P}_{s}\left( Y\right) {P}_{t}\left( {V \mid Y}\right)$ , i.e., they vary from the source distribution only in $P\left( {V \mid Y}\right)$ , while $P\left( {X \mid Y, V}\right)$ and $P\left( Y\right)$ remain unchanged. Notably, assuming that $P\left( Y\right)$ remains the same across all potential shifted distributions can be an unrealistically strong assumption in applications like healthcare. For example, we would expect the prevalence of heart diseases ($Y$) to be higher in an older population ($V$).
|
| 36 |
+
|
| 37 |
+
Instead, in this work, we consider the scenario where the shortcut variable (e.g., age or BMI) is a direct causal parent of the outcome we wish to predict (e.g., myocardial infarction in an ECG), as depicted in Figure 1b. In this scenario, we wish to form good predictions for the family of distributions
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
{\mathcal{P}}_{\text{cause }} = \left\{ {{P}_{s}\left( {X \mid Y, V}\right) {P}_{s}\left( {Y \mid V}\right) {P}_{t}\left( V\right) }\right\} . \tag{2}
|
| 41 |
+
$$
|
| 42 |
+
|
| 43 |
+
That is, we allow the marginal distribution $P\left( V\right)$ to change, while holding the conditional distributions $P\left( {Y \mid V}\right)$ and $P\left( {X \mid Y, V}\right)$ fixed.
|
| 44 |
+
|
| 45 |
+
Using the notion of so-called stable sets from Pfister et al. (2021), one can derive from the graph in Figure 1b which sets of predictors are associated with the same conditional expectation across different interventions on $V$ by checking which sets of covariates block all paths between ${I}_{V}$ and $Y$ . Hence, in our model, to block the path ${I}_{V} \rightarrow V \rightarrow Y$ , the covariate $V$ must be included in the set of predictors. The predictive distribution derived from the source that conditions on $X$ and $V$ is then invariant across the entire family, i.e., ${P}_{s}\left( {Y \mid X, V}\right) = {P}_{t}\left( {Y \mid X, V}\right)$ , whereas the predictive distribution that only conditions on $X$ is not invariant, i.e., ${P}_{s}\left( {Y \mid X}\right) \neq {P}_{t}\left( {Y \mid X}\right)$ in general. We formalize this in the following proposition (proof in Appendix A.1).
|
| 46 |
+
|
| 47 |
+
Proposition 1 For any element ${P}_{t} \in {\mathcal{P}}_{\text{cause }}$ as defined in Eq. (2), it holds that ${P}_{t}\left( {Y \mid X, V}\right) =$ ${P}_{s}\left( {Y \mid X, V}\right)$ . Furthermore, for such a ${P}_{t}$ , in general ${P}_{t}\left( {Y \mid X}\right) \neq {P}_{s}\left( {Y \mid X}\right)$ .
|
| 48 |
+
|
| 49 |
+
< g r a p h i c s >
|
| 50 |
+
|
| 51 |
+
Figure 2: Extended versions of causal graphs in Figure 1. (a) Graph considered in Makar et al. (2022) explicitly including invariant latent variable ${X}^{ * }$ (still leading to shifts in ${\mathcal{P}}_{\text{spur }}$ ). Here, ${X}^{ * }$ is a latent variable that describes variation in $X$ caused by $Y$ . Recovering the predictive signal $e\left( X\right) = {X}^{ * }$ yields a predictor that is invariant across interventions ${I}_{V}$ , but one that does not use information from $V$ about $Y$ . (b) Direct graph explicitly including ${X}^{ * }$ (still leading to shifts in ${\mathcal{P}}_{\text{cause }}$ ). In this setting, recovering ${X}^{ * }$ does not guarantee an invariant predictor across shifts due to ${I}_{V}$ , as demonstrated in Section 3.
|
| 52 |
+
|
| 53 |
+
Hence, empirical risk minimization (ERM) using $\{ V, X\}$ as predictors would yield a robust model with respect to ${\mathcal{P}}_{\text{cause }}$ while ERM using $\{ X\}$ only would not.
|
| 54 |
+
|
| 55 |
+
Remark 1 Even an invariant representation that is invariant to $V$ and encodes only the information in $X$ related to $Y$ (e.g., ${X}^{ * }$ in Makar et al. (2022)) would suffer from a degradation in performance across the family ${\mathcal{P}}_{\text{cause }}$ .
|
| 56 |
+
|
| 57 |
+
We illustrate these findings with a simulation study in the next section.
|
| 58 |
+
|
| 59 |
+
## 3 EXPERIMENTS
|
| 60 |
+
|
| 61 |
+
To illustrate our findings above regarding the consequences of shifts in causally influencing covariates $V$ , we set up a simulation from a simple example. To allow for analysis of the behaviour of invariant methods as well, we roll out the graphs from Figure 1, similarly to Makar et al. (2022), by explicitly including an unobserved variable ${X}^{ * }$ (see Figure 2). Here, ${X}^{ * } = e\left( X\right)$ for some function $e$ is assumed to be a latent variable that contains only the information in $X$ that is related to $Y$ , and as such is invariant to $V$ when conditioned on $Y$ .
|
| 62 |
+
|
| 63 |
+
In this extended setting (Figure 2), we define our data generating process as
|
| 64 |
+
|
| 65 |
+
$$
|
| 66 |
+
p\left( {V = 1}\right) = p \tag{3}
|
| 67 |
+
$$
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
P\left( {Y = 1 \mid V = 0}\right) = {.2} \tag{4}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
P\left( {Y = 1 \mid V = 1}\right) = {.9} \tag{5}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
P\left( {X \mid Y = y, V = v}\right) = \mathcal{N}\left( {{\mu }_{y, v},1}\right) \tag{6}
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
P\left( {{X}^{ * } \mid Y = y}\right) = \mathcal{N}\left( {{\mu }_{y,0},1}\right) . \tag{7}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
where ${\mu }_{0,0} = - 2/3,{\mu }_{1,0} = 2/3,{\mu }_{0,1} = - {.8}$ , and ${\mu }_{1,1} = {.8}$ . The process is thus constructed to allow analysis of shifts within the family ${\mathcal{P}}_{\text{cause }}$ , where $P\left( V\right)$ can be shifted by varying $p$ while $P\left( {Y \mid V}\right)$ and $P\left( {X \mid Y, V}\right)$ remain the same.
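A minimal sampler for this data generating process (our sketch, not the authors' code):

```python
# Sketch: sample (x, y, v) from the DGP in Eqs. (3)-(7) with marginal P(V=1)=p.
import numpy as np

MU = {(0, 0): -2 / 3, (1, 0): 2 / 3, (0, 1): -0.8, (1, 1): 0.8}

def sample(p, n, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.binomial(1, p, size=n)                      # Eq. (3)
    y = rng.binomial(1, np.where(v == 1, 0.9, 0.2))     # Eqs. (4)-(5)
    mu = np.array([MU[(yi, vi)] for yi, vi in zip(y, v)])
    x = rng.normal(mu, 1.0)                             # Eq. (6)
    return x, y, v
```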
|
| 86 |
+
|
| 87 |
+
In Figure 3, we compare the performance of the predictors ${P}_{s}\left( {Y \mid X}\right) ,{P}_{s}\left( {Y \mid {X}^{ * }}\right)$ and ${P}_{s}\left( {Y \mid X, V}\right)$ on distributions where the marginal $P\left( V\right)$ has been shifted, with source distribution marginal ${P}_{s}\left( {V = 1}\right) = {.1}$ . Predictors are obtained in closed form, and performance metrics are calculated on a sample of size 20,000 drawn according to the respective target distribution. As discussed in Section 2, this shift induces a shift in $P\left( Y\right) , P\left( {Y \mid X}\right)$ , and $P\left( {Y \mid {X}^{ * }}\right)$ , causing a degradation in the performance of the predictors ${P}_{s}\left( {Y \mid X}\right)$ and ${P}_{s}\left( {Y \mid {X}^{ * }}\right)$ , but not of ${P}_{s}\left( {Y \mid X, V}\right)$ ; i.e., conditioning on $V$ restores performance across distribution shifts.
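The closed-form predictors follow from Bayes' rule with the Gaussian likelihoods above; the sketch below (ours, reusing `MU` and `sample` from the previous sketch) reproduces the qualitative behaviour of Figure 3a:

```python
# Sketch: closed-form P_s(Y=1|X,V) and P_s(Y=1|X) with source P_s(V=1)=0.1,
# evaluated on samples drawn from shifted target marginals P_t(V=1)=p_t.
import numpy as np
from scipy.stats import norm

P_S_V1 = 0.1

def p_y1_given_xv(x, v):
    mu1, mu0 = np.where(v == 1, 0.8, 2 / 3), np.where(v == 1, -0.8, -2 / 3)
    py1 = np.where(v == 1, 0.9, 0.2)
    num = norm.pdf(x, mu1, 1.0) * py1
    return num / (num + norm.pdf(x, mu0, 1.0) * (1.0 - py1))

def p_y1_given_x(x):
    num = den = 0.0
    for v, pv, py1 in ((0, 1 - P_S_V1, 0.2), (1, P_S_V1, 0.9)):
        lik1, lik0 = norm.pdf(x, MU[(1, v)], 1.0), norm.pdf(x, MU[(0, v)], 1.0)
        num += lik1 * py1 * pv
        den += (lik1 * py1 + lik0 * (1.0 - py1)) * pv
    return num / den

for p_t in (0.1, 0.5, 0.9):
    x, y, v = sample(p_t, 20_000, seed=1)
    print(p_t,
          np.mean((p_y1_given_xv(x, v) > 0.5) == y),   # stays stable
          np.mean((p_y1_given_x(x) > 0.5) == y))       # degrades as p_t grows
```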
|
| 88 |
+
|
| 89 |
+
< g r a p h i c s >
|
| 90 |
+
|
| 91 |
+
Figure 3: Simulation study described in Section 3. Panel (a) compares the predictive accuracy of the models ${P}_{s}\left( {Y \mid X}\right) ,{P}_{s}\left( {Y \mid {X}^{ * }}\right)$ and ${P}_{s}\left( {Y \mid X, V}\right)$ as a function of the target marginal ${P}_{t}\left( V\right) = p$ , with source ${P}_{s}\left( V\right) = {.1}$ . Only ${P}_{s}\left( {Y \mid X, V}\right)$ does not degrade in accuracy as ${P}_{t}\left( V\right)$ shifts further away from ${P}_{s}\left( V\right)$ . Panel (b) compares the predictive AUC of the same three models. Note that the predictor using ${X}^{ * },{P}_{s}\left( {Y \mid {X}^{ * }}\right)$ does achieve invariance in AUC across shifts, but not accuracy (or likelihood), and cannot make use of information about $Y$ from $V$ . Panel (c) depicts the four likelihood models (one for each combination of $Y$ and $V$ ) - note that $V = 1$ further separates the conditional distributions, making separation easier (hence the AUC goes up in Panel (b) as $p$ increases). Panel (d) depicts a $P\left( {{X}^{ * } \mid Y, V}\right)$ , which is the same across values of $V$ (unlike $P\left( {X \mid Y, V}\right)$ in Panel (c)). The overall key takeaway is the robustness of ${P}_{s}\left( {Y \mid X, V}\right)$ , i.e. the model conditioning on both $V$ and $X$ versus the lack of robustness in models conditioning only on $X$ or ${X}^{ * }$ in terms of predictive accuracy in Panel (a) (even when their AUC is robust across shifts, Panel (b)).
|
| 92 |
+
|
| 93 |
+
As a side note, the AUC performance of $P\left( {Y \mid {X}^{ * }}\right)$ does not degrade, though a general risk (like accuracy or log-likelihood) does. This is because shifts in $P\left( V\right)$ only influence $P\left( {Y \mid {X}^{ * }}\right)$ through the prevalence ${P}_{t}\left( Y\right) = \mathop{\sum }\limits_{{v}^{\prime }}{P}_{s}\left( {Y \mid V = {v}^{\prime }}\right) {P}_{t}\left( {V = {v}^{\prime }}\right)$ , not through ${X}^{ * }$ . The AUC metric is invariant to prevalence, but general metrics like accuracy, log-likelihood, and calibration are sensitive to it. Also note that the difference in AUC performance between $P\left( {Y \mid {X}^{ * }}\right)$ and $P\left( {Y \mid X}\right)$ is due to the construction of the class conditional distributions depicted in Figures 3c and 3d.
|
| 94 |
+
|
| 95 |
+
Overall, the degradation (or robustness) of performance across shifts of the family ${\mathcal{P}}_{\text{cause }}$ is the main illustrative point to be observed in Figure 3 and this section.
|
| 96 |
+
|
| 97 |
+
## 4 DISCUSSION
|
| 98 |
+
|
| 99 |
+
Our theoretical findings show that for settings in which auxiliary covariates $V$ causally influence the outcome of interest $Y$ (rather than just being spuriously correlated to them), $P\left( {Y \mid X, V}\right)$ remains stable across shifts in $P\left( V\right)$ , while $P\left( {Y \mid X}\right)$ in general does not. As such, regressing $Y$ only on $X$ to learn $P\left( {Y \mid X}\right)$ (or invariant derivations thereof) will lead to predictions that are not robust to such shifts, while regressing $Y$ on $X, V$ recovers the desired robustness, as we empirically demonstrate on simulated data. The canonical next step of our analysis is to demonstrate our findings on real world data in healthcare applications, where we would expect shifts of the family ${\mathcal{P}}_{\text{cause }}$ to occur in practice.
|
| 100 |
+
|
| 101 |
+
## REFERENCES
|
| 102 |
+
|
| 103 |
+
Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, and David Lopez-Paz. Invariant risk minimization. arXiv preprint arXiv:1907.02893, 2019.
|
| 104 |
+
|
| 105 |
+
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The journal of machine learning research, 17(1):2096-2030, 2016.
|
| 106 |
+
|
| 107 |
+
Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard Zemel, Wieland Brendel, Matthias Bethge, and Felix A Wichmann. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11):665-673, 2020.
|
| 108 |
+
|
| 109 |
+
Christina Heinze-Deml and Nicolai Meinshausen. Conditional variance penalties and domain shift robustness. Machine Learning, 110:303-348, 2021. doi: 10.1007/s10994-020-05924-1.
|
| 110 |
+
|
| 111 |
+
Etsuro Himeno, Kenji Nishino, Yoshiyuki Nakashima, Akio Kuroiwa, and Masaharu Ikeda. Weight reduction regresses left ventricular mass regardless of blood pressure level in obese subjects. American heart journal, 131(2):313-319, 1996.
|
| 112 |
+
|
| 113 |
+
Beverly H Lorell and Blase A Carabello. Left ventricular hypertrophy: pathogenesis, detection, and prognosis. Circulation, 102(4):470-479, 2000.
|
| 114 |
+
|
| 115 |
+
Maggie Makar, Ben Packer, Dan Moldovan, Davis Blalock, Yoni Halpern, and Alexander D'Amour. Causally motivated shortcut removal using auxiliary labels. In Gustau Camps-Valls, Francisco J. R. Ruiz, and Isabel Valera (eds.), Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, volume 151 of Proceedings of Machine Learning Research, pp. 739- 766. PMLR, 28-30 Mar 2022.
|
| 116 |
+
|
| 117 |
+
Niklas Pfister, Evan G. Williams, Jonas Peters, Ruedi Aebersold, and Peter Bühlmann. Stabilizing variable selection and regression. The Annals of Applied Statistics, 15(3):1220-1246, 2021. doi: 10.1214/21-AOAS1487.
|
| 118 |
+
|
| 119 |
+
Aahlad Manas Puli, Lily H Zhang, Eric Karl Oermann, and Rajesh Ranganath. Out-of-distribution generalization in the presence of nuisance-induced spurious correlations. In International Conference on Learning Representations, 2022.
|
| 120 |
+
|
| 121 |
+
Victor Veitch, Alexander D'Amour, Steve Yadlowsky, and Jacob Eisenstein. Counterfactual invariance to spurious correlations in text classification. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021.
|
| 122 |
+
|
| 123 |
+
## A APPENDIX
|
| 124 |
+
|
| 125 |
+
### A.1 Proof of Proposition 1
|
| 126 |
+
|
| 127 |
+
First, we show that for any element ${P}_{t}$ of this family, it holds that ${P}_{t}\left( {Y \mid X, V}\right) = {P}_{s}\left( {Y \mid X, V}\right)$ . Remember that by the definition of ${\mathcal{P}}_{\text{cause }}$ , we have that ${P}_{t}\left( {X \mid Y, V}\right) = {P}_{s}\left( {X \mid Y, V}\right)$ and ${P}_{t}\left( {Y \mid V}\right) = {P}_{s}\left( {Y \mid V}\right)$ . From this, it quickly follows that
|
| 128 |
+
|
| 129 |
+
$$
{P}_{t}\left( {X \mid V}\right) = \int {P}_{t}\left( {X \mid Y, V}\right) {P}_{t}\left( {Y \mid V}\right) {dY} \tag{8}
$$

$$
= \int {P}_{s}\left( {X \mid Y, V}\right) {P}_{s}\left( {Y \mid V}\right) {dY} \tag{9}
$$

$$
= {P}_{s}\left( {X \mid V}\right) \tag{10}
$$
|
| 144 |
+
|
| 145 |
+
Then, using basic probability calculus, it follows
|
| 146 |
+
|
| 147 |
+
$$
{P}_{t}\left( {Y \mid X, V}\right) = \frac{{P}_{t}\left( {Y, X, V}\right) }{{P}_{t}\left( {X, V}\right) } \tag{11}
$$

$$
= \frac{{P}_{s}\left( {X \mid Y, V}\right) {P}_{t}\left( {Y \mid V}\right) {P}_{t}\left( V\right) }{{P}_{t}\left( {X \mid V}\right) {P}_{t}\left( V\right) } \tag{12}
$$

$$
= \frac{{P}_{s}\left( {X \mid Y, V}\right) {P}_{s}\left( {Y \mid V}\right) }{{P}_{t}\left( {X \mid V}\right) } \tag{13}
$$

$$
= \frac{{P}_{s}\left( {X \mid Y, V}\right) {P}_{s}\left( {Y \mid V}\right) }{{P}_{s}\left( {X \mid V}\right) } \tag{14}
$$

$$
= \frac{{P}_{s}\left( {X \mid Y, V}\right) {P}_{s}\left( {Y \mid V}\right) {P}_{s}\left( V\right) }{{P}_{s}\left( {X \mid V}\right) {P}_{s}\left( V\right) } \tag{15}
$$

$$
= {P}_{s}\left( {Y \mid X, V}\right) \tag{16}
$$
|
| 182 |
+
|
| 183 |
+
Next, we show that for such an element ${P}_{t}$ of this family, in general ${P}_{t}\left( {Y \mid X}\right) \neq {P}_{s}\left( {Y \mid X}\right)$ . Using the above result, this is indeed easy to see when marginalising over $V$ :
|
| 184 |
+
|
| 185 |
+
$$
{P}_{t}\left( {Y \mid X}\right) = \int {P}_{t}\left( {Y \mid X, V}\right) {P}_{t}\left( {V \mid X}\right) {dV} \tag{17}
$$

$$
= \int {P}_{s}\left( {Y \mid X, V}\right) {P}_{t}\left( {V \mid X}\right) {dV} \tag{18}
$$

$$
= \int {P}_{s}\left( {Y \mid X, V}\right) \frac{{P}_{t}\left( {X \mid V}\right) {P}_{t}\left( V\right) }{{P}_{t}\left( X\right) }{dV} \tag{19}
$$

$$
= \int {P}_{s}\left( {Y \mid X, V}\right) {P}_{s}\left( {X \mid V}\right) \frac{{P}_{t}\left( V\right) }{{P}_{t}\left( X\right) }{dV} \tag{20}
$$

whereas, by the same steps, ${P}_{s}\left( {Y \mid X}\right) = \int {P}_{s}\left( {Y \mid X, V}\right) {P}_{s}\left( {X \mid V}\right) \frac{{P}_{s}\left( V\right) }{{P}_{s}\left( X\right) }{dV}$ . (21)
|
| 210 |
+
|
| 211 |
+
Since in general $\frac{{P}_{t}\left( V\right) }{{P}_{t}\left( X\right) } \neq \frac{{P}_{s}\left( V\right) }{{P}_{s}\left( X\right) }$ , this also implies that in general ${P}_{t}\left( {Y \mid X}\right) \neq {P}_{s}\left( {Y \mid X}\right)$ .
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/y7XveyWYzIB/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,99 @@
|
| 1 |
+
§ CONSIDERATIONS FOR DISTRIBUTION SHIFT ROBUSTNESS IN HEALTH
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
When analyzing robustness of predictive models under distribution shift, many works focus on tackling generalization in the presence of spurious correlations. In this case, one typically makes use of covariates or environment indicators to enforce independencies in learned models to guarantee generalization under various distribution shifts. In this work, we analyze a class of distribution shifts, where such independencies are not desirable, as there is a causal association between covariates and outcomes of interest. This case is common in the health space where covariates can be causally, as opposed to spuriously, related to outcomes of interest. We formalize this setting and relate it to common distribution shift settings from the literature. We theoretically show why standard supervised learning and invariant learning will not yield robust predictors in this case, while including the causal covariates into the prediction model can recover robustness. We demonstrate our theoretical findings in practice in experiments on synthetic data.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
In this work, we motivate how common assumptions in the domain generalization and invariant learning literature (Arjovsky et al., 2019; Ganin et al., 2016; Veitch et al., 2021) are violated in a set of broadly applicable problems, e.g. in healthcare. Invariant learning typically assumes a spurious or confounded association between outcome and covariates or auxiliary information. Hence, building a predictor that is invariant to the covariates or associated environment indicators can be shown to generalize better under distribution shift than standard empirical risk minimization. However, in many applications of interest for machine learning, e.g. in healthcare, there might not be only spurious associations between covariates and outcome, but also causal ones.
|
| 14 |
+
|
| 15 |
+
One illustrative example is body mass index (BMI), which is causally related to a host of conditions, e.g., left ventricular hypertrophy (LVH) (Lorell & Carabello, 2000). BMI is not "spurious" in the sense that it is merely associated with LVH, but can directly cause changes in left ventricular mass, which in turn can lead to LVH (Himeno et al., 1996). However, a shift in the prevalence of elevated BMI can shift the association between a signal - e.g., an electrocardiogram (ECG) - that is influenced by both BMI and LVH.
|
| 16 |
+
|
| 17 |
+
We formalize such a causal setting and show that it leads to a regression in the performance of machine learning models under distribution shift that cannot be mitigated with common invariant learning methods. Our contributions are the following:
|
| 18 |
+
|
| 19 |
+
* We motivate and formalize a class of problems where covariates, such as demographics or other auxiliary data, causally influence the outcome of interest, and explain the difference to the commonly considered confounded or spurious associations.
|
| 20 |
+
|
| 21 |
+
* For this class of problems, we show theoretically and on simulated data how distribution shifts along such causally influencing covariates cause discrepancies in performance that cannot be mitigated with invariant learning methods designed for the commonly considered confounded setting.
|
| 22 |
+
|
| 23 |
+
< g r a p h i c s >
|
| 24 |
+
|
| 25 |
+
Figure 1: Causal graphs considered in this work. (a) The "confounded graph" describes a spurious/confounded association between $Y$ and $V$ , and has been considered in the ML literature (Heinze-Deml & Meinshausen, 2021; Veitch et al., 2021; Makar et al., 2022; Puli et al., 2022). This setting requires that the marginal $P\left( Y\right)$ remains invariant across distribution shifts. (b) In the "direct causal graph" (this work), the shortcut variable $V$ is a direct cause of the outcome $Y$ , shifting the marginal $P\left( Y\right)$ when the intervention variable ${I}_{V}$ shifts the marginal $P\left( V\right)$ .
|
| 26 |
+
|
| 27 |
+
§ 2 THEORY
|
| 28 |
+
|
| 29 |
+
Consider predicting outcome $Y$ (e.g., health status) from features $X$ (e.g., an ECG recording) in the presence of an auxiliary covariate $V$ (e.g., age or BMI). One source of model brittleness can be "shortcuts", or features that are predictive in the training distribution, but not predictive under relevant distribution shifts (Arjovsky et al., 2019; Geirhos et al., 2020). To cope with such instability, one may try to remove the shortcuts during learning. One common approach to shortcut removal assumes a non-causal association between the potential shortcut $V$ and the outcome to be predicted $Y$ (Heinze-Deml & Meinshausen,2021; Veitch et al.,2021; Makar et al.,2022; Puli et al.,2022), which, for instance, can arise due to a confounding covariate between $V$ and $Y$ . The goal is then to seek a predictor using only $X$ that performs well across a range of distributions. For example, Makar et al. (2022) develop a risk invariant predictor across a family of related probability distributions motivated by the graph depicted in Figure 1a, that can be simplified for our analysis to
|
| 30 |
+
|
| 31 |
+
$$
|
| 32 |
+
{\mathcal{P}}_{\text{ spur }} = \left\{ {{P}_{s}\left( {X \mid Y,V}\right) {P}_{s}\left( Y\right) {P}_{t}\left( {V \mid Y}\right) }\right\} , \tag{1}
|
| 33 |
+
$$
|
| 34 |
+
|
| 35 |
+
for a source distribution denoted by $s$ and shifted target distributions indexed by $t$ . All target distributions in this family thus factor as ${\bar{P}}_{t}\left( {X,Y,V}\right) = {P}_{s}\left( {X \mid Y,V}\right) {P}_{s}\left( Y\right) {P}_{t}\left( {V \mid Y}\right)$ , i.e., they vary from the source distribution only in $P\left( {V \mid Y}\right)$ , while $P\left( {X \mid Y,V}\right)$ and $P\left( Y\right)$ remain unchanged. Notably, assuming that $P\left( Y\right)$ remains the same across all potential shifted distributions can be an unrealistically strong assumption in applications like healthcare. For example, we would expect the prevalence of heart diseases ($Y$) to be higher in an older population ($V$).
|
| 36 |
+
|
| 37 |
+
Instead, in this work, we consider the scenario where the shortcut variable (e.g., age or BMI) is a direct causal parent of the outcome we wish to predict (e.g., myocardial infarction in an ECG), as depicted in Figure 1b. In this scenario, we wish to form good predictions for the family of distributions
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
{\mathcal{P}}_{\text{ cause }} = \left\{ {{P}_{s}\left( {X \mid Y,V}\right) {P}_{s}\left( {Y \mid V}\right) {P}_{t}\left( V\right) }\right\} . \tag{2}
|
| 41 |
+
$$
|
| 42 |
+
|
| 43 |
+
That is, we allow the marginal distribution $P\left( V\right)$ to change, while holding the conditional distributions $P\left( {Y \mid V}\right)$ and $P\left( {X \mid Y,V}\right)$ fixed.
|
| 44 |
+
|
| 45 |
+
Using the notion of so-called stable sets from Pfister et al. (2021), one can derive from the graph in Figure 1b which sets of predictors are associated with the same conditional expectation across different interventions on $V$ by checking which sets of covariates block all paths between ${I}_{V}$ and $Y$ . Hence, in our model, to block the path ${I}_{V} \rightarrow V \rightarrow Y$ , the covariate $V$ must be included in the set of predictors. The predictive distribution derived from the source that conditions on $X$ and $V$ is then invariant across the entire family, i.e., ${P}_{s}\left( {Y \mid X,V}\right) = {P}_{t}\left( {Y \mid X,V}\right)$ , whereas the predictive distribution that only conditions on $X$ is not invariant, i.e., ${P}_{s}\left( {Y \mid X}\right) \neq {P}_{t}\left( {Y \mid X}\right)$ in general. We formalize this in the following proposition (proof in Appendix A.1).
|
| 46 |
+
|
| 47 |
+
Proposition 1 For any element ${P}_{t} \in {\mathcal{P}}_{\text{ cause }}$ as defined in Eq. (2), it holds that ${P}_{t}\left( {Y \mid X,V}\right) =$ ${P}_{s}\left( {Y \mid X,V}\right)$ . Furthermore, for such a ${P}_{t}$ , in general ${P}_{t}\left( {Y \mid X}\right) \neq {P}_{s}\left( {Y \mid X}\right)$ .
|
| 48 |
+
|
| 49 |
+
< g r a p h i c s >
|
| 50 |
+
|
| 51 |
+
Figure 2: Extended versions of causal graphs in Figure 1. (a) Graph considered in Makar et al. (2022) explicitly including invariant latent variable ${X}^{ * }$ (still leading to shifts in ${\mathcal{P}}_{\text{ spur }}$ ). Here, ${X}^{ * }$ is a latent variable that describes variation in $X$ caused by $Y$ . Recovering the predictive signal $e\left( X\right) = {X}^{ * }$ yields a predictor that is invariant across interventions ${I}_{V}$ , but one that does not use information from $V$ about $Y$ . (b) Direct graph explicitly including ${X}^{ * }$ (still leading to shifts in ${\mathcal{P}}_{\text{ cause }}$ ). In this setting, recovering ${X}^{ * }$ does not guarantee an invariant predictor across shifts due to ${I}_{V}$ , as demonstrated in Section 3.
|
| 52 |
+
|
| 53 |
+
Hence, empirical risk minimization (ERM) using $\{ V,X\}$ as predictors would yield a robust model with respect to ${\mathcal{P}}_{\text{ cause }}$ while ERM using $\{ X\}$ only would not.
|
| 54 |
+
|
| 55 |
+
Remark 1 Even an invariant representation that is invariant to $V$ and encodes only the information in $X$ related to $Y$ (e.g., ${X}^{ * }$ in Makar et al. (2022)) would suffer from a degradation in performance across the family ${\mathcal{P}}_{\text{ cause }}$ .
|
| 56 |
+
|
| 57 |
+
We illustrate these findings with a simulation study in the next section.
|
| 58 |
+
|
| 59 |
+
§ 3 EXPERIMENTS
|
| 60 |
+
|
| 61 |
+
To illustrate our findings above regarding the consequences of shifts in causally influencing covariates $V$ , we set up a simulation from a simple example. To allow for analysis of the behaviour of invariant methods as well, we roll out the graphs from Figure 1, similarly to Makar et al. (2022), by explicitly including an unobserved variable ${X}^{ * }$ (see Figure 2). Here, ${X}^{ * } = e\left( X\right)$ for some function $e$ is assumed to be a latent variable that contains only the information in $X$ that is related to $Y$ , and as such is invariant to $V$ when conditioned on $Y$ .
|
| 62 |
+
|
| 63 |
+
In this extended setting (Figure 2), we define our data generating process as
|
| 64 |
+
|
| 65 |
+
$$
|
| 66 |
+
P\left( {V = 1}\right) = p \tag{3}
|
| 67 |
+
$$
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
P\left( {Y = 1 \mid V = 0}\right) = {.2} \tag{4}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
P\left( {Y = 1 \mid V = 1}\right) = {.9} \tag{5}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
P\left( {X \mid Y = y,V = v}\right) = \mathcal{N}\left( {{\mu }_{y,v},1}\right) \tag{6}
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
P\left( {{X}^{ * } \mid Y = y}\right) = \mathcal{N}\left( {{\mu }_{y,0},1}\right) . \tag{7}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
where ${\mu }_{0,0} = - 2/3,{\mu }_{1,0} = 2/3,{\mu }_{0,1} = - {.8}$ , and ${\mu }_{1,1} = {.8}$ . As such, the process is constructed to allow analysis of shifts within the family ${\mathcal{P}}_{\text{ cause }}$ : $P\left( V\right)$ can be shifted by varying $p$ , while $P\left( {Y \mid V}\right)$ and $P\left( {X \mid Y,V}\right)$ remain the same.
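As a minimal simulation sketch of Eqs. (3)-(7) (assuming NumPy; function and variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

# Means mu_{y, v} as given above.
MU = {(0, 0): -2 / 3, (1, 0): 2 / 3, (0, 1): -0.8, (1, 1): 0.8}

def sample(n, p):
    """Draw n samples from the data generating process, with P(V=1) = p."""
    v = rng.binomial(1, p, size=n)                    # Eq. (3)
    y = rng.binomial(1, np.where(v == 1, 0.9, 0.2))   # Eqs. (4)-(5)
    mu = np.array([MU[(yi, vi)] for yi, vi in zip(y, v)])
    x = rng.normal(mu, 1.0)                           # Eq. (6)
    x_star = rng.normal(np.where(y == 1, MU[(1, 0)], MU[(0, 0)]), 1.0)  # Eq. (7)
    return v, y, x, x_star
```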
|
| 86 |
+
|
| 87 |
+
In Figure 3, we compare the performance of the predictors ${P}_{s}\left( {Y \mid X}\right) ,{P}_{s}\left( {Y \mid {X}^{ * }}\right)$ and ${P}_{s}\left( {Y \mid X,V}\right)$ on distributions where the marginal $P\left( V\right)$ has been shifted, with the source distribution marginal set to ${P}_{s}\left( {V = 1}\right) = {.1}$ . Predictors are obtained in closed form, and performance metrics are calculated on a sample of size 20,000 drawn according to the source distribution. As discussed in Section 2, this shift induces a shift in $P\left( Y\right) ,P\left( {Y \mid X}\right)$ , and $P\left( {Y \mid {X}^{ * }}\right)$ , causing a degradation in the performance of the predictors ${P}_{s}\left( {Y \mid X}\right)$ and ${P}_{s}\left( {Y \mid {X}^{ * }}\right)$ but not of ${P}_{s}\left( {Y \mid X,V}\right)$ ; i.e., conditioning on $V$ preserves performance across distribution shifts.
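The closed-form predictors follow directly from Bayes' rule under the Gaussian likelihoods above (a sketch, assuming SciPy; by Proposition 1, `p_y_given_xv` is invariant to shifts in $P(V)$ while `p_y_given_x` is tied to the source marginal):

```python
from scipy.stats import norm

MU = {(0, 0): -2 / 3, (1, 0): 2 / 3, (0, 1): -0.8, (1, 1): 0.8}  # as above
P_Y1_V = {0: 0.2, 1: 0.9}                                        # Eqs. (4)-(5)

def p_y_given_xv(x, v):
    """Posterior P(Y=1 | X=x, V=v); the same under source and target."""
    num = P_Y1_V[v] * norm.pdf(x, MU[(1, v)], 1.0)
    den = num + (1 - P_Y1_V[v]) * norm.pdf(x, MU[(0, v)], 1.0)
    return num / den

def p_y_given_x(x, p_v1):
    """Posterior P(Y=1 | X=x) under the marginal P(V=1) = p_v1; fitting it
    at the source p_v1 = .1 and deploying it under a shifted target marginal
    reproduces the degradation shown in Figure 3."""
    pv = {0: 1.0 - p_v1, 1: p_v1}
    num = sum(pv[v] * P_Y1_V[v] * norm.pdf(x, MU[(1, v)], 1.0) for v in (0, 1))
    den = num + sum(pv[v] * (1 - P_Y1_V[v]) * norm.pdf(x, MU[(0, v)], 1.0)
                    for v in (0, 1))
    return num / den
```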
|
| 88 |
+
|
| 89 |
+
<graphics>
|
| 90 |
+
|
| 91 |
+
Figure 3: Simulation study described in Section 3. Panel (a) compares the predictive accuracy of the models ${P}_{s}\left( {Y \mid X}\right) ,{P}_{s}\left( {Y \mid {X}^{ * }}\right)$ and ${P}_{s}\left( {Y \mid X,V}\right)$ as a function of the target marginal ${P}_{t}\left( V\right) = p$ , with source ${P}_{s}\left( V\right) = {.1}$ . Only ${P}_{s}\left( {Y \mid X,V}\right)$ does not degrade in accuracy as ${P}_{t}\left( V\right)$ shifts further away from ${P}_{s}\left( V\right)$ . Panel (b) compares the predictive AUC of the same three models. Note that the predictor using ${X}^{ * }$ , ${P}_{s}\left( {Y \mid {X}^{ * }}\right)$ , does achieve invariance in AUC across shifts, but not in accuracy (or likelihood), and cannot make use of the information about $Y$ in $V$ . Panel (c) depicts the four likelihood models (one for each combination of $Y$ and $V$ ); note that $V = 1$ further separates the conditional distributions, making separation easier (hence the AUC goes up in Panel (b) as $p$ increases). Panel (d) depicts $P\left( {{X}^{ * } \mid Y,V}\right)$ , which is the same across values of $V$ (unlike $P\left( {X \mid Y,V}\right)$ in Panel (c)). The key takeaway is the robustness of ${P}_{s}\left( {Y \mid X,V}\right)$ , the model conditioning on both $V$ and $X$ , versus the lack of robustness, in terms of predictive accuracy in Panel (a), of the models conditioning only on $X$ or ${X}^{ * }$ (even when their AUC is robust across shifts, Panel (b)).
|
| 92 |
+
|
| 93 |
+
As a side note, the AUC performance of $P\left( {Y \mid {X}^{ * }}\right)$ does not degrade, even though a general risk (such as accuracy or log-likelihood) does. This is because shifts in $P\left( V\right)$ influence $P\left( {Y \mid {X}^{ * }}\right)$ only through the prevalence ${P}_{t}\left( Y\right) = \mathop{\sum }\limits_{{v}^{\prime }}{P}_{s}\left( {Y \mid V = {v}^{\prime }}\right) {P}_{t}\left( {V = {v}^{\prime }}\right)$ , not through ${X}^{ * }$ . The AUC metric is invariant to prevalence, whereas general metrics such as accuracy, log-likelihood, and calibration are sensitive to it. Also note that the difference in AUC performance between $P\left( {Y \mid {X}^{ * }}\right)$ and $P\left( {Y \mid X}\right)$ is due to the construction of the class conditional distributions depicted in Figures 3c and 3d.
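For concreteness, under the parameters of our simulation the target prevalence is linear in $p$ :

$$
{P}_{t}\left( {Y = 1}\right) = {.2}\left( {1 - p}\right) + {.9}\,p = {.2} + {.7}\,p,
$$

so shifting the marginal from ${P}_{s}\left( {V = 1}\right) = {.1}$ to ${P}_{t}\left( {V = 1}\right) = {.9}$ moves the prevalence from .27 to .83; this is what degrades accuracy and likelihood while leaving the ranking, and hence the AUC, of ${P}_{s}\left( {Y \mid {X}^{ * }}\right)$ untouched.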
|
| 94 |
+
|
| 95 |
+
Overall, the degradation (or robustness) of performance across shifts of the family ${\mathcal{P}}_{\text{ cause }}$ is the main illustrative point to be observed in Figure 3 and this section.
|
| 96 |
+
|
| 97 |
+
§ 4 DISCUSSION
|
| 98 |
+
|
| 99 |
+
Our theoretical findings show that in settings where auxiliary covariates $V$ causally influence the outcome of interest $Y$ (rather than merely being spuriously correlated with it), $P\left( {Y \mid X,V}\right)$ remains stable across shifts in $P\left( V\right)$ , while $P\left( {Y \mid X}\right)$ in general does not. As such, regressing $Y$ only on $X$ to learn $P\left( {Y \mid X}\right)$ (or invariant derivations thereof) leads to predictions that are not robust to such shifts, while regressing $Y$ on $X,V$ recovers the desired robustness, as we empirically demonstrate on simulated data. The canonical next step of our analysis is to demonstrate our findings on real-world healthcare data, where we would expect shifts of the family ${\mathcal{P}}_{\text{ cause }}$ to occur in practice.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/zZcCINENgm/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,253 @@
|
| 1 |
+
# Why Deep Surgical Models Fail?: Revisiting Surgical Action Triplet Recognition Through the Lens of Robustness
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
Surgical action triplet recognition provides a better understanding of the surgical scene. This task is of high relevance as it provides the surgeon with context-aware support and safety. The current go-to strategy for improving performance is the development of new network mechanisms. However, the performance of current state-of-the-art techniques is substantially lower than on other surgical tasks. Why is this happening? This is the question that we address in this work. We present the first study to understand the failure of existing deep learning models through the lens of robustness and explainability. First, we study existing models under weak and strong $\delta$ -perturbations via an adversarial optimisation scheme. We then analyse the failure modes via feature based explanations. Our study reveals that the key to improving performance and increasing reliability lies in the core and spurious attributes. Our work opens the door to more trustworthy and reliable deep learning models in surgical data science.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Minimally Invasive Surgery (MIS) has become the gold standard for several procedures (e.g., cholecystectomy & appendectomy), as it provides better clinical outcomes, including reduced blood loss, minimised trauma to the body, less post-operative pain and faster recovery (Velanovich, 2000; Wilson et al., 2014). Despite the benefits of MIS, surgeons lose direct vision of and touch on the target, which decreases surgeon-patient transparency and imposes technical challenges on the surgeon. These challenges have motivated the development of automatic techniques for the analysis of the surgical workflow (Aviles et al., 2016; Maier-Hein et al., 2017; Vercauteren et al., 2019; Nwoye et al., 2022). In particular, this work aims to address a key research problem in surgical data science: surgical recognition, which provides the surgeon with context-aware support and safety.
|
| 14 |
+
|
| 15 |
+
The majority of existing surgical recognition techniques focus on phase recognition (Blum et al., 2010; Dergachyova et al., 2016; Lo et al., 2003; Twinanda et al., 2016; Zisimopoulos et al., 2018). However, phase recognition is limited by its own definition, as it does not provide complete information on the surgical scene. We therefore consider the setting of surgical action triplet recognition, which offers a better understanding of the surgical scene. The goal of triplet recognition is to recognise the ⟨instrument, verb, target⟩ composition and its inherent relations. A visualisation of this task is displayed in Figure 1.
|
| 16 |
+
|
| 17 |
+
The concept behind triplet recognition was recognised in the early works of Neumuth et al. (2006) and Katić et al. (2014). However, it was not until the recent introduction of richer datasets, such as CholecT40 (Nwoye et al., 2020), that the community started developing new techniques under more realistic conditions. Nwoye et al. (2020) proposed a framework called Tripnet, which was the first work to formally address surgical actions as triplets; the authors proposed a 3D interaction space for learning the triplets. In more recent work, Nwoye et al. (2022) introduced two new models. The first is a direct extension of Tripnet called Attention Tripnet, whose novelty lies in a spatial attention mechanism. In the same work, the authors introduced another model called Rendezvous (RDV), a transformer-inspired neural network.
|
| 18 |
+
|
| 19 |
+
A commonality of existing surgical action triplet recognition techniques is the development of new mechanisms for improving the network architecture. However, despite these improvements, the performance of existing techniques is substantially lower than on other tasks in the surgical sciences, for example force estimation and navigation-assisted surgery. In this work, we depart from existing techniques and tackle the surgical action triplet recognition problem through the lens of robustness and explainability.
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
|
| 23 |
+
Figure 1: Visualisation of the surgical action triplet recognition task. We consider the task of predicting the instrument (I), verb (V, action), and target (T, anatomical part).
|
| 24 |
+
|
| 25 |
+
In the machine learning community there is substantially increasing interest in understanding the lack of reliability of deep learning models (e.g., Ribeiro et al. (2016); Koh & Liang (2017); Sundararajan et al. (2017); Liu et al. (2019); Yeh et al. (2019); Hsieh et al. (2020)). To understand the lack of reliability of existing deep networks, a popular family of techniques is the so-called feature based explanations via robustness analysis (Simonyan et al., 2013; Zeiler & Fergus, 2014; Plumb et al., 2018; Wong et al., 2021; Singla & Feizi, 2021). Whilst existing techniques have been extensively evaluated on natural image tasks, there are no existing works addressing complex problems such as action triplet recognition.
|
| 26 |
+
|
| 27 |
+
Contributions. In this work, we introduce, to the best of our knowledge, the first study to understand the failure of existing deep learning models for surgical action triplet recognition. To do this, we analyse the failures of existing state-of-the-art solutions through the lens of robustness. Specifically, we push the existing SOTA techniques for surgical action triplet recognition to the limit under weak and strong $\delta$ -perturbations. We then extensively analyse the failure modes via the evaluation criterion Robustness-S, which analyses the behaviour of the models through feature based explanations. Our study reveals the impact of core and spurious features on more robust models. Our study opens the door to more trustworthy and reliable deep learning models in surgical data science, which is imperative for MIS.
|
| 28 |
+
|
| 29 |
+
## 2 METHODOLOGY
|
| 30 |
+
|
| 31 |
+
We describe two key parts of the surgical action triplet recognition task: i) our experimental settings along with assumptions, and ii) how we evaluate robustness via adversarial optimisation. The workflow of our work is displayed in Figure 2.
|
| 32 |
+
|
| 33 |
+
### 2.1 SURGICAL ACTION TRIPLET RECOGNITION
|
| 34 |
+
|
| 35 |
+
In the surgical action triplet recognition problem, the main task is to recognise the triplet IVT, which is the composition of three components during surgery: instrument (I), verb (V), and target (T), in a given RGB image $\mathbf{x} \in {\mathbb{R}}^{H \times W \times 3}$ .
|
| 36 |
+
|
| 37 |
+
Formally, we consider a given set of samples ${\left\{ \left( {\mathbf{x}}_{n},{y}_{n}\right) \right\} }_{n = 1}^{N}$ with provided labels $\mathcal{Y} = \left\{ 0,1,\ldots,{C}_{IVT} - 1\right\}$ for ${C}_{IVT} = {100}$ classes. We then seek to learn a function $f : \mathcal{X} \mapsto \mathcal{Y}$ such that $f$ provides good estimates on unseen data. That is, a given parameterised deep learning model takes the image $\mathbf{x}$ as input and outputs a set of class-wise presence probabilities, in our case for 100 classes under the ${IVT}$ composition, ${\mathbf{Y}}_{IVT} \in {\mathbb{R}}^{100}$ , which we call the logits of ${IVT}$ . Since there are three individual components under the triplet composition, within the training network we also consider the individual components ${d}^{ * } \in \{ I, V, T\}$ , each with class count ${C}_{{d}^{ * }}$ (i.e. ${C}_{I} = 6,{C}_{V} = {10},{C}_{T} = {15}$ ). The logits of each component, ${\mathbf{Y}}_{{d}^{ * }} \in {\mathbb{R}}^{{C}_{{d}^{ * }}}$ , are computed and used within the network.
|
| 38 |
+
|
| 39 |
+
In current state-of-the-art (SOTA) deep models (Nwoye et al., 2020; 2022), there is a communal structure divided into three parts: i) the feature extraction backbone; ii) the individual component encoder; and iii) the triplet aggregation decoder that associates the components and outputs the logits of the IVT triplet. More precisely, the individual component encoder first concentrates on the instrument component to output Class Activation Maps (CAMs $\in {\mathbb{R}}^{H \times W \times {C}_{d}}$ ) and the logits ${\mathbf{Y}}_{\mathbf{I}}$ of the instrument classes; the CAMs are then associated with the verb and target components separately to obtain their logits ( ${\mathbf{Y}}_{\mathbf{V}}$ and ${\mathbf{Y}}_{\mathbf{T}}$ ), addressing the instrument-centric nature of the triplet. A schematic sketch of this structure is given below.
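As a rough schematic of this communal structure (a sketch only, assuming PyTorch; the module names, feature dimension, and pooling choices are our own assumptions, not the authors' implementation):

```python
import torch
import torch.nn as nn

class TripletModel(nn.Module):
    """Schematic three-part structure: backbone -> component encoder -> triplet decoder."""
    def __init__(self, backbone, feat=512, c_i=6, c_v=10, c_t=15, c_ivt=100):
        super().__init__()
        self.backbone = backbone                              # i) feature extraction
        self.cam_head = nn.Conv2d(feat, c_i, kernel_size=1)   # instrument CAMs
        self.verb_head = nn.Linear(feat + c_i, c_v)           # ii) verb logits from CAMs + features
        self.target_head = nn.Linear(feat + c_i, c_t)
        self.decoder = nn.Linear(c_i + c_v + c_t, c_ivt)      # iii) triplet aggregation

    def forward(self, x):
        h = self.backbone(x)                      # (B, feat, H', W') feature maps
        cams = self.cam_head(h)                   # (B, C_I, H', W') instrument CAMs
        y_i = cams.mean(dim=(2, 3))               # instrument logits via global pooling
        pooled = torch.cat([h.mean(dim=(2, 3)), y_i], dim=1)
        y_v = self.verb_head(pooled)              # verb logits Y_V
        y_t = self.target_head(pooled)            # target logits Y_T
        y_ivt = self.decoder(torch.cat([y_i, y_v, y_t], dim=1))  # triplet logits Y_IVT
        return y_i, y_v, y_t, y_ivt
```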
|
| 40 |
+
|
| 41 |
+
The current SOTA techniques for surgical action triplet recognition focus on improving components ii) & iii). However, the performance is still substantially lower than on other surgical tasks. Our intuition is that such behaviour is due to the inherently complex and ambiguous conditions in MIS, which reflects the inability of the models to learn meaningful features. Our work is then based on the following modelling hypothesis.
|
| 42 |
+
|
| 43 |
+
## Hypothesis 2.1: Deep Features are key for Robustness
|
| 44 |
+
|
| 45 |
+
Deep surgical techniques for triplet recognition lack reliability due to ineffective features. Therefore, the key to boosting performance, improving trustworthiness and reliability, and understanding the failure of deep models is in the deep features.
|
| 46 |
+
|
| 47 |
+
Following this hypothesis, we address the question of why deep triplet recognition models fail. We do so by analysing feature based explanations via robustness. To this end, we consider the three current SOTA techniques for our study: Tripnet (Nwoye et al., 2020), Attention Tripnet, and Rendezvous (Nwoye et al., 2022). Moreover, we extensively investigate the repercussion of deep features using four widely used backbones: ResNet-18, ResNet-50 (He et al., 2015), DenseNet-121 (Huang et al., 2016), and Swin Transformer (Liu et al., 2021). In the next section, we detail our strategy for analysing robustness.
|
| 48 |
+
|
| 49 |
+

|
| 50 |
+
|
| 51 |
+
Figure 2: Illustration of the main network structure, and how the adversarial perturbation is added to measure robustness.
|
| 52 |
+
|
| 53 |
+
### 2.2 FEATURE BASED EXPLANATIONS VIA ROBUSTNESS
|
| 54 |
+
|
| 55 |
+
|
| 56 |
+
|
| 57 |
+
Our models for triplet recognition output the logits of the triplet composition, which we then use to select the predicted label for classification. We define the model from an image $\mathbf{x}$ to the predicted label $\widehat{y}$ as $f : \mathcal{X} \rightarrow \mathcal{Y}$ , where $\mathcal{X} \subset {\mathbb{R}}^{H \times W \times 3}$ and $\mathcal{Y} = \left\{ {0,1,2,\ldots ,{C}_{IVT} - 1}\right\}$ .
|
| 58 |
+
|
| 59 |
+
For each class $m \in \mathcal{Y}$ and within each given sample, we seek to recognise core and spurious attributes (Singla & Feizi, 2021; Singla et al., 2021), whose definitions are as follows.
|
| 60 |
+
|
| 61 |
+
- Core attributes: features that form part of the object we are detecting.
|
| 62 |
+
|
| 63 |
+
- Spurious attributes: features that are not part of the object but co-occur with it.
|
| 64 |
+
|
| 65 |
+
How do we evaluate robustness? The body of literature has reported several alternatives for addressing the robustness of deep networks. Our work is motivated by recent findings on perturbation based methods, where even a small perturbation can significantly affect the performance of neural nets. In particular, we consider the setting of adversarial training (Allen-Zhu & Li, 2022; Olah et al., 2018; Engstrom et al., 2019) to robustify a given deep model.
|
| 66 |
+
|
| 67 |
+
The idea behind adversarial training for robustness is to enforce a given model to maintain its performance under a given perturbation $\delta$ . This can be cast as an optimisation problem over the network parameters $\theta$ :
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
{\theta }^{ * } = \arg \mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {\mathbf{x}, y}\right) \sim \mathcal{D}}\left\lbrack {{\mathcal{L}}_{\theta }\left( {\mathbf{x}, y}\right) }\right\rbrack . \tag{1}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
where $\mathbb{E}\left\lbrack {{\mathcal{L}}_{\theta }\left( \cdot \right) }\right\rbrack$ denotes the expected loss with respect to the parameters $\theta$ .
|
| 74 |
+
|
| 75 |
+
One seeks a model that is resistant to any $\delta$ -perturbation. In this work, we follow a generalised adversarial training model, which reads:
|
| 76 |
+
|
| 77 |
+
Definition 2.1: Adversarial training under $\delta$
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
{\theta }^{ * } = \arg \mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {\mathbf{x}, y}\right) \sim \mathcal{D}}\left\lbrack {\mathop{\max }\limits_{{\delta \in \Delta }}{\mathcal{L}}_{\theta }\left( {\mathbf{x} + \delta , y}\right) }\right\rbrack .
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
The goal is for the model not to change its performance even under the worst (strongest) $\delta$ .
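As a concrete illustration of this min-max objective (a sketch only; the single-step FGSM inner maximisation is a cheap stand-in, not the training protocol used in this paper):

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimiser, x, y, eps):
    """One step of the min-max objective in Definition 2.1: the inner max is
    approximated with a single FGSM step, the outer min with a gradient step."""
    x_req = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_req), y)
    grad, = torch.autograd.grad(loss, x_req)
    x_adv = (x_req + eps * grad.sign()).detach()   # approximate worst-case x + delta
    optimiser.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)    # loss under the perturbation
    adv_loss.backward()
    optimiser.step()
    return adv_loss.item()
```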
|
| 84 |
+
|
| 85 |
+
The machine learning literature has explored different forms of the generalised model in Definition 2.1, for example a sparsity regulariser for adversarial training as in Xu et al. (2018). In this work, we adopt the evaluation criterion of Hsieh et al. (2020), where one seeks to measure the susceptibility of features to adversarial perturbations. More precisely, we can gain insight into the deep features behind a prediction by visualising the compact set of relevant features selected by a given explanation method on the trained models, and by measuring the robustness of the models under adversarial attacks on the relevant or the irrelevant features.
|
| 86 |
+
|
| 87 |
+
We denote the set of all features as $U$ and consider a general feature set $S \subseteq U$ . Since the features we are interested in are those in the image $\mathbf{x}$ , we further denote the restriction of $\mathbf{x}$ to $S$ as ${\mathbf{x}}_{S}$ . To measure the robustness of the model, we rewrite the generalised model in Definition 2.1 following the evaluation criterion of Hsieh et al. (2020). A model on input $\mathbf{x}$ with an adversarial perturbation on feature set $S$ then reads:
|
| 88 |
+
|
| 89 |
+
Definition 2.2: Adversarial $\delta$ &Robustness- $S$
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
{\varepsilon }_{{\mathbf{x}}_{S}}^{ * } \mathrel{\text{:=}} \left\{ {\mathop{\min }\limits_{\mathbf{\delta }}\parallel \mathbf{\delta }{\parallel }_{p}\;\text{ s.t. }f\left( {\mathbf{x} + \mathbf{\delta }}\right) \neq y,\;{\mathbf{\delta }}_{\bar{S}} = 0}\right\} ,
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
where $y$ is the ground truth label of image $\mathbf{x}$ ; $\parallel \cdot {\parallel }_{p}$ denotes the adversarial perturbation norm; and $\bar{S} = U \smallsetminus S$ denotes the complement of the feature set $S$ , with ${\mathbf{\delta }}_{\bar{S}} = 0$ constraining the perturbation to act only on ${\mathbf{x}}_{S}$ . We refer to ${\varepsilon }_{{\mathbf{x}}_{S}}^{ * }$ as Robustness- $\mathbf{S}$ (Hsieh et al., 2020), i.e. the minimum adversarial perturbation norm on ${\mathbf{x}}_{S}$ .
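To make the quantity ${\varepsilon }_{{\mathbf{x}}_{S}}^{ * }$ concrete, the following sketch estimates it empirically with an $\ell_2$ PGD attack restricted to the mask $S$ (our own simplified stand-in; the function names, defaults, and the search-over-budgets loop are assumptions, not the attack procedure of Hsieh et al. (2020)):

```python
import torch
import torch.nn.functional as F

def masked_pgd_flips(model, x, y, mask, eps, steps=50):
    """L2 PGD restricted to the feature set S (mask == 1, zero elsewhere);
    returns True if the prediction flips within budget eps. Assumes a single
    example (batch size 1) and a model mapping images to logits."""
    lr = 2.5 * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta * mask), y)
        loss.backward()
        with torch.no_grad():
            g = delta.grad * mask                  # gradient restricted to S
            delta += lr * g / (g.norm() + 1e-12)   # normalised ascent step
            n = (delta * mask).norm()
            if n > eps:
                delta *= eps / n                   # project back onto the L2 ball
        delta.grad.zero_()
    with torch.no_grad():
        pred = model(x + delta.detach() * mask).argmax(1)
    return (pred != y).item()

def robustness_s(model, x, y, mask, budgets):
    """Smallest budget that flips the label: an empirical upper estimate of
    the minimum adversarial norm eps*_{x_S}."""
    for eps in sorted(budgets):
        if masked_pgd_flips(model, x, y, mask, eps):
            return eps
    return float("inf")
```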
|
| 96 |
+
|
| 97 |
+
We then denote the relevant features selected by an explanation method as ${S}_{r} \subseteq U$ , with the irrelevant features as its complement $\overline{{S}_{r}} = U \smallsetminus {S}_{r}$ . Thus, the robustness on the chosen feature sets ${S}_{r}$ and $\overline{{S}_{r}}$ for an image $\mathbf{x}$ is:
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
\text{Robustness-}{S}_{r} = {\varepsilon }_{{x}_{{S}_{r}}}^{ * };\;\text{Robustness-}\overline{{S}_{r}} = {\varepsilon }_{{x}_{\overline{{S}_{r}}}}^{ * }\text{.}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+
Table 1: Performance comparison for the task of Triplet recognition. The results are reported in terms of Average Precision (AP%) on the CholecT45 dataset using the official cross-validation split.
|
| 104 |
+
|
| 105 |
+
<table><tr><td colspan="2">Method</td><td colspan="3">COMPONENT DETECTION</td><td colspan="3">TRIPLET ASSOCIATION</td></tr><tr><td>BASELINE</td><td>BACKBONE</td><td>$A{P}_{l}$</td><td>${APV}$</td><td>$A{P}_{T}$</td><td>$A{P}_{IV}$</td><td>${AP}{r}_{T}$</td><td>$A{P}_{IVT}$</td></tr><tr><td rowspan="3">Tripnet</td><td>ResNet-18</td><td>${82.4} \pm {2.5}$</td><td>${54.1} \pm {2.0}$</td><td>${33.0} \pm {2.3}$</td><td>${30.6} \pm {2.6}$</td><td>${25.9} \pm {1.5}$</td><td>${21.2} \pm {1.2}$</td></tr><tr><td>ResNet-50</td><td>${85.3} \pm {1.3}$</td><td>${57.8} \pm {1.6}$</td><td>${34.7} \pm {1.9}$</td><td>${31.3} \pm {2.3}$</td><td>${27.1} \pm {2.4}$</td><td>${21.9} \pm {1.5}$</td></tr><tr><td>DenseNet-121</td><td>${86.9} \pm {1.4}$</td><td>${58.7} \pm {1.5}$</td><td>${35.6} \pm {2.8}$</td><td>${33.4} \pm {3.4}$</td><td>${27.8} \pm {1.8}$</td><td>${22.5} \pm {2.3}$</td></tr><tr><td rowspan="3">Attention Tripnet</td><td>ResNet-18</td><td>${82.2} \pm {2.6}$</td><td>${56.7} \pm {3.8}$</td><td>${34.6} \pm {2.2}$</td><td>${30.8} \pm {1.8}$</td><td>${27.4} \pm {1.3}$</td><td>${21.7} \pm {1.3}$</td></tr><tr><td>ResNet-50</td><td>${81.9} \pm {3.0}$</td><td>${56.8} \pm {1.1}$</td><td>${34.1} \pm {1.4}$</td><td>${31.5} \pm {2.2}$</td><td>${27.5} \pm {1.0}$</td><td>${21.9} \pm {1.2}$</td></tr><tr><td>DenseNet-121</td><td>${83.7} \pm {3.5}$</td><td>${57.5} \pm {3.2}$</td><td>${34.3} \pm {1.3}$</td><td>${33.1} \pm {2.4}$</td><td>${28.5} \pm {1.6}$</td><td>${22.8} \pm {1.3}$</td></tr><tr><td rowspan="4">Rendezvous</td><td>ResNet-18</td><td>${85.3} \pm {1.4}$</td><td>${58.9} \pm {2.6}$</td><td>${35.2} \pm {3.4}$</td><td>${33.6} \pm {2.6}$</td><td>${30.1} \pm {2.8}$</td><td>${24.3} \pm {2.3}$</td></tr><tr><td>ResNet-50</td><td>${85.4} \pm {1.6}$</td><td>${58.4} \pm {1.4}$</td><td>${34.7} \pm {2.4}$</td><td>${35.3} \pm {3.5}$</td><td>${30.8} \pm {2.6}$</td><td>${25.3} \pm {2.7}$</td></tr><tr><td>DenseNet-121</td><td>${88.5} \pm {2.7}$</td><td>${61.7} \pm {1.7}$</td><td>${36.7} \pm {2.1}$</td><td>${36.5} \pm {4.7}$</td><td>${32.1} \pm {2.7}$</td><td>${26.3} \pm {2.9}$</td></tr><tr><td>Swin-T</td><td>${73.6} \pm {1.9}$</td><td>${48.3} \pm {2.6}$</td><td>${29.2} \pm {1.4}$</td><td>${28.1} \pm {3.1}$</td><td>${24.7} \pm {2.0}$</td><td>${20.4} \pm {2.1}$</td></tr></table>
|
| 106 |
+
|
| 107 |
+
## 3 EXPERIMENTAL RESULTS
|
| 108 |
+
|
| 109 |
+
In this section, we describe in detail the range of experiments that we conducted to validate our methodology.
|
| 110 |
+
|
| 111 |
+
### 3.1 DATASET DESCRIPTION AND EVALUATION PROTOCOL
|
| 112 |
+
|
| 113 |
+
Dataset Description. We use the CholecT45 dataset (Nwoye & Padoy, 2022) to evaluate the robustness of the three SOTA models for the surgical action triplet recognition task. Specifically, the CholecT45 dataset contains 45 videos with annotations including 6 classes of instrument, 10 classes of verb, and 15 classes of target (i.e. ${C}_{I} = 6,{C}_{V} = {10},{C}_{T} = {15}$ ), generating ${900}$ $\left( {6 \times {10} \times {15}}\right)$ potential combinations for triplet labels. To maximise clinical utility, we utilise the top-100 combinations of relevant labels, which are selected by removing a large portion of spurious combinations according to class grouping and surgical relevance rating (Nwoye et al., 2022). Each video contains around 2,000 annotated frames extracted at $1\mathrm{{fps}}$ in RGB channels, leading to a total of 90,489 recorded frames. To remove redundant information, the frames captured after the laparoscope has been taken out of the body are blacked out with value $\left\lbrack {0,0,0}\right\rbrack$ .
|
| 114 |
+
|
| 115 |
+
Table 2: Heatmaps comparison under different feature extraction backbones. We display four randomly selected images in fold 3, using the best performing weights trained and validated on folds 1, 2, 4 and 5.
|
| 116 |
+
|
| 117 |
+

|
| 118 |
+
|
| 119 |
+
Table 3: Top 5 predicted Triplet classes in each of the 10 models. The top 5 is assessed by the $A{P}_{IVT}$ score.
|
| 120 |
+
|
| 121 |
+
<table><tr><td/><td colspan="3">ResNet-18</td><td colspan="4">ResNet-50</td><td colspan="4">DenseNet-121</td><td colspan="2">Swin-T</td></tr><tr><td rowspan="6">Tripnet</td><td colspan="2">Triplet</td><td>${AP}$</td><td colspan="3">Triplet</td><td>${AP}$</td><td colspan="3">Triplet</td><td>${AP}$</td><td/><td/></tr><tr><td>12:grasper grasp</td><td>specimen_bag</td><td>82.60%</td><td>17:grasper</td><td>retract</td><td>gallbladder</td><td>86.95%</td><td>17: grasper</td><td>retract</td><td>gallbladder</td><td>86.93%</td><td/><td/></tr><tr><td>17:grasper retract</td><td>gallbladder</td><td>81.04%</td><td>12:grasper</td><td>grasp</td><td>specimen_bag</td><td>80.50%</td><td>12:grasper</td><td>grasp</td><td>specimen_bag</td><td>81.45%</td><td/><td/></tr><tr><td>29.bipolar coagulate</td><td>liver</td><td>77.11%</td><td>60:hook</td><td>dissoct</td><td>gallbladder</td><td>77.15%</td><td>29:bipolar</td><td>coagulate</td><td>liver</td><td>80.19%</td><td/><td/></tr><tr><td>60 hook dissect</td><td>gallbladder</td><td>74.13%</td><td>29.bipolar</td><td>coagulate</td><td>liver</td><td>75.69%</td><td>60 hook</td><td>dissect</td><td>gallbladder</td><td>76.35%</td><td/><td/></tr><tr><td>79 clipper clip</td><td>cystic_duct</td><td>61.28%</td><td>6:grasper</td><td>grasp</td><td>cystic_plate</td><td>69.24%</td><td>79:clipper</td><td>clip</td><td>cystic_duct</td><td>67.75%</td><td/><td/></tr><tr><td rowspan="6">Attention Tripnet</td><td colspan="2">Triplet</td><td>${AP}$</td><td colspan="3">Triplet</td><td>${AP}$</td><td colspan="3">Triplet</td><td>${AP}$</td><td colspan="2"/></tr><tr><td>12.grasper grasp</td><td>specimen_bag</td><td>81.38%</td><td>17:grasper</td><td>retract</td><td>gallbladder</td><td>82.75%</td><td>17:grasper</td><td>retract</td><td>gallbladder</td><td>83.63%</td><td/><td/></tr><tr><td>17:grasper retract</td><td>gallbladder</td><td>78.70%</td><td>12:grasper</td><td>grasp</td><td>specimen_bag</td><td>78.53%</td><td>12:grasper</td><td>grasp</td><td>specimen_bag</td><td>80.01%</td><td/><td/></tr><tr><td>29:bipolar coagulate</td><td>liver</td><td>78.52%</td><td>29:bipolar</td><td>coagulate</td><td>liver</td><td>76.44%</td><td>29:bipolar</td><td>coagulate</td><td>liver</td><td>75.68%</td><td/><td/></tr><tr><td>28:bipolar coagulate</td><td>gallbladder</td><td>77.44%</td><td>60:hook</td><td>dissoct</td><td>gallbladder</td><td>71.79%</td><td>60 hook</td><td>dissect</td><td>gallbladder</td><td>75.36%</td><td/><td/></tr><tr><td>30:bipolar coagulate</td><td>omentum</td><td>77.39%</td><td>28:binolar</td><td>coagulate</td><td>gallbladder</td><td>70.68%</td><td>30:bipolar</td><td>consulate</td><td>omentum</td><td>69.49%</td><td/><td/></tr><tr><td rowspan="6">Rendezvous</td><td colspan="2">Triplet</td><td>${AP}$</td><td colspan="3">Triplet</td><td>${AP}$</td><td colspan="3">Triplet</td><td>${AP}$</td><td>Triplet</td><td>${AP}$</td></tr><tr><td>17:grasper retract</td><td>gallbladder</td><td>85.57%</td><td>30:bipolar</td><td>coagulate</td><td>omentum</td><td>91.36%</td><td>84 : irrizator</td><td>dissect</td><td>cystic pedicle</td><td>96.84%</td><td>17:grasperretractgallbladder</td><td>78.36%</td></tr><tr><td>29:bipolar coagulate</td><td>liver</td><td>83.90%</td><td>17: grasper</td><td>retract</td><td>gallbladder</td><td>86.11%</td><td>30:bipolar</td><td>coagulate</td><td>omentum</td><td>89.60%</td><td>60thookdissectgallbladder</td><td>72.57%</td></tr><tr><td>12:grasper 
grasp</td><td>specimen_bag</td><td>82.77%</td><td>29:bipolar</td><td>coagulate</td><td>liver</td><td>84.94%</td><td>17:grasper</td><td>retract</td><td>gallbladder</td><td>89.46%</td><td>12:graspergraspspecimen_bag</td><td>69.96%</td></tr><tr><td>30:bipolar coagulate</td><td>omentum</td><td>76.88%</td><td>12: grasper</td><td>grasp</td><td>specimen bag</td><td>81.50%</td><td>12:grasper</td><td>grasp</td><td>specimen bag</td><td>85.88%</td><td>30:bipolarcoagulateomentum</td><td>67.03%</td></tr><tr><td>60 hookdissect</td><td>gallbladder</td><td>76.49%</td><td>28:bipolar</td><td>coagulate</td><td>gallbladder</td><td>79.60%</td><td>29:bipolar</td><td>coagulate</td><td>liver</td><td>84.43%</td><td>29:bipolarcoagulateliver</td><td>66.08%</td></tr></table>
|
| 122 |
+
|
| 123 |
+
Evaluation Protocol. Triplet action recognition is evaluated with the average precision (AP) metric. Our models directly output predictions for the triplet classes, yielding $A{P}_{IVT}$ ; in contrast, $A{P}_{d}$ for $d \in \{ I, V, T,{IV},{IT}\}$ cannot be predicted explicitly. We therefore obtain the final predictions of the $d \in \{ I, V, T,{IV},{IT}\}$ components following (Nwoye & Padoy, 2022; Nwoye et al., 2022):
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
{Y}_{d}^{k} = \mathop{\max }\limits_{m}\left\{ {\mathbf{Y}}_{IVT}^{m} : m \in \left\{ 0,1,\ldots,{C}_{IVT} - 1\right\} \text{ s.t. }{h}_{d}\left( m\right) = k\right\} ,
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
where we calculate the score of class $k \in \left\{ {0,1,\ldots,{C}_{d} - 1}\right\}$ under component $d$ , and ${h}_{d}\left( \cdot \right)$ maps a class $m$ of the ${IVT}$ triplet composition to the corresponding class under component $d$ . In our robustness analysis, the main evaluation criterion is the robustness subject to the selected feature set ( ${S}_{r}$ or $\overline{{S}_{r}}$ ) on each backbone, using the formula in Definition 2.2. A sketch of this component mapping is given below.
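The mapping ${h}_{d}$ can be implemented as a simple lookup from triplet class to component class (a sketch; the placeholder `H_I` below stands in for the dataset's real triplet-to-instrument table):

```python
import numpy as np

# Placeholder instrument map h_I: triplet class m -> instrument class k.
# In practice this comes from the CholecT45 triplet-to-component lookup.
H_I = np.zeros(100, dtype=int)  # hypothetical; fill with the real mapping

def component_logits(y_ivt, h_d, n_classes):
    """Y_d^k = max over triplet classes m with h_d(m) = k of Y_IVT^m."""
    out = np.full(n_classes, -np.inf)
    for m, logit in enumerate(y_ivt):
        out[h_d[m]] = max(out[h_d[m]], logit)
    return out

y_ivt = np.random.randn(100)           # triplet logits for one frame
y_i = component_logits(y_ivt, H_I, 6)  # derived instrument logits
```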
|
| 130 |
+
|
| 131 |
+
### 3.2 IMPLEMENTATION DETAILS
|
| 132 |
+
|
| 133 |
+
We evaluate model performance using five-fold cross-validation, where we split the 45 full videos into 5 equal folds. The testing set is one of these 5 folds, and we treat the remaining 4 folds as the training set. Moreover, 5 videos from the 36 training videos are selected as the validation set during training.
|
| 134 |
+
|
| 135 |
+
The models are trained using the Stochastic Gradient Descent (SGD) optimiser. The feature extraction backbones are initialised with ImageNet pre-trained weights. Both linear and exponential decay of the learning rate are used during training, with initial learning rates of $\left\{ {1{e}^{-2},1{e}^{-2},1{e}^{-2}}\right\}$ for the backbone, encoder and decoder parts, respectively. We set the batch size to 32 and select the epoch that performs best among all recorded epochs up to ${AP}$ score saturation on the validation set in the specified fold. To reduce the computational load, the input images and corresponding segmentation masks are resized from 256 $\times$ 448 to 8 $\times$ 14. For a fair comparison, we ran all SOTA models (following all suggested protocols from the official repository) under the same conditions and using the official cross-validation split of the CholecT45 dataset (Nwoye & Padoy, 2022).
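For completeness, a hypothetical re-creation of this optimiser setup (the parameter groups, momentum value and decay schedule shown are assumptions, not the authors' exact configuration):

```python
import torch

def build_optimiser(backbone, encoder, decoder):
    # initial learning rates of 1e-2 for the backbone, encoder and decoder parts
    opt = torch.optim.SGD([
        {"params": backbone.parameters(), "lr": 1e-2},
        {"params": encoder.parameters(), "lr": 1e-2},
        {"params": decoder.parameters(), "lr": 1e-2},
    ], momentum=0.9)
    # the paper combines linear and exponential learning-rate decay; an
    # exponential schedule is shown here as one plausible choice
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.99)
    return opt, sched
```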
|
| 136 |
+
|
| 137 |
+
### 3.3 EVALUATION ON DOWNSTREAM TASKS
|
| 138 |
+
|
| 139 |
+
In this section, we carefully analyse the current SOTA techniques for triplet recognition from the feature based explainability lens.
|
| 140 |
+
|
| 141 |
+
**Results on Triplet Recognition with Cross-Validation.** As the first part of our analysis, we investigate the performance limitations of current SOTA techniques and emphasise how such limitations are linked to the lack of reliable features. The results are reported in Table 1. Taking a closer look at the results, we observe that ResNet-18, in general, performs the worst among the compared backbones. However, for one case, component analysis, it performs better than ResNet-50 under the Attention Tripnet baseline. The intuition behind such behaviour is that the MIS setting involves ambiguous conditions and, in some cases, some frames might contain more spurious features that are better captured by the shallower backbone. We remark that the mean and standard deviation in Table 1 are calculated from the 5 folds in each combination of backbone and baseline.
|
| 142 |
+
|
| 143 |
+
We also observe that ResNet-50 performs better than ResNet-18 due to its deeper feature extraction. The best performance, for both tasks (component detection and triplet association), is reported by DenseNet-121. The intuition behind the performance gain is that DenseNet-121 somewhat mitigates the limitation in representation capability; ResNet-type networks are limited by the identity shortcut that stabilises training. These results support our modelling hypothesis that the key to performance is the robustness of the deep features.
|
| 144 |
+
|
| 145 |
+
A key finding in our results is that, whilst existing SOTA techniques (Nwoye & Padoy, 2022; Nwoye et al., 2022) are devoted to developing new network mechanisms, one observes a substantial performance improvement when improving the feature extraction. Moreover, and unlike other surgical tasks, current techniques for triplet recognition are limited in performance. Why is this happening? Our results show that the key lies in reliable features (linked to robustness): by enforcing more meaningful features through several backbones, a significant performance improvement over all SOTA techniques is observed.
|
| 146 |
+
|
| 147 |
+
To further support our previous findings, we also ran a set of experiments using the trending principle of Transformers. More precisely, a non-CNN backbone, the tiny Swin Transformer (Swin-T) (Liu et al., 2021), was also tested on Rendezvous; it yields rather low ${AP}$ scores on all 6 components in contrast to the 3 CNN backbones. This could be caused by the shifted windows in Swin-T: while shifted windows largely reduce the computational cost, they can bias feature attribution within their local windows. The resulting incoherent spread can be seen clearly in the visualisation of the detected relevant features for Swin-T in Figure 3 (a).
|
| 148 |
+
|
| 149 |
+
In Table 1 we displayed the results averaged over all classes, but what behaviour can be observed in the per-class performance? Table 3 shows that, although the best 5 predicted classes differ across models, the predicted compositions seem clinically sensible, supporting our previous discussion. In addition, the top-1 per-class ${AP}$ score is significantly higher for DenseNet-121 with Rendezvous.
|
| 150 |
+
|
| 151 |
+
**Visualisation Results.** Interpreting features is far from trivial. To address this issue, we provide a human-readable comparison via heatmaps in Table 2. The implementation of the heatmaps is adapted from Zhou et al. (2016); a sketch is given below. The displayed outputs reflect what the model is focusing on based on the extracted features. These results support our hypothesis that deep features, more than any new network mechanism, are the key to making correct predictions.
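For reference, the CAM computation of Zhou et al. (2016) admits a compact sketch (our own minimal version, assuming a global-average-pooling classifier; array shapes are assumptions):

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """CAM (Zhou et al., 2016): weight the last conv feature maps by the
    classifier weights of the chosen class and sum over channels.
    features: (C, H, W) conv features; fc_weights: (num_classes, C)."""
    cam = np.tensordot(fc_weights[class_idx], features, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)            # keep positive evidence only
    return cam / (cam.max() + 1e-12)      # normalise to [0, 1] for display
```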
|
| 152 |
+
|
| 153 |
+
We observed that for the worst performing backbone, Swin-T, the extracted features are mostly spread across the image; however, backbones that concentrate only on core attributes do not perform best either. For the best performing backbone, DenseNet-121, a reasonable amount of attention is also paid to spurious attributes; this can be seen more directly in our later discussion of the robustness visualisation in Figure 3.
|
| 154 |
+
|
| 155 |
+
The reported probability of the predicted label again emphasises the outstanding performance of the DenseNet-121 backbone, in the sense that a higher probability for the correct label is better, and a lower probability for an incorrect prediction is better.
|
| 156 |
+
|
| 157 |
+
**Why Do Surgical Triplet Recognition Models Fail? Robustness and Interpretability.** We further support our findings through the lens of robustness. We use as evaluation criteria Robustness- ${S}_{r}$ and Robustness- $\overline{{S}_{r}}$ with different explanation methods: vanilla gradient (Grad) (Shrikumar et al., 2017) and integrated gradient (IG) (Sundararajan et al., 2017). The results are in Table 4 & Figure 3.
|
| 158 |
+
|
| 159 |
+
Table 4: Robustness measured on 400 examples (i.e. images) randomly selected from the fold 3 videos with exactly 1 labeled triplet. The top 25 percent of relevant ( ${S}_{r}$ ) or irrelevant ( $\overline{{S}_{r}}$ ) features are selected by the two explanation methods, Grad and IG. We perform attacks on the selected 25 percent.
|
| 160 |
+
|
| 161 |
+
<table><tr><td rowspan="2">ATTACKED FEATURES</td><td rowspan="2">EXPLANATION METHODS</td><td colspan="4">BACKBONES (ON RENDEZVOUS)</td></tr><tr><td>ResNet-18</td><td>ResNet-50</td><td>DenseNet-121</td><td>Swin-T</td></tr><tr><td rowspan="2">Robustness- ${S}_{r}$</td><td>Grad</td><td>2.599687</td><td>2.651435</td><td>3.287798</td><td>1.778592</td></tr><tr><td>IG</td><td>2.621901</td><td>2.686064</td><td>3.319311</td><td>1.777737</td></tr><tr><td rowspan="2">Robustness- ${S}_{r}$</td><td>Grad</td><td>2.517404</td><td>2.608013</td><td>3.188270</td><td>1.750599</td></tr><tr><td>IG</td><td>2.515343</td><td>2.603118</td><td>3.187848</td><td>1.749097</td></tr></table>
|
| 162 |
+
|
| 163 |
+
#### 3.3.1 COMPARISON BETWEEN DIFFERENT BACKBONES
|
| 164 |
+
|
| 165 |
+
In Table 4, we show the robustness results with the top 25% of features attacked, averaged over 400 frames randomly chosen with exactly 1 labeled triplet. On the one hand, we observe that the DenseNet-121 backbone consistently outperforms the other network architectures on both evaluation criteria, Robustness- ${S}_{r}$ and Robustness- $\overline{{S}_{r}}$ . This suggests that the DenseNet-121 backbone captures explanation characteristics that are ignored by the other network backbones. On the other hand, our results support the finding of Hsieh et al. (2020): IG performs better than Grad, and attacking the relevant features yields lower robustness than perturbing the same percentage of irrelevant features.
|
| 166 |
+
|
| 167 |
+
#### 3.3.2 ROBUSTNESS EXPLANATION FOR SPECIFIC IMAGES
|
| 168 |
+
|
| 169 |
+
To more objectively evaluate the robustness explanation for specific images, in Figure 3 we show: (a) a visualisation of the important features, (b) Robustness- ${S}_{r}$ , (c) robustness against the percentage of top features, and (d) Robustness- $\overline{{S}_{r}}$ . In Figure 3 (a), we visualise the top 15% of features (yellow dots) selected by Grad and IG, respectively, overlaid on manually labelled regions containing the instrument (in red) and the target (in green). We observe that the best performing backbone on a specific image (as seen from the robustness comparison curves in Figure 3 (c)) is the one that pays attention not only to core attributes but also to spurious attributes. For image VID08-000188, the best performing model is ResNet-18, which shows the ambiguous conditions in individual images. Taking a closer look at Figure 3 (a), a small portion of the most relevant features extracted by ResNet-18 is spread away from the immediate surroundings of the object area. The importance of spurious attributes is further highlighted in image VID18-001156. We observe that DenseNet-121 provides the most robust result, highlighting relevant features within the tissue region and across the tool tip; the worst performing model, ResNet-18, merely treated the core attributes as relevant.
|
| 170 |
+
|
| 171 |
+

|
| 172 |
+
|
| 173 |
+
Figure 3: Robustness analysis on randomly selected images. (a) Visualisation of the top 15 percent of important features selected by the two explanation methods, Grad and IG; (b)/(d) the robustness measured on the relevant ${S}_{r}$ / irrelevant $\overline{{S}_{r}}$ features selected by the two explanation methods, plotted against the percentage of top features defined as relevant; (c) the comparison of robustness across the 4 backbones embedded in the Rendezvous baseline.
|
| 174 |
+
|
| 175 |
+
The relevant role of spurious attributes can be explained by the nature of the triplet, which contains a verb component that is not a physical object. Overall, we observe that reliable deep features are the key to robust models in triplet recognition. Moreover, unlike existing works on robustness against spurious features, we observe that both core and spurious attributes are key to the prediction.
|
| 176 |
+
|
| 177 |
+
## 4 CONCLUSION
|
| 178 |
+
|
| 179 |
+
We presented the first work to understand the failure of existing deep learning models for the task of triplet recognition, providing an extensive analysis through the lens of robustness. The significance of our work lies in understanding and addressing the key issues associated with the substantially limited performance of existing techniques. Our work offers a step forward to more trustworthy and reliable models.
|
| 180 |
+
|
| 181 |
+
REFERENCES
|
| 182 |
+
|
| 183 |
+
Zeyuan Allen-Zhu and Yuanzhi Li. Feature purification: How adversarial training performs robust deep learning. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pp. 977-988. IEEE, 2022.
|
| 184 |
+
|
| 185 |
+
Angelica I Aviles, Samar M Alsaleh, James K Hahn, and Alicia Casals. Towards retrieving force feedback in robotic-assisted surgery: A supervised neuro-recurrent-vision approach. IEEE transactions on haptics, 10(3):431-443, 2016.
|
| 186 |
+
|
| 187 |
+
Tobias Blum, Hubertus Feußner, and Nassir Navab. Modeling and segmentation of surgical work-flow from laparoscopic video. In International conference on medical image computing and computer-assisted intervention, pp. 400-407. Springer, 2010.
|
| 188 |
+
|
| 189 |
+
Olga Dergachyova, David Bouget, Arnaud Huaulmé, Xavier Morandi, and Pierre Jannin. Automatic data-driven real-time segmentation and recognition of surgical workflow. International journal of computer assisted radiology and surgery, 11(6):1081-1089, 2016.
|
| 190 |
+
|
| 191 |
+
Logan Engstrom, Andrew Ilyas, Shibani Santurkar, Dimitris Tsipras, Brandon Tran, and Alek-sander Madry. Adversarial robustness as a prior for learned representations. arXiv preprint arXiv:1906.00945, 2019.
|
| 192 |
+
|
| 193 |
+
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition,2015. URL https://arxiv.org/abs/1512.03385.
|
| 194 |
+
|
| 195 |
+
Cheng-Yu Hsieh, Chih-Kuan Yeh, Xuanqing Liu, Pradeep Ravikumar, Seungyeon Kim, Sanjiv Kumar, and Cho-Jui Hsieh. Evaluations and methods for explanation through robustness analysis. arXiv preprint arXiv:2006.00442, 2020.
|
| 196 |
+
|
| 197 |
+
Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks, 2016. URL https://arxiv.org/abs/1608.06993.
|
| 198 |
+
|
| 199 |
+
Darko Katić, Anna-Laura Wekerle, Fabian Gärtner, Hannes Kenngott, Beat Peter Müller-Stich, Rüdiger Dillmann, and Stefanie Speidel. Knowledge-driven formalization of laparoscopic surgeries for rule-based intraoperative context-aware assistance. In International Conference on Information Processing in Computer-Assisted Interventions, pp. 158-167. Springer, 2014.
|
| 200 |
+
|
| 201 |
+
Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In International conference on machine learning, pp. 1885-1894. PMLR, 2017.
|
| 202 |
+
|
| 203 |
+
Lihao Liu, Qi Dou, Hao Chen, Jing Qin, and Pheng-Ann Heng. Multi-task deep model with margin ranking loss for lung nodule analysis. IEEE transactions on medical imaging, 39(3):718-728, 2019.
|
| 204 |
+
|
| 205 |
+
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows, 2021. URL https://arxiv.org/abs/2103.14030.
|
| 206 |
+
|
| 207 |
+
Benny PL Lo, Ara Darzi, and Guang-Zhong Yang. Episode classification for the analysis of tissue/instrument interaction with multiple visual cues. In International conference on medical image computing and computer-assisted intervention, pp. 230-237. Springer, 2003.
|
| 208 |
+
|
| 209 |
+
Lena Maier-Hein, Swaroop Vedula, Stefanie Speidel, Nassir Navab, Ron Kikinis, Adrian Park, Matthias Eisenmann, Hubertus Feussner, Germain Forestier, Stamatia Giannarou, et al. Surgical data science: enabling next-generation surgery. arXiv preprint arXiv:1701.06482, 2017.
|
| 210 |
+
|
| 211 |
+
Thomas Neumuth, Gero Strauß, Jürgen Meixensberger, Heinz U Lemke, and Oliver Burgert. Acquisition of process descriptions from surgical interventions. In International conference on database and expert systems applications, pp. 602-611. Springer, 2006.
|
| 212 |
+
|
| 213 |
+
Chinedu Innocent Nwoye and Nicolas Padoy. Data splits and metrics for method benchmarking on surgical action triplet datasets. arXiv preprint arXiv:2204.05235, 2022.
|
| 214 |
+
|
| 215 |
+
Chinedu Innocent Nwoye, Cristians Gonzalez, Tong Yu, Pietro Mascagni, Didier Mutter, Jacques Marescaux, and Nicolas Padoy. Recognition of instrument-tissue interactions in endoscopic videos via action triplets. In International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), pp. 364-374. Springer, 2020.
|
| 216 |
+
|
| 217 |
+
Chinedu Innocent Nwoye, Tong Yu, Cristians Gonzalez, Barbara Seeliger, Pietro Mascagni, Didier Mutter, Jacques Marescaux, and Nicolas Padoy. Rendezvous: Attention mechanisms for the recognition of surgical action triplets in endoscopic videos. Medical Image Analysis, 78:102433, 2022.
|
| 218 |
+
|
| 219 |
+
Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, and Alexander Mordvintsev. The building blocks of interpretability. Distill, 3(3):e10, 2018.
|
| 220 |
+
|
| 221 |
+
Gregory Plumb, Denali Molitor, and Ameet S Talwalkar. Model agnostic supervised local explanations. Advances in neural information processing systems, 31, 2018.
|
| 222 |
+
|
| 223 |
+
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pp. 1135-1144, 2016.
|
| 224 |
+
|
| 225 |
+
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. Learning important features through propagating activation differences. In International conference on machine learning, pp. 3145- 3153. PMLR, 2017.
|
| 226 |
+
|
| 227 |
+
Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. Deep inside convolutional networks: Vi-sualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034, 2013.
|
| 228 |
+
|
| 229 |
+
Sahil Singla and Soheil Feizi. Salient imagenet: How to discover spurious features in deep learning? In International Conference on Learning Representations, 2021.
|
| 230 |
+
|
| 231 |
+
Sahil Singla, Besmira Nushi, Shital Shah, Ece Kamar, and Eric Horvitz. Understanding failures of deep networks via robust feature extraction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12853-12862, 2021.
|
| 232 |
+
|
| 233 |
+
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. Axiomatic attribution for deep networks. In International conference on machine learning, pp. 3319-3328. PMLR, 2017.
|
| 234 |
+
|
| 235 |
+
Andru P Twinanda, Sherif Shehata, Didier Mutter, Jacques Marescaux, Michel De Mathelin, and Nicolas Padoy. Endonet: a deep architecture for recognition tasks on laparoscopic videos. IEEE transactions on medical imaging, 36(1):86-97, 2016.
|
| 236 |
+
|
| 237 |
+
Vic Velanovich. Laparoscopic vs open surgery. Surgical endoscopy, 14(1):16-21, 2000.
|
| 238 |
+
|
| 239 |
+
Tom Vercauteren, Mathias Unberath, Nicolas Padoy, and Nassir Navab. Cai4cai: the rise of contextual artificial intelligence in computer-assisted interventions. Proceedings of the IEEE, 108(1): 198-214, 2019.
|
| 240 |
+
|
| 241 |
+
Erik B Wilson, Hossein Bagshahi, and Vicky D Woodruff. Overview of general advantages, limitations, and strategies. In Robotics in general surgery, pp. 17-22. Springer, 2014.
|
| 242 |
+
|
| 243 |
+
Eric Wong, Shibani Santurkar, and Aleksander Madry. Leveraging sparse linear layers for de-buggable deep networks. In International Conference on Machine Learning, pp. 11205-11216. PMLR, 2021.
|
| 244 |
+
|
| 245 |
+
Kaidi Xu, Sijia Liu, Pu Zhao, Pin-Yu Chen, Huan Zhang, Quanfu Fan, Deniz Erdogmus, Yanzhi Wang, and Xue Lin. Structured adversarial attack: Towards general implementation and better interpretability. arXiv preprint arXiv:1808.01664, 2018.
|
| 246 |
+
|
| 247 |
+
Chih-Kuan Yeh, Cheng-Yu Hsieh, Arun Suggala, David I Inouye, and Pradeep K Ravikumar. On the (in) fidelity and sensitivity of explanations. Advances in Neural Information Processing Systems, 32, 2019.
|
| 248 |
+
|
| 249 |
+
Matthew D Zeiler and Rob Fergus. Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818-833. Springer, 2014.
|
| 250 |
+
|
| 251 |
+
Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2921-2929, 2016.
|
| 252 |
+
|
| 253 |
+
Odysseas Zisimopoulos, Evangello Flouty, Imanol Luengo, Petros Giataganas, Jean Nehme, Andre Chow, and Danail Stoyanov. Deepphase: surgical phase recognition in cataracts videos. In International conference on medical image computing and computer-assisted intervention, pp. 265-272. Springer, 2018.
|
papers/ICLR/ICLR 2023/ICLR 2023 Workshop/ICLR 2023 Workshop TML4H/zZcCINENgm/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,293 @@
|
| 1 |
+
§ WHY DEEP SURGICAL MODELS FAIL?: REVISITING SURGICAL ACTION TRIPLET RECOGNITION THROUGH THE LENS OF ROBUSTNESS
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Surgical action triplet recognition provides a better understanding of the surgical scene. This task is of high relevance as it provides the surgeon with context-aware support and safety. The current go-to strategy for improving performance is the development of new network mechanisms. However, the performance of current state-of-the-art techniques is substantially lower than on other surgical tasks. Why is this happening? This is the question that we address in this work. We present the first study to understand the failure of existing deep learning models through the lens of robustness and explainability. First, we study existing models under weak and strong $\delta$ -perturbations via an adversarial optimisation scheme. We then analyse the failure modes via feature based explanations. Our study reveals that the key to improving performance and increasing reliability lies in the core and spurious attributes. Our work opens the door to more trustworthy and reliable deep learning models in surgical data science.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
Minimally Invasive Surgery (MIS) has become the gold standard for several procedures (e.g., cholecystectomy & appendectomy), as it provides better clinical outcomes, including reduced blood loss, minimised trauma to the body, less post-operative pain and faster recovery (Velanovich, 2000; Wilson et al., 2014). Despite the benefits of MIS, surgeons lose direct vision of and touch on the target, which decreases surgeon-patient transparency and imposes technical challenges on the surgeon. These challenges have motivated the development of automatic techniques for the analysis of the surgical workflow (Aviles et al., 2016; Maier-Hein et al., 2017; Vercauteren et al., 2019; Nwoye et al., 2022). In particular, this work aims to address a key research problem in surgical data science: surgical recognition, which provides the surgeon with context-aware support and safety.
|
| 14 |
+
|
| 15 |
+
The majority of existing surgical recognition techniques focus on phase recognition (Blum et al., 2010; Dergachyova et al., 2016; Lo et al., 2003; Twinanda et al., 2016; Zisimopoulos et al., 2018). However, phase recognition is limited by its own definition, as it does not provide complete information on the surgical scene. We therefore consider the setting of surgical action triplet recognition, which offers a better understanding of the surgical scene. The goal of triplet recognition is to recognise the ⟨instrument, verb, target⟩ composition and its inherent relations. A visualisation of this task is displayed in Figure 1.
|
| 16 |
+
|
| 17 |
+
The concept behind triplet recognition was already present in early works (Neumuth et al., 2006; Katić et al., 2014). However, it was not until the recent introduction of richer datasets, such as CholecT40 (Nwoye et al., 2020), that the community started developing new techniques under more realistic conditions. Nwoye et al. (2020) proposed a framework called Tripnet, which was the first work to formally address surgical actions as triplets; the authors proposed a 3D interaction space for learning the triplets. In more recent work, Nwoye et al. (2022) introduced two new models. The first one is a direct extension of Tripnet called Attention Tripnet, whose novelty lies in a spatial attention mechanism. In the same work, the authors introduced another model called Rendezvous (RDV), a transformer-inspired neural network.
|
| 18 |
+
|
| 19 |
+
A commonality of existing surgical action triplet recognition techniques is the development of new mechanisms for improving the network architecture. However, despite these improvements, the performance of existing techniques is substantially lower than in other tasks in the surgical sciences, for example force estimation and navigation-assisted surgery. In this work, we depart from existing techniques and tackle the surgical action triplet recognition problem through the lens of robustness and explainability.
|
| 20 |
+
|
| 21 |
+
|
| 22 |
+
|
| 23 |
+
Figure 1: Visualisation of the surgical action triplet recognition task. We consider the task of predicting the instrument (I), verb (V, action), and target (T, anatomical part).
|
| 24 |
+
|
| 25 |
+
In the machine learning community there is a substantial increase of interest in understanding the lack of reliability of deep learning models (e.g., Ribeiro et al. (2016); Koh & Liang (2017); Sundararajan et al. (2017); Liu et al. (2019); Yeh et al. (2019); Hsieh et al. (2020)). To understand this lack of reliability, a popular family of techniques is the so-called feature-based explanations via robustness analysis (Simonyan et al., 2013; Zeiler & Fergus, 2014; Plumb et al., 2018; Wong et al., 2021; Singla & Feizi, 2021). Whilst existing techniques have been extensively evaluated on natural-image tasks, no existing works address problems as complex as surgical action triplet recognition.
|
| 26 |
+
|
| 27 |
+
Contributions. In this work, we introduce, to the best of our knowledge, the first study to understand the failure of existing deep learning models for surgical action triplet recognition. To do this, we analyse the failures of existing state-of-the-art solutions through the lens of robustness. Specifically, we push to the limit the existing SOTA techniques for surgical action triplet recognition under weak and strong $\delta$ -perturbations. We then extensively analyse the failure modes via the evaluation criteria Robustness-S, which analyses the behaviour of the models through feature based explanations. Our study reveals the impact of core and spurious features for more robust models. Our study opens the door to more trustworthy and reliable deep learning models in surgical data science, which is imperative for MIS.
|
| 28 |
+
|
| 29 |
+
§ 2 METHODOLOGY
|
| 30 |
+
|
| 31 |
+
We describe two key parts of the surgical action triplet recognition task: i) our experimental settings along with our assumptions, and ii) how we evaluate robustness via adversarial optimisation. The workflow of our work is displayed in Figure 2.
|
| 32 |
+
|
| 33 |
+
§ 2.1 SURGICAL ACTION TRIPLET RECOGNITION
|
| 34 |
+
|
| 35 |
+
In the surgical action triplet recognition problem, the main task is to recognise the triplet IVT, which is the composition of three components during surgery: instrument (I), verb (V), and target (T), in a given RGB image $\mathbf{x} \in {\mathbb{R}}^{H \times W \times 3}$.
|
| 36 |
+
|
| 37 |
+
Formally, we consider a given set of samples ${\left\{ \left( {\mathbf{x}}_{n},{y}_{n}\right) \right\} }_{n = 1}^{N}$ with labels $\mathcal{Y} = \{0, 1, \ldots, C_{IVT} - 1\}$ for ${C}_{IVT} = {100}$ classes. We then seek to learn a function $f : \mathcal{X} \mapsto \mathcal{Y}$ such that $f$ generalises well to unseen data. That is, a given parameterised deep learning model takes the image $\mathbf{x}$ as input and outputs a set of class-wise presence probabilities, in our case for 100 classes under the ${IVT}$ composition, ${\mathbf{Y}}_{IVT} \in {\mathbb{R}}^{100}$, which we call the ${IVT}$ logits. Since there are three individual components under the triplet composition, within the training network we also consider the individual components ${d}^{ * } \in \{ I,V,T\}$, each with ${C}_{{d}^{ * }}$ classes (i.e. ${C}_{I} = 6$, ${C}_{V} = {10}$, ${C}_{T} = {15}$). The logits of each component, ${\mathbf{Y}}_{{d}^{ * }} \in {\mathbb{R}}^{{C}_{{d}^{ * }}}$, are computed and used within the network.
|
| 38 |
+
|
| 39 |
+
In current state-of-the-art (SOTA) deep models (Nwoye et al., 2020; 2022), there is a communal structure divided into three parts: i) the feature extraction backbone; ii) the individual component encoder; and iii) the triplet aggregation decoder that associates the components and outputs the logits of the IVT triplet. More precisely, the individual component encoder first concentrates on the instrument component to output Class Activation Maps (CAMs $\in {\mathbb{R}}^{H \times W \times {C}_{d}}$) and the logits ${\mathbf{Y}}_{\mathbf{I}}$ of the instrument classes; the CAMs are then associated with the verb and target components separately to obtain their logits (${\mathbf{Y}}_{\mathbf{V}}$ and ${\mathbf{Y}}_{\mathbf{T}}$), addressing the instrument-centric nature of the triplet.
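For concreteness, the sketch below outlines this communal three-part structure in PyTorch. It is a minimal illustration only: the backbone, head sizes and aggregation layer are placeholders and do not reproduce the published Tripnet, Attention Tripnet or Rendezvous implementations.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TripletRecognizer(nn.Module):
    """Minimal sketch of the communal structure: backbone -> instrument CAM
    encoder -> verb/target branches -> triplet aggregation decoder.
    Layer choices and sizes are illustrative, not the published models."""

    def __init__(self, n_i=6, n_v=10, n_t=15, n_ivt=100):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])  # H/32 x W/32 feature map
        feat = 512
        self.cam_head = nn.Conv2d(feat, n_i, kernel_size=1)            # instrument CAMs
        self.verb_head = nn.Conv2d(feat + n_i, n_v, kernel_size=1)     # verb logits, conditioned on CAMs
        self.target_head = nn.Conv2d(feat + n_i, n_t, kernel_size=1)   # target logits, conditioned on CAMs
        self.decoder = nn.Linear(n_i + n_v + n_t, n_ivt)               # triplet aggregation decoder
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, x):
        f = self.backbone(x)
        cams = self.cam_head(f)                                   # B x C_I x h x w
        y_i = self.pool(cams).flatten(1)                          # instrument logits
        fc = torch.cat([f, cams], dim=1)
        y_v = self.pool(self.verb_head(fc)).flatten(1)            # verb logits
        y_t = self.pool(self.target_head(fc)).flatten(1)          # target logits
        y_ivt = self.decoder(torch.cat([y_i, y_v, y_t], dim=1))   # IVT logits
        return y_ivt, (y_i, y_v, y_t), cams

# Example: a batch of two 256x448 RGB frames
model = TripletRecognizer()
y_ivt, (y_i, y_v, y_t), cams = model(torch.randn(2, 3, 256, 448))
print(y_ivt.shape, y_i.shape, y_v.shape, y_t.shape)  # (2,100) (2,6) (2,10) (2,15)
```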
|
| 40 |
+
|
| 41 |
+
The current SOTA techniques for surgical action triplet recognition focus on improving components ii) and iii). However, the performance is still substantially lower than in other surgical tasks. Our intuition is that such behaviour is due to the inherently complex and ambiguous conditions in MIS, which reflect the inability of the models to learn meaningful features. Our work is then based on the following modelling hypothesis.
|
| 42 |
+
|
| 43 |
+
§ HYPOTHESIS 2.1: DEEP FEATURES ARE KEY FOR ROBUSTNESS
|
| 44 |
+
|
| 45 |
+
Deep surgical techniques for triplet recognition lack reliability due to ineffective features. Therefore, the key to boosting performance, improving trustworthiness and reliability, and understanding the failure of deep models lies in the deep features.
|
| 46 |
+
|
| 47 |
+
Following this hypothesis, we address the question of why deep triplet recognition models fail. We do so by analysing feature-based explanations via robustness. To this end, we consider the current three SOTA techniques for our study: Tripnet (Nwoye et al., 2020), Attention Tripnet, and Rendezvous (Nwoye et al., 2022). Moreover, we extensively investigate the repercussion of deep features using four widely used backbones: ResNet-18, ResNet-50 (He et al., 2015), DenseNet-121 (Huang et al., 2016), and the Swin Transformer (Liu et al., 2021). In the next section, we detail our strategy for analysing robustness.
|
| 48 |
+
|
| 49 |
+
|
| 50 |
+
|
| 51 |
+
Figure 2: Illustration of the main network structure, and how the adversarial perturbation is added to measure robustness.
|
| 52 |
+
|
| 53 |
+
§ 2.2 FEATURE BASED EXPLANATIONS VIA ROBUSTNESS
|
| 54 |
+
|
| 55 |
+
Our models for triplet recognition output the
|
| 56 |
+
|
| 57 |
+
logits of the triplet composition, which we then use to select the predicted label for classification. We define the model from image $\mathbf{x}$ to the predicted label $\widehat{y}$ as $f : \mathcal{X} \rightarrow \mathcal{Y}$, where $\mathcal{X} \subset {\mathbb{R}}^{H \times W \times 3}$ and $\mathcal{Y} = \left\{ {0,1,2,\ldots ,{C}_{IVT} - 1}\right\}$.
|
| 58 |
+
|
| 59 |
+
For each class $m \in \mathcal{Y}$ and within each given sample, we seek to recognise core and spurious attributes (Singla & Feizi, 2021; Singla et al., 2021), whose definitions are as follows.
|
| 60 |
+
|
| 61 |
+
* Core attributes: features that form part of the object we are detecting.
|
| 62 |
+
|
| 63 |
+
* Spurious attributes: features that are not part of the object but co-occur with it.
|
| 64 |
+
|
| 65 |
+
How do we evaluate robustness? The body of literature has reported several alternatives for addressing the robustness of deep networks. Our work is motivated by recent findings on perturbation-based methods, where even a small perturbation can significantly affect the performance of neural nets. In particular, we consider the setting of adversarial training (Allen-Zhu & Li, 2022; Olah et al., 2018; Engstrom et al., 2019) for robustifying a given deep model.
|
| 66 |
+
|
| 67 |
+
The idea behind adversarial training for robustness is to enforce that a given model maintains its performance under a given perturbation $\delta$. This can be cast as an optimisation problem over the network parameters $\theta$:
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
{\theta }^{ * } = \arg \mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {\mathbf{x},y}\right) \sim \mathcal{D}}\left\lbrack {{\mathcal{L}}_{\theta }\left( {\mathbf{x},y}\right) }\right\rbrack . \tag{1}
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
where $\mathbb{E}\left\lbrack {{\mathcal{L}}_{\theta }\left( \cdot \right) }\right\rbrack$ denotes the expected loss with respect to the parameters $\theta$.
|
| 74 |
+
|
| 75 |
+
One seeks a model that is resistant to any $\delta$-perturbation. In this work, we follow a generalised adversarial training model, which reads:
|
| 76 |
+
|
| 77 |
+
Definition 2.1: Adversarial training under $\delta$
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
{\theta }^{ * } = \arg \mathop{\min }\limits_{\theta }{\mathbb{E}}_{\left( {\mathbf{x},y}\right) \sim \mathcal{D}}\left\lbrack {\mathop{\max }\limits_{{\delta \in \Delta }}{\mathcal{L}}_{\theta }\left( {\mathbf{x} + \delta ,y}\right) }\right\rbrack .
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
The goal is that the model does not change its performance even under the worst (strongest) $\delta$.
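As a rough sketch of Definition 2.1, the snippet below approximates the inner maximisation over $\delta$ with a few projected-gradient steps and then updates $\theta$ on the perturbed input. The loss, step size and budget are illustrative assumptions, and `model` is assumed to map an image batch to IVT logits.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, eps=0.03, alpha=0.01, steps=5):
    """Sketch of Definition 2.1: inner max over delta (PGD in an L-inf ball),
    outer min over the network parameters. Hyperparameters are placeholders."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):                                    # approximate max_delta L(x + delta, y)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    loss = F.cross_entropy(model(x + delta.detach()), y)      # min_theta of the worst-case loss
    loss.backward()                                           # gradients for the optimiser step
    return loss.item()
```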
|
| 84 |
+
|
| 85 |
+
The machine learning literature has explored different forms of the generalised model in Definition 2.1, for example a better sparsity regulariser for adversarial training as in (Xu et al., 2018). In this work, we adopt the evaluation criteria of Hsieh et al. (2020), where one seeks to measure the susceptibility of features to adversarial perturbations. More precisely, we can gain insight into the deep features behind a prediction by visualising a compact set of relevant features selected by explanation methods on trained models, and by measuring the robustness of the models under adversarial attacks on the relevant or the irrelevant features.
|
| 86 |
+
|
| 87 |
+
We denote the set of all features as $U$, and consider a general feature set $S \subseteq U$. Since the features we are interested in are those in the image $\mathbf{x}$, we further denote the subset of $S$ related to the image as ${\mathbf{x}}_{S}$. To measure the robustness of the model, we rewrite the generalised model of Definition 2.1 following the evaluation criteria of Hsieh et al. (2020). A model on input $\mathbf{x}$ with an adversarial perturbation on the feature set $S$ then reads:
|
| 88 |
+
|
| 89 |
+
Definition 2.2: Adversarial $\delta$ & Robustness-$S$
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
{\varepsilon }_{{\mathbf{x}}_{S}}^{ * } \mathrel{\text{ := }} \left\{ {\mathop{\min }\limits_{\mathbf{\delta }}\parallel \mathbf{\delta }{\parallel }_{p}\;\text{ s.t. }f\left( {\mathbf{x} + \mathbf{\delta }}\right) \neq y,\;{\mathbf{\delta }}_{\bar{S}} = 0}\right\} ,
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
where $y$ is the ground-truth label of image $\mathbf{x}$; $\parallel \cdot {\parallel }_{p}$ denotes the adversarial perturbation norm; and $\bar{S} = U \smallsetminus S$ denotes the complement of the feature set $S$, with ${\delta }_{\bar{S}} = 0$ constraining the perturbation to act only on ${\mathbf{x}}_{S}$. We refer to ${\varepsilon }_{{\mathbf{x}}_{S}}^{ * }$ as Robustness-$\mathbf{S}$ (Hsieh et al., 2020), or the minimum adversarial perturbation norm on ${\mathbf{x}}_{S}$.
|
| 96 |
+
|
| 97 |
+
We then denote the relevant features selected by the explanation methods as ${S}_{r} \subseteq U$, with the irrelevant features as their complement $\overline{{S}_{r}} = U \smallsetminus {S}_{r}$. Thus, the robustness on the chosen feature sets ${S}_{r}$ and $\overline{{S}_{r}}$, tested on image $\mathbf{x}$, is:
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
\text{ Robustness- }{S}_{r} = {\varepsilon }_{{x}_{{S}_{r}}}^{ * };\;\text{ Robustness- }\overline{{S}_{r}} = {\varepsilon }_{{x}_{\overline{{S}_{r}}}}^{ * }\text{ . }
|
| 101 |
+
$$
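In practice, these quantities can be estimated by attacking only the chosen feature set and recording the smallest budget at which the prediction flips. The sketch below does this with a masked PGD attack over a grid of $L_2$ budgets; the grid, step size and attack itself are assumptions rather than the exact procedure of Hsieh et al. (2020).

```python
import torch
import torch.nn.functional as F

def robustness_S(model, x, y, mask, eps_grid, alpha=0.005, steps=20):
    """Estimate Robustness-S for one image x (shape 1xCxHxW) with label y (shape 1):
    the smallest L2 budget, with the perturbation restricted by the 0/1 `mask`
    over the feature set S, that changes the prediction. Coarse sketch only."""
    for eps in sorted(eps_grid):
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            loss = F.cross_entropy(model(x + delta * mask), y)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta + alpha * grad.sign()).detach()
            scale = torch.clamp(eps / delta.norm(p=2).clamp(min=1e-12), max=1.0)
            delta = (delta * scale).requires_grad_(True)      # project onto the L2 ball
        with torch.no_grad():
            if model(x + delta * mask).argmax(dim=1).item() != y.item():
                return eps                                    # attack on S succeeded
    return float("inf")                                       # no flip within the grid
```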
|
| 102 |
+
|
| 103 |
+
Table 1: Performance comparison for the task of Triplet recognition. The results are reported in terms of Average Precision (AP%) on the CholecT45 dataset using the official cross-validation split.
|
| 104 |
+
|
| 105 |
+
The first three columns ($AP_I$, $AP_V$, $AP_T$) measure component detection; the last three ($AP_{IV}$, $AP_{IT}$, $AP_{IVT}$) measure triplet association.

| Baseline | Backbone | $AP_I$ | $AP_V$ | $AP_T$ | $AP_{IV}$ | $AP_{IT}$ | $AP_{IVT}$ |
|---|---|---|---|---|---|---|---|
| Tripnet | ResNet-18 | 82.4 ± 2.5 | 54.1 ± 2.0 | 33.0 ± 2.3 | 30.6 ± 2.6 | 25.9 ± 1.5 | 21.2 ± 1.2 |
| Tripnet | ResNet-50 | 85.3 ± 1.3 | 57.8 ± 1.6 | 34.7 ± 1.9 | 31.3 ± 2.3 | 27.1 ± 2.4 | 21.9 ± 1.5 |
| Tripnet | DenseNet-121 | 86.9 ± 1.4 | 58.7 ± 1.5 | 35.6 ± 2.8 | 33.4 ± 3.4 | 27.8 ± 1.8 | 22.5 ± 2.3 |
| Attention Tripnet | ResNet-18 | 82.2 ± 2.6 | 56.7 ± 3.8 | 34.6 ± 2.2 | 30.8 ± 1.8 | 27.4 ± 1.3 | 21.7 ± 1.3 |
| Attention Tripnet | ResNet-50 | 81.9 ± 3.0 | 56.8 ± 1.1 | 34.1 ± 1.4 | 31.5 ± 2.2 | 27.5 ± 1.0 | 21.9 ± 1.2 |
| Attention Tripnet | DenseNet-121 | 83.7 ± 3.5 | 57.5 ± 3.2 | 34.3 ± 1.3 | 33.1 ± 2.4 | 28.5 ± 1.6 | 22.8 ± 1.3 |
| Rendezvous | ResNet-18 | 85.3 ± 1.4 | 58.9 ± 2.6 | 35.2 ± 3.4 | 33.6 ± 2.6 | 30.1 ± 2.8 | 24.3 ± 2.3 |
| Rendezvous | ResNet-50 | 85.4 ± 1.6 | 58.4 ± 1.4 | 34.7 ± 2.4 | 35.3 ± 3.5 | 30.8 ± 2.6 | 25.3 ± 2.7 |
| Rendezvous | DenseNet-121 | 88.5 ± 2.7 | 61.7 ± 1.7 | 36.7 ± 2.1 | 36.5 ± 4.7 | 32.1 ± 2.7 | 26.3 ± 2.9 |
| Rendezvous | Swin-T | 73.6 ± 1.9 | 48.3 ± 2.6 | 29.2 ± 1.4 | 28.1 ± 3.1 | 24.7 ± 2.0 | 20.4 ± 2.1 |
|
| 143 |
+
|
| 144 |
+
§ 3 EXPERIMENTAL RESULTS
|
| 145 |
+
|
| 146 |
+
In this section, we describe in detail the range of experiments that we conducted to validate our methodology.
|
| 147 |
+
|
| 148 |
+
§ 3.1 DATASET DESCRIPTION AND EVALUATION PROTOCOL
|
| 149 |
+
|
| 150 |
+
Dataset Description. We use the CholecT45 dataset (Nwoye & Padoy, 2022) to evaluate the robustness of the three SOTA models on the surgical action triplet recognition task.
|
| 151 |
+
|
| 152 |
+
Specifically, the CholecT45 dataset contains 45 videos annotated with 6 instrument classes, 10 verb classes, and 15 target classes (i.e. ${C}_{I} = 6$, ${C}_{V} = {10}$, ${C}_{T} = {15}$), generating $900$ ($6 \times 10 \times 15$) potential triplet label combinations. To maximise clinical utility, we use the top-100 combinations of relevant labels, selected by removing a large portion of spurious combinations according to class grouping and surgical relevance ratings (Nwoye et al., 2022). Each video contains around 2,000 annotated frames extracted at $1\,\mathrm{fps}$ in RGB, leading to a total of 90,489 recorded frames. To remove redundant information, frames captured after the laparoscope has been taken out of the body are blacked out with value $\left\lbrack {0,0,0}\right\rbrack$.

Table 2: Heatmap comparison under different feature extraction backbones. We display four randomly selected images from fold 3, using the best-performing weights trained and validated on folds 1, 2, 4 and 5.
|
| 153 |
+
|
| 154 |
+
|
| 155 |
+
|
| 156 |
+
Table 3: Top 5 predicted Triplet classes in each of the 10 models. The top 5 is assessed by the $A{P}_{IVT}$ score.
|
| 157 |
+
|
| 158 |
+
Each cell shows "triplet id: triplet (AP)". Swin-T was only evaluated with the Rendezvous baseline.

| Baseline | ResNet-18 | ResNet-50 | DenseNet-121 | Swin-T |
|---|---|---|---|---|
| Tripnet | 12: grasper grasp specimen_bag (82.60%) | 17: grasper retract gallbladder (86.95%) | 17: grasper retract gallbladder (86.93%) | – |
| | 17: grasper retract gallbladder (81.04%) | 12: grasper grasp specimen_bag (80.50%) | 12: grasper grasp specimen_bag (81.45%) | – |
| | 29: bipolar coagulate liver (77.11%) | 60: hook dissect gallbladder (77.15%) | 29: bipolar coagulate liver (80.19%) | – |
| | 60: hook dissect gallbladder (74.13%) | 29: bipolar coagulate liver (75.69%) | 60: hook dissect gallbladder (76.35%) | – |
| | 79: clipper clip cystic_duct (61.28%) | 6: grasper grasp cystic_plate (69.24%) | 79: clipper clip cystic_duct (67.75%) | – |
| Attention Tripnet | 12: grasper grasp specimen_bag (81.38%) | 17: grasper retract gallbladder (82.75%) | 17: grasper retract gallbladder (83.63%) | – |
| | 17: grasper retract gallbladder (78.70%) | 12: grasper grasp specimen_bag (78.53%) | 12: grasper grasp specimen_bag (80.01%) | – |
| | 29: bipolar coagulate liver (78.52%) | 29: bipolar coagulate liver (76.44%) | 29: bipolar coagulate liver (75.68%) | – |
| | 28: bipolar coagulate gallbladder (77.44%) | 60: hook dissect gallbladder (71.79%) | 60: hook dissect gallbladder (75.36%) | – |
| | 30: bipolar coagulate omentum (77.39%) | 28: bipolar coagulate gallbladder (70.68%) | 30: bipolar coagulate omentum (69.49%) | – |
| Rendezvous | 17: grasper retract gallbladder (85.57%) | 30: bipolar coagulate omentum (91.36%) | 84: irrigator dissect cystic_pedicle (96.84%) | 17: grasper retract gallbladder (78.36%) |
| | 29: bipolar coagulate liver (83.90%) | 17: grasper retract gallbladder (86.11%) | 30: bipolar coagulate omentum (89.60%) | 60: hook dissect gallbladder (72.57%) |
| | 12: grasper grasp specimen_bag (82.77%) | 29: bipolar coagulate liver (84.94%) | 17: grasper retract gallbladder (89.46%) | 12: grasper grasp specimen_bag (69.96%) |
| | 30: bipolar coagulate omentum (76.88%) | 12: grasper grasp specimen_bag (81.50%) | 12: grasper grasp specimen_bag (85.88%) | 30: bipolar coagulate omentum (67.03%) |
| | 60: hook dissect gallbladder (76.49%) | 28: bipolar coagulate gallbladder (79.60%) | 29: bipolar coagulate liver (84.43%) | 29: bipolar coagulate liver (66.08%) |
|
| 217 |
+
|
| 218 |
+
Evaluation Protocol. Triplet recognition is evaluated with the average precision (AP) metric. Our models directly output predictions for the triplet classes, from which $A{P}_{IVT}$ is computed. In contrast, $A{P}_{d}$ for $d \in \{ I,V,T,{IV},{IT}\}$ cannot be predicted explicitly; we therefore obtain the final predictions for the components $d \in \{ I,V,T,{IV},{IT}\}$ following (Nwoye & Padoy, 2022; Nwoye et al., 2022):
|
| 219 |
+
|
| 220 |
+
$$
|
| 221 |
+
{Y}_{d}{}^{k} = \mathop{\max }\limits_{m}\left\{ {{\mathbf{Y}}_{IVT}{}^{m}}\right\} ,\; m \in \left\{ {0,1,\ldots ,{C}_{IVT} - 1}\right\} \text{ s.t. }{h}_{d}\left( m\right) = k,
|
| 222 |
+
$$
|
| 223 |
+
|
| 224 |
+
where we compute the probability of class $k \in \left\{ {0,1,\ldots ,{C}_{d} - 1}\right\}$ under component $d$, and ${h}_{d}\left( \cdot \right)$ maps the class $m$ from the ${IVT}$ triplet compositions to the class under component $d$. In our robustness analysis, the main evaluation criterion is the robustness subject to the selected feature set (${S}_{r}$ or $\overline{{S}_{r}}$) on each backbone, using the formula in Definition 2.2.
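As an illustration, this mapping can be implemented directly once the class map $h_d$ is available; the array below is a toy stand-in for the dataset's actual triplet-to-component maps.

```python
import numpy as np

def component_logits(y_ivt, h_d, n_classes_d):
    """Map triplet scores to component scores:
    Y_d[k] = max over triplet classes m with h_d(m) == k of Y_IVT[m]."""
    y_d = np.full(n_classes_d, -np.inf)
    for m, p in enumerate(y_ivt):
        k = h_d[m]
        y_d[k] = max(y_d[k], p)
    return y_d

# Toy example with a made-up 5-triplet / 3-instrument mapping
y_ivt = np.array([0.1, 0.7, 0.2, 0.4, 0.05])
h_i = np.array([0, 1, 1, 2, 0])          # hypothetical map: triplet id -> instrument id
print(component_logits(y_ivt, h_i, 3))   # [0.1 0.7 0.4]
```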
|
| 225 |
+
|
| 226 |
+
§ 3.2 IMPLEMENTATION DETAILS
|
| 227 |
+
|
| 228 |
+
We evaluate the model performance based on five-fold cross-validation, where we split 45 full videos into 5 equal folds. The testing set is selected from these 5 folds, and we treat the remaining 4 folds as the training set. Moreover, 5 videos from the 36 training set videos are selected as validation set during training.
|
| 229 |
+
|
| 230 |
+
The models are trained using the Stochastic Gradient Descent (SGD) optimiser. The feature extraction backbones are initialised with ImageNet pre-trained weights. Both linear and exponential decay of the learning rate are used during training, with initial learning rates of $\left\{ {1{e}^{-2},1{e}^{-2},1{e}^{-2}}\right\}$ for the backbone, encoder and decoder parts, respectively. We set the batch size to 32 and select the epoch that performs best among all recorded epochs, up to ${AP}$-score saturation, on the validation set of the specified fold. To reduce computational load, the input images and corresponding segmentation masks are resized from 256 $\times$ 448 to 8 $\times$ 14. For a fair comparison, we ran all SOTA models (following all suggested protocols from the official repository) under the same conditions and using the official cross-validation split of the CholecT45 dataset (Nwoye & Padoy, 2022).
|
| 231 |
+
|
| 232 |
+
§ 3.3 EVALUATION ON DOWNSTREAM TASKS
|
| 233 |
+
|
| 234 |
+
In this section, we carefully analyse the current SOTA techniques for triplet recognition through the lens of feature-based explainability.
|
| 235 |
+
|
| 236 |
+
Results on Triplet Recognition with Cross-Validation. As the first part of our analysis, we investigate the performance limitations of current SOTA techniques, and emphasise how such limitations are linked to the lack of reliable features. The results are reported in Table 1. In a closer look at the results, we observe that ResNet-18, in general, performs the worst among the compared backbones. However, in one case, component detection, it performs better than ResNet-50 under the Attention Tripnet baseline. The intuition behind such behaviour is that the MIS setting involves ambiguous conditions and, in some cases, some frames might contain more spurious features that are better captured by it. We remark that the mean and standard deviation in Table 1 are calculated from the 5 folds of each combination of backbone and baseline.
|
| 237 |
+
|
| 238 |
+
We also observe that ResNet-50 performs better than ResNet-18 due to its deeper feature extraction. The best performance, for both tasks, component detection and triplet association, is reported by DenseNet-121. The intuition behind the performance gain is that DenseNet-121 somewhat mitigates the limited representation capability of ResNet-type networks, which are constrained by the identity shortcut that stabilises training. These results support our modelling hypothesis that the key to performance is the robustness of the deep features.
|
| 239 |
+
|
| 240 |
+
A key finding in our results is that, whilst existing SOTA techniques (Nwoye & Padoy, 2022; Nwoye et al., 2022) are devoted to developing new network mechanisms, one observes a substantial performance improvement when improving the feature extraction. Moreover, and unlike in other surgical tasks, current techniques for triplet recognition remain limited in performance. Why is this happening? Our results show that the key lies in reliable features (linked to robustness): when more meaningful features are enforced, through stronger backbones, a significant performance improvement is observed across all SOTA techniques.
|
| 241 |
+
|
| 242 |
+
To further support our previous findings, we also ran a set of experiments using the trending principle of Transformers. More precisely, a non-CNN backbone, the tiny Swin Transformer (Swin-T) (Liu et al., 2021), was also tested with Rendezvous; it yields rather low ${AP}$ scores on all 6 components compared with the 3 CNN backbones. This could be caused by the shifted windows in Swin-T: while shifted windows largely reduce the computational cost, they may bias feature attribution within bounding boxes. The resulting incoherent spreading can be seen clearly in the visualisation of detected relevant features for Swin-T in Figure 3 (a).
|
| 243 |
+
|
| 244 |
+
In Table 1 we displayed the average results over all classes, but what behaviour can be observed in the per-class performance? It can be seen from Table 3 that, although the best 5 predicted classes differ between models, the predicted compositions seem clinically sensible, supporting our previous discussion. In addition, the top-1 per-class ${AP}$ score is significantly higher for DenseNet-121 with Rendezvous.
|
| 245 |
+
|
| 246 |
+
Visualisation Results. Interpreting features is far from trivial. To address this issue, we provide a human-like comparison via heatmaps in Table 2. The implementation of the heatmaps is adapted from (Zhou et al., 2016). The displayed outputs reflect what the model focuses on, based on the extracted features. These results support our hypothesis that deep features, rather than any new network mechanism, are the key to making correct predictions.
|
| 247 |
+
|
| 248 |
+
We observed that for the worst-performing backbone, Swin-T, the extracted features are mostly spread across the image; however, the backbones that concentrate only on core attributes did not perform best either. In the best-performing DenseNet-121, a reasonable amount of attention is also paid to spurious attributes; this can be seen more directly in our later discussion of the robustness visualisation in Figure 3.
|
| 249 |
+
|
| 250 |
+
The reported probability for the predicted label again emphasises the outstanding performance of the DenseNet-121 backbone, in the sense that a higher probability for the correct label and a lower probability for an incorrect prediction are both better.
|
| 251 |
+
|
| 252 |
+
Why Do Surgical Triplet Recognition Models Fail? Robustness and Interpretability. We further support our findings through the lens of robustness. We use as evaluation criteria Robustness-${S}_{r}$ and Robustness-$\overline{{S}_{r}}$ with different explanation methods: vanilla gradient (Grad) (Shrikumar et al., 2017) and integrated gradients (IG) (Sundararajan et al., 2017). The results are shown in Table 4 and Figure 3.
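For reference, a minimal integrated-gradients sketch for ranking features and selecting $S_r$ is shown below; the zero baseline, step count and the assumption that `model` returns a 1×100 logit vector are illustrative choices, not the exact settings used in our experiments.

```python
import torch

def integrated_gradients(model, x, target, steps=32):
    """Integrated gradients for one frame x (1xCxHxW) and one class index.
    The top-k% of |attribution| values would then define the relevant set S_r."""
    baseline = torch.zeros_like(x)                          # a common (assumed) baseline choice
    total = torch.zeros_like(x)
    for alpha in torch.linspace(0.0, 1.0, steps):
        xi = (baseline + alpha * (x - baseline)).requires_grad_(True)
        score = model(xi)[0, target]
        grad, = torch.autograd.grad(score, xi)
        total += grad
    return ((x - baseline) * total / steps).abs()           # |IG| attribution map
```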
|
| 253 |
+
|
| 254 |
+
Table 4: Robustness measured on 400 examples (i.e. images) randomly selected from the fold 3 videos with exactly 1 labelled triplet. The top 25 percent of relevant (${S}_{r}$) or irrelevant ($\overline{{S}_{r}}$) features are selected by the two explanation methods, Grad and IG, and the attacks are performed on this selected 25 percent.
|
| 255 |
+
|
| 256 |
+
All backbones are embedded in the Rendezvous baseline.

| Attacked features | Explanation method | ResNet-18 | ResNet-50 | DenseNet-121 | Swin-T |
|---|---|---|---|---|---|
| Robustness-$\overline{S_r}$ | Grad | 2.599687 | 2.651435 | 3.287798 | 1.778592 |
| Robustness-$\overline{S_r}$ | IG | 2.621901 | 2.686064 | 3.319311 | 1.777737 |
| Robustness-$S_r$ | Grad | 2.517404 | 2.608013 | 3.188270 | 1.750599 |
| Robustness-$S_r$ | IG | 2.515343 | 2.603118 | 3.187848 | 1.749097 |
|
| 276 |
+
|
| 277 |
+
§ 3.3.1 COMPARISON BETWEEN DIFFERENT BACKBONES
|
| 278 |
+
|
| 279 |
+
In Table 4, we show the robustness results with the top 25% of features attacked, averaged over 400 randomly chosen frames with exactly 1 labelled triplet. On one hand, we observe that the DenseNet-121 backbone consistently outperforms the other network architectures on both evaluation criteria, Robustness-${S}_{r}$ and Robustness-$\overline{{S}_{r}}$. This suggests that the DenseNet-121 backbone captures explanation characteristics that are ignored by the other backbones. On the other hand, our results support the finding in (Hsieh et al., 2020): IG performs better than Grad, and attacking the relevant features yields lower robustness than perturbing the same percentage of irrelevant features.
|
| 280 |
+
|
| 281 |
+
§ 3.3.2 ROBUSTNESS EXPLANATION FOR SPECIFIC IMAGES
|
| 282 |
+
|
| 283 |
+
To evaluate the robustness explanation for specific images more objectively, we show in Figure 3: (a) a visualisation of important features, (b) Robustness-${S}_{r}$, (c) robustness against the percentage of top features, and (d) Robustness-$\overline{{S}_{r}}$. In Figure 3 (a), we visualise the top 15% of features (yellow dots) selected by Grad and IG, respectively, and overlay them on manually labelled regions containing the instrument (in red) and target (in green). We observe that the best-performing backbone on a specific image (as seen from the robustness comparison curves in Figure 3 (c)) is the one that pays attention not only to core attributes, but also to spurious attributes. In the image VID08-000188, the best-performing model is ResNet-18, which reflects the ambiguous conditions in individual images. In a closer look at Figure 3 (a), a small portion of the most relevant features extracted by ResNet-18 is spread outside the close surroundings of the object area. This importance of spurious attributes is further highlighted in image VID18-001156. We observe that DenseNet-121 provides the most robust result, highlighting relevant features within the tissue region and across the tool tip. The worst-performing model, ResNet-18, merely treated the core attributes as relevant.
|
| 284 |
+
|
| 285 |
+
|
| 286 |
+
|
| 287 |
+
Figure 3: Robustness analysis on randomly selected images. a. Visualisation of the top 15 percent of important features selected by the two explanation methods, Grad and IG; b. (/d.) the robustness measured on the relevant ${S}_{r}$ (/irrelevant $\overline{{S}_{r}}$) features selected by the two explanation methods, plotted against the percentage of top features defined as relevant; c. comparison of the robustness across the 4 backbones embedded in the Rendezvous baseline.
|
| 288 |
+
|
| 289 |
+
The relevant role of spurious attributes can be explained by the nature of the triplet, which contains a verb component that is not a physical object. Overall, we observe that reliable deep features are the key to robust models in triplet recognition. Moreover, unlike existing works on robustness against spurious features, we observe that both core and spurious attributes are key for the prediction.
|
| 290 |
+
|
| 291 |
+
§ 4 CONCLUSION
|
| 292 |
+
|
| 293 |
+
We present the first work to understand the failure of existing deep learning models for the task of triplet recognition. We provided an extensive analysis through the lens of robustness. The significance of our work lies in understanding and addressing the key issues behind the substantially limited performance of existing techniques. Our work offers a step forward towards more trustworthy and reliable models.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/3IKKBxByalk/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,344 @@
| 1 |
+
# Adversarial representation learning for private speech generation
|
| 2 |
+
|
| 3 |
+
David Ericsson ${}^{*{12}}$ Adam Östberg ${}^{*{12}}$ Edvin Listo Zec ${}^{2}$ John Martinsson ${}^{2}$ Olof Mogren ${}^{2}$
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
As more data is collected in various settings across organizations, companies, and countries, there has been an increase in the demand for user privacy. Developing privacy-preserving methods for data analytics is thus an important area of research. In this work we present a model based on generative adversarial networks (GANs) that learns to obfuscate specific sensitive attributes in speech data. We train a model that learns to hide sensitive information in the data, while preserving the meaning of the utterance. The model is trained in two steps: first to filter sensitive information in the spectrogram domain, and then to generate new and private information independent of the filtered one. The model is based on a U-Net CNN that takes mel-spectrograms as input. A MelGAN is used to invert the spectrograms back to raw audio waveforms. We show that it is possible to hide sensitive information such as gender by generating new data, trained adversarially to maintain utility and realism.
|
| 8 |
+
|
| 9 |
+
## 1. Introduction
|
| 10 |
+
|
| 11 |
+
With greater availability of computing power and large datasets, machine learning methods are increasingly being used to gain insights and make decisions based on data. While providing valuable insights, the methods may extract sensitive information which the provider of the data did not intend to disclose. An example of this is digital voice assistants. The user provides commands by speaking, and the speech is recorded through a microphone. A speech processing algorithm infers the spoken contents and executes the commands accordingly. However, it has been shown that such state-of-the-art methods may infer other sensitive attributes as well, such as intention, gender, emotional state, identity and many more (Srivastava et al., 2019). This raises the question of how to learn representations of data for such applications, which are useful for the intended purpose while respecting the privacy of people.
|
| 12 |
+
|
| 13 |
+
Speakers' identities can often be inferred from features such as timbre, pitch, and speaking style. Voice morphing techniques focus on making it difficult to infer information from these attributes by altering properties such as pitch and intensity. However, this often limits the utility of the signal by altering intonation or variability. Voice conversion approaches instead aim to mimic a specific speaker. In contrast, this paper aims at modelling a distribution over plausible speakers, given the current input signal, while hiding sensitive attributes.
|
| 14 |
+
|
| 15 |
+
In this paper, we approach the task of privacy-ensuring voice transformations using an adversarial learning set-up. Generative adversarial networks (GANs) were proposed as tractable generative models (Goodfellow et al., 2014), but have also been adapted to transform data and to provide privacy in the image domain (Huang et al., 2018). We build on these findings and propose PCMelGAN, a two-step GAN set-up similar to that of (Martinsson et al., 2020), that works in the mel-spectrogram domain. The set-up consists of a filter module which removes sensitive information, and a generator module which adds synthetic information in its place. The proposed method can successfully obfuscate sensitive attributes in speech data and generates realistic speech independent of the sensitive input attribute. Our results for censoring the gender attribute on the AudioMNIST dataset demonstrate that the method can maintain a high level of utility, i.e. retain qualities such as intonation and content, while obtaining strong privacy.
|
| 16 |
+
|
| 17 |
+
In our experiments, the filter module makes it difficult for an adversary to infer the gender of the speaker, and the generator module randomly assigns a synthetic value for the gender attribute which is used when generating the output. However, the proposed method is designed to be able to censor any attribute of a categorical nature. The proposed solution is agnostic to the downstream task, with the objective to make the data as private as possible given a distortion constraint.
|
| 18 |
+
|
| 19 |
+
---
|
| 20 |
+
|
| 21 |
+
*Equal contribution ${}^{1}$ Chalmers University of Technology, Gothenburg, Sweden ${}^{2}$ RISE Research Institutes of Sweden. Correspondence to: David Ericsson <daverics@chalmers.se>, Adam Östberg <adamostberg@hotmail.com>, Edvin Listo Zec <edvin.listo.zec@ri.se>.
|
| 22 |
+
|
| 23 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 24 |
+
|
| 25 |
+
---
|
| 26 |
+
|
| 27 |
+
## 2. Related work
|
| 28 |
+
|
| 29 |
+
Adversarial representation learning. Research within adversarial learning aims to train two or more models simultaneously with conflicting objective functions: one network is trained on the main task, and an adversary network is trained to identify the other network's output. Within the image domain, adversarial learning has had great success in a wide variety of tasks since the introduction of generative adversarial networks (GANs) (Goodfellow et al., 2014). Examples of such tasks are image-to-image transformations (Isola et al., 2017), and synthesis of facial expressions and human pose (Song et al., 2017; Tang et al., 2019).
|
| 30 |
+
|
| 31 |
+
Much less work with GANs has been done related to speech and audio. (Pascual et al., 2017) introduce SEGAN (speech enhancement GAN) and thus seem to be the first to apply GANs to the task of speech generation and enhancement. The authors train a model end-to-end, working on the raw audio signal directly. (Higuchi et al., 2017; Qin & Jiang, 2018) use adversarial learning to perform speech enhancement for automatic speech recognition (ASR). (Donahue et al., 2018) study the benefit of GAN-based speech enhancement for ASR by extending SEGAN to operate on a time-frequency representation.
|
| 32 |
+
|
| 33 |
+
While these works are applying GANs to tackle the challenges within speech, they are limited to a supervised setting. The two most notable works in an unsupervised setting are (Donahue et al., 2019) and (Engel et al., 2019). (Donahue et al., 2019) focus on learning representations in an adversarial manner in order to synthesize audio data both on waveform and spectrogram level, but still show that it is a challenging task, concluding that most perceptually-informed spectrograms are non-invertible.
|
| 34 |
+
|
| 35 |
+
Intermediate speech representations. It is challenging to work on raw waveforms when modeling audio data, due to the high temporal resolution as well as the complex relationship between short-term and long-term dependencies. This leads to most work being done in a lower-dimensional representation domain, usually a spectrogram. Two common intermediate speech representations are aligned linguistic features (Oord et al., 2016) and mel-spectrograms (Shen et al., 2018; Gibiansky et al., 2017). The mel scale is a nonlinear frequency scale that is linear in terms of human perception. It has the benefit of emphasizing differences in lower frequencies, which are important to humans. At the same time, it puts less weight on high-frequency details, which typically consist of bursts of noise that do not need to be as distinguishable. (Engel et al., 2019) train a GAN to synthesize magnitude-phase spectrograms of note recordings for different musical instruments. (Kumar et al., 2019) tackle the problem of non-invertible spectrograms by introducing MelGAN: a fully convolutional model designed
|
| 36 |
+
|
| 37 |
+
to invert mel-spectrograms to raw waveforms.
|
| 38 |
+
|
| 39 |
+
Adversarial representation learning for privacy. Adversarial representation learning has also been studied as a method of preserving privacy. More specifically, it has been used with the goal of hiding sensitive attributes under some utility constraint. This work has mainly focused on images and/or videos, and some tasks related to text data (Zhang et al., 2018; Xie et al., 2017; Beutel et al., 2017; Raval et al., 2017).
|
| 40 |
+
|
| 41 |
+
To our knowledge, (Srivastava et al., 2019) are the first ones to apply privacy related adversarial representation learning to audio data. The authors study the problem of protecting the speaker identity of a person based on an encoded representation of their speech. The encoder is trained for an automatic speech recognition (ASR) task. While the authors manage to hide the speaker identity to some extent, their method also relies on knowing labels for the downstream task.
|
| 42 |
+
|
| 43 |
+
In the works of (Edwards & Storkey, 2016; Huang et al., 2018) and (Martinsson et al., 2020), the authors apply adversarial representation learning to censor images, without using any downstream task labels.
|
| 44 |
+
|
| 45 |
+
Voice conversion. Voice conversion algorithms aim to learn a function that maps acoustic features from a source speaker $X$ to a target speaker $Y$. Some notable works on this involving GANs are (Hsu et al., 2017; Pasini, 2019; Kameoka et al., 2018; Kaneko et al., 2019). Similar to (Kameoka et al., 2018), we do not require any parallel utterances, transcriptions, or time alignment for the speech generation part. (Qian et al., 2018; Aloufi et al., 2019) use voice conversion to study privacy in speech. However, these works differ from ours in that they have a target speaker to which they convert the voices of the input speakers.
|
| 46 |
+
|
| 47 |
+
## 3. Problem setting
|
| 48 |
+
|
| 49 |
+
### 3.1. Private conditional GAN
|
| 50 |
+
|
| 51 |
+
Private conditional GAN (PCGAN) (Martinsson et al., 2020) is a model that builds upon the generative adversarial privacy (GAP) framework described by (Huang et al., 2017; Huang et al., 2018). Both works study adversarial representation learning for obfuscating sensitive attributes in images. The authors of PCGAN show that adding a generator to the filter model in the GAP framework strengthens privacy while maintaining utility. The filter network obfuscates the sensitive attribute $s$ in the image, and the objective of the generator is to take the filtered image ${\mathbf{x}}^{\prime }$ as input and generate a new synthetic instance ${s}^{\prime }$ of the sensitive attribute in it, independent of the original $s$.
|
| 52 |
+
|
| 53 |
+
The filter and the generator networks are trained against their respective discriminators ${\mathcal{D}}_{\mathcal{F}}$ and ${\mathcal{D}}_{\mathcal{G}}$ in an adversarial set-up. The discriminator ${\mathcal{D}}_{\mathcal{F}}$ is trained to predict $s$ in the transformed image ${\mathbf{x}}^{\prime }$, while the filter $\mathcal{F}$ is trained to transform images so as to fool the discriminator. The training objective of the filter can be described with the following minimax setup:
|
| 54 |
+
|
| 55 |
+
$$
|
| 56 |
+
\mathop{\min }\limits_{\mathcal{F}}\mathop{\max }\limits_{{\mathcal{D}}_{\mathcal{F}}}{\mathbb{E}}_{\mathbf{x},{\mathbf{z}}_{1}}\left\lbrack {\ell }_{\mathcal{F}}\left( {\mathcal{D}}_{\mathcal{F}}\left( \mathcal{F}\left( \mathbf{x},{\mathbf{z}}_{1}\right) \right) ,s\right) \right\rbrack \tag{1}
|
| 57 |
+
$$
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
\text{s.t.}{\mathbb{E}}_{\mathbf{x},{\mathbf{z}}_{1}}\left\lbrack {d\left( {\mathcal{F}\left( {\mathbf{x},{\mathbf{z}}_{1}}\right) ,\mathbf{x}}\right) }\right\rbrack \leq {\varepsilon }_{1}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
+
where ${\varepsilon }_{1} \geq 0$ denotes the allowed distortion in the transformation performed by the filter.
|
| 64 |
+
|
| 65 |
+
The purpose of the generator $\mathcal{G}$ is to generate a synthetic ${s}^{\prime }$ , independent of the original $s$ . Its discriminator, ${\mathcal{D}}_{\mathcal{G}}$ , takes as input a real image or an image generated by $\mathcal{G}$ , and is trained to predict $s$ in the first case, and to predict the "fake" in the second, as in the semi-supervised learning setup in (Salimans et al., 2016).
|
| 66 |
+
|
| 67 |
+
This setup is defined with the following minimax game:
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
\mathop{\min }\limits_{\mathcal{G}}\mathop{\max }\limits_{{\mathcal{D}}_{\mathcal{G}}}{\mathbb{E}}_{\mathbf{x},{s}^{\prime },{\mathbf{z}}_{1},{\mathbf{z}}_{2}}\left\lbrack {{\ell }_{\mathcal{G}}\left( {{\mathcal{D}}_{\mathcal{G}}\left( {\mathcal{G}\left( {\mathcal{F}\left( {\mathbf{x},{\mathbf{z}}_{1}}\right) ,{s}^{\prime },{\mathbf{z}}_{2}}\right) }\right) ,\text{ fake }}\right) }\right\rbrack
|
| 71 |
+
$$
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
+ {\mathbb{E}}_{\mathbf{x}}\left\lbrack {\ell }_{\mathcal{G}}\left( {\mathcal{D}}_{\mathcal{G}}\left( \mathbf{x}\right) ,s\right) \right\rbrack \tag{2}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
\text{s.t.}{\mathbb{E}}_{\mathbf{x},{s}^{\prime },{\mathbf{z}}_{1},{\mathbf{z}}_{2}}\left\lbrack {d\left( {\mathcal{G}\left( {\mathcal{F}\left( {\mathbf{x},{\mathbf{z}}_{1}}\right) ,{s}^{\prime },{\mathbf{z}}_{2}}\right) ,\mathbf{x}}\right) }\right\rbrack \leq {\varepsilon }_{2}
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
where ${\varepsilon }_{2} \geq 0$ is the allowed distortion in the transformation performed by the generator.
|
| 82 |
+
|
| 83 |
+
### 3.2. MelGAN
|
| 84 |
+
|
| 85 |
+
MelGAN is a non-autoregressive feed-forward convolutional model which is trained to learn to invert mel-spectrograms to raw waveforms (Kumar et al., 2019). The MelGAN generator consists of a stack of transposed convolutional layers, and the model uses three different discriminators which each operate at different resolutions on the raw audio. The discriminators are trained using a hinge loss version (Lim & Ye, 2017) of the original GAN objective. The generator is trained using the original GAN objective, combined with a feature matching loss (Larsen et al., 2015), which minimizes the L1 distance between the discriminator feature maps of real and synthetic audio.
|
| 86 |
+
|
| 87 |
+
For each layer $i$ , let ${\mathcal{D}}_{k}^{\left( i\right) }\left( \cdot \right)$ denote the output from the $k$ th discriminator. The feature matching loss is computed as ${\mathcal{L}}_{\mathrm{{FM}}}\left( {\mathcal{G},{\mathcal{D}}_{k}}\right) =$ ${\mathbb{E}}_{\mathbf{x},\mathbf{m}}\left\lbrack {\mathop{\sum }\limits_{i}\frac{1}{{N}_{i}}{\begin{Vmatrix}{\mathcal{D}}_{k}^{\left( i\right) }\left( \mathbf{x}\right) - {\mathcal{D}}_{k}^{\left( i\right) }\left( \mathcal{G}\left( \mathbf{m}\right) \right) \end{Vmatrix}}_{1}}\right\rbrack \;$ where ${N}_{i}$ is the number of output units in layer $i,\mathbf{x}$ is the raw audio signal and $\mathbf{m}$ is its corresponding mel-spectrogram. The training objectives for the discriminators are then formulated as:
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\mathop{\min }\limits_{{\mathcal{D}}_{k}}\left( {{\mathbb{E}}_{\mathbf{x}}\left\lbrack {\min \left( {0,1 - {\mathcal{D}}_{k}\left( \mathbf{x}\right) }\right) }\right\rbrack }\right. \tag{3}
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
\left. {+{\mathbb{E}}_{\mathbf{m},\mathbf{z}}\left\lbrack {\min \left( {0,1 + {\mathcal{D}}_{k}\left( {\mathcal{G}\left( {\mathbf{m},\mathbf{z}}\right) }\right) }\right) }\right\rbrack }\right) \text{.}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
The generator objective is:
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
\mathop{\min }\limits_{\mathcal{G}}{\mathbb{E}}_{\mathbf{m},\mathbf{z}}\left\lbrack {\mathop{\sum }\limits_{{k = 1}}^{3} - {\mathcal{D}}_{k}\left( {\mathcal{G}\left( {\mathbf{m},\mathbf{z}}\right) }\right) }\right\rbrack + \gamma \mathop{\sum }\limits_{{k = 1}}^{3}{\mathcal{L}}_{\mathrm{{FM}}}\left( {\mathcal{G},{\mathcal{D}}_{k}}\right) ,
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+
(4)
|
| 104 |
+
|
| 105 |
+
where $\gamma$ is a hyperparameter controlling the balance between the feature matching and fooling the discriminators.
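A compact sketch of these objectives is given below; the discriminator outputs and feature maps are assumed to be provided by the multi-scale discriminators, and the value of $\gamma$ used here is only a placeholder.

```python
import torch
import torch.nn.functional as F

def feature_matching_loss(real_feats, fake_feats):
    """L_FM for one discriminator: per-layer mean L1 distance between feature maps
    of real and generated audio, summed over layers (each mean implements 1/N_i)."""
    return sum(F.l1_loss(ff, fr) for fr, ff in zip(real_feats, fake_feats))

def discriminator_hinge_loss(d_real, d_fake):
    """Standard hinge loss (to be minimised) for one discriminator D_k, cf. Eq. (3)."""
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def generator_loss(d_fakes, fm_losses, gamma=10.0):
    """Generator objective, cf. Eq. (4): fool all discriminators plus gamma * L_FM."""
    return sum(-d.mean() for d in d_fakes) + gamma * sum(fm_losses)
```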
|
| 106 |
+
|
| 107 |
+
### 3.3. Our contribution
|
| 108 |
+
|
| 109 |
+
Notation. Let $s \in \{ 0,1\}$ be a binary sensitive attribute, and ${s}^{\prime } \sim \mathcal{U}\{ 0,1\}$ . Let $\mathbf{z} \in \mathcal{Z}$ be a noise vector, $\mathbf{x} \in \mathcal{X}$ a raw waveform and $\mathbf{m} \in \mathcal{M}$ a mel-spectrogram representation of $\mathbf{x}$ . Let $\mathcal{D}$ be a discriminator, $\mathcal{F} : \mathcal{M} \times {\mathcal{Z}}_{1} \rightarrow {\mathcal{M}}^{\prime }$ a filter network and $\mathcal{G} : {\mathcal{M}}^{\prime } \times {\mathcal{Z}}_{2} \rightarrow {\mathcal{M}}^{\prime \prime }$ a generator. Let ${\mathcal{X}}^{\prime }$ and ${\mathcal{X}}^{\prime \prime }$ denote the MelGAN inverted sets of ${\mathcal{M}}^{\prime }$ and ${\mathcal{M}}^{\prime \prime }$ . Each $\mathbf{x}$ is paired with a sensitive attribute: $\left( {{\mathbf{x}}_{i},{s}_{i}}\right)$ . Each sample $\left( {{\mathbf{x}}_{i},{s}_{i}}\right)$ has a corresponding utility attribute ${u}_{i}$ , only used for evaluation. In our case this is the spoken digit in the recording, i.e. ${u}_{i} \in \{ 0,\ldots ,9\}$ .
|
| 110 |
+
|
| 111 |
+
In this work we combine PCGAN and MelGAN to adversarially learn private representations of speech data, and name our model PCMelGAN. The whole pipeline is shown in Figure 1. The speech recording $\mathbf{x}$ is mapped to a mel-spectrogram $\mathbf{m}$. PCGAN, with its filter and generator modules $\mathcal{F}$ and $\mathcal{G}$, is trained to ensure privacy in the mel-spectrogram. We use a pre-trained MelGAN to invert the mel-spectrogram output of our model ${\mathbf{m}}^{\prime \prime } \in {\mathcal{M}}^{\prime \prime }$ to a raw waveform $\mathbf{x} \in {\mathcal{X}}^{\prime \prime }$.
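The snippet below sketches this pipeline's forward pass with toy single-convolution stand-ins for $\mathcal{F}$ and $\mathcal{G}$; the real modules are U-Nets, and the final inversion to audio by a pretrained MelGAN vocoder is omitted here.

```python
import torch
import torch.nn as nn

# Placeholder modules: in the paper F and G are U-Nets and the vocoder is a
# pretrained MelGAN. Mel-spectrograms are assumed to be (batch, 1, 80, T).
class Filter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(2, 1, 3, padding=1)
    def forward(self, m, z1):
        return self.net(torch.cat([m, z1], dim=1))             # filtered mel m'

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, 3, padding=1)
    def forward(self, m_f, s_new, z2):
        s_map = s_new.view(-1, 1, 1, 1).expand_as(m_f)          # broadcast synthetic attribute
        return self.net(torch.cat([m_f, s_map, z2], dim=1))     # synthesised mel m''

F_net, G_net = Filter(), Generator()
m = torch.randn(4, 1, 80, 32)                                   # mel-spectrogram batch
z1, z2 = torch.randn_like(m), torch.randn_like(m)
s_new = torch.randint(0, 2, (4,)).float()                        # s' ~ U{0,1}
m_pp = G_net(F_net(m, z1), s_new, z2)
print(m_pp.shape)   # torch.Size([4, 1, 80, 32]); m'' would then be vocoded to audio
```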
|
| 112 |
+
|
| 113 |
+
We implement $\mathcal{F}$ and $\mathcal{G}$ using a U-Net architecture similar to (Martinsson et al.,2020). For ${\mathcal{D}}_{\mathcal{F}}$ and ${\mathcal{D}}_{\mathcal{G}}$ we use the AlexNet architecture (Krizhevsky et al., 2012) as used in (Becker et al., 2018) for gender classification in the spectrogram domain. We use categorical cross entropy as loss functions denoted by ${\ell }_{\mathcal{F}}$ and ${\ell }_{\mathcal{G}}$ . The L1-norm is used as the distortion measure $d$ . The constrained optimization problem is reformulated as an unconstrained one by relaxing it using the quadratic penalty method (Nocedal & Wright, 2006). The distortion constraint is denoted by $\varepsilon$ and the penalty parameter by $\lambda$ . The parameters are updated using Adam (Kingma & Ba, 2014).
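As a small sketch, one standard form of the quadratic-penalty relaxation of the distortion constraint is shown below; the exact penalty form used in (Martinsson et al., 2020) may differ in detail.

```python
import torch

def penalised_loss(adv_loss, x_out, x_in, eps, lam=100.0):
    """Quadratic-penalty relaxation of E[d(x_out, x_in)] <= eps, with d the L1
    distortion and lam corresponding to lambda = 10^2 in the text (a sketch).
    adv_loss is the adversarial (cross-entropy) term of the filter or generator."""
    distortion = torch.mean(torch.abs(x_out - x_in))
    violation = torch.clamp(distortion - eps, min=0.0)
    return adv_loss + lam * violation ** 2
```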
|
| 114 |
+
|
| 115 |
+
As a baseline comparison, we use PCMelGAN where the generator module is excluded. Thus we can directly see how much the generator module adds to the privacy task.
|
| 116 |
+
|
| 117 |
+

|
| 118 |
+
|
| 119 |
+
Figure 1. Schematic diagram of our model: PCMelGAN.
|
| 120 |
+
|
| 121 |
+
## 4. Experiments
|
| 122 |
+
|
| 123 |
+
### 4.1. Data
|
| 124 |
+
|
| 125 |
+
We use the AudioMNIST dataset to conduct our experiments (Becker et al., 2018). AudioMNIST consists of 30,000 audio recordings, approximately 9.5 hours, of spoken digits (0-9) in English. Each digit is repeated 50 times by each of the 60 different speakers. The audio files have a sampling frequency of ${48}\mathrm{{kHz}}$ and are saved in a 16-bit integer format. The audio recordings are also labeled with information such as the age, gender, origin and accent of each speaker.
|
| 126 |
+
|
| 127 |
+
In this paper, we use 10,000 samples as a training set and 2,000 samples as a test set. For the training set, we randomly sample speakers such that it consists of 10 female and 10 male speakers. Similarly, the test set consists of 2 female and 2 male speakers. We downsample the recordings to 8 $\mathrm{{kHz}}$ and use zero padding to get an equal length of 8192 for each recording.
|
| 128 |
+
|
| 129 |
+
### 4.2. Data-driven implementation
|
| 130 |
+
|
| 131 |
+
To encourage reproducibility, we make our code publicly available ${}^{1}$. The model is trained end-to-end, with the hyperparameters ${\eta }_{{\mathcal{D}}_{\mathcal{F}}},{\eta }_{{\mathcal{D}}_{\mathcal{G}}} = {0.0004}$, ${\eta }_{\mathcal{F}},{\eta }_{\mathcal{G}} = {0.0004}$, $\lambda = {10}^{2}$, $\varepsilon \in \{ {0.005},{0.01},{0.05},{0.1}\}$ and $\left( {{\beta }_{1},{\beta }_{2}}\right) = \left( {{0.5},{0.9}}\right)$. During training, $\mathbf{m}$ is computed using the short-time Fourier transform with a window size of 1024, a hop length of 256 and 80 mel bins. We normalize and clip the spectrograms to $\left\lbrack {-1,1}\right\rbrack$ as in (Donahue et al., 2019), with the exception that the normalization is performed on the whole spectrogram as opposed to for each frequency bin.
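A torchaudio sketch of this preprocessing is shown below; the log-scaling and the statistics used for normalisation are assumptions beyond what is stated above.

```python
import torch
import torchaudio

# Mel-spectrogram computation as described in the text: window 1024, hop 256,
# 80 mel bins, 8 kHz audio; normalisation over the whole spectrogram, clipped to [-1, 1].
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=8000, n_fft=1024, win_length=1024, hop_length=256, n_mels=80
)

def preprocess(waveform):
    m = torch.log(mel(waveform) + 1e-5)          # log-mel spectrogram (assumed log scaling)
    m = (m - m.mean()) / (m.std() + 1e-5)        # normalise over the whole spectrogram
    return m.clamp(-1.0, 1.0)                    # clip to [-1, 1]

x = torch.randn(1, 8192)                         # a zero-padded 8 kHz recording
print(preprocess(x).shape)                       # torch.Size([1, 80, 33])
```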
|
| 132 |
+
|
| 133 |
+
### 4.3. Evaluation
|
| 134 |
+
|
| 135 |
+
For each configuration of hyperparameters, we train the model using five different random seeds for 1000 epochs on a NVIDIA V100 GPU. We evaluate the experiments both in the spectrogram and in the raw waveform domain. In each domain, we train digit and gender classifiers on the corresponding training sets, ${\mathcal{X}}_{\text{train }}$ and ${\mathcal{M}}_{\text{train }}$ . The classifiers that predict gender are used as a privacy measure, and the classifiers that predict spoken digits are used as a utility measure. We evaluate the fixed classifiers on ${\mathcal{M}}_{\text{test }}^{\prime }$ and ${\mathcal{M}}_{\text{test }}^{\prime \prime }$ , to directly compare the added benefit by a generator module on-top of the filter.
|
| 136 |
+
|
| 137 |
+
We also measure the quality of the generated audio using Fréchet Inception Distance (FID) (Heusel et al., 2017). FID is frequently used to measure the quality of GAN-generated images. Since we are interested in measuring generated audio quality, we replace the commonly used Inception v3 network with an AudioNet (Becker et al., 2018) digit classifier using the features from the last convolutional layer.
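For completeness, the Fréchet distance itself can be computed from the two sets of classifier embeddings as in the sketch below (feature extraction from the AudioNet classifier is omitted; the 64-dimensional random features are only for the toy usage example).

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two embedding sets
    (rows are per-clip features, e.g. from the last conv layer of a digit classifier)."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)              # matrix square root of the covariance product
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(c1 + c2 - 2.0 * covmean))

# Toy usage with random 64-dimensional embeddings
print(frechet_distance(np.random.randn(200, 64), np.random.randn(200, 64)))
```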
|
| 138 |
+
|
| 139 |
+
## 5. Results
|
| 140 |
+
|
| 141 |
+
Quantitative results. In Table 1, the mean accuracy and standard deviation of the fixed classifiers on the test set are shown over five runs, in the spectrogram and audio domains respectively. Privacy is measured by the accuracy of the fixed classifier predicting the original gender ${s}_{i}$, where an accuracy close to ${50}\%$ corresponds to more privacy. Utility is measured by the accuracy of the fixed classifier predicting the digit ${u}_{i}$, where a higher accuracy corresponds to greater utility.

Table 1. The fixed classifiers' mean accuracy and standard deviation on the test sets ${\mathcal{M}}_{\text{test}}^{\prime}$ and ${\mathcal{M}}_{\text{test}}^{\prime\prime}$ (top, spectrogram domain) and on ${\mathcal{X}}_{\text{test}}^{\prime}$ and ${\mathcal{X}}_{\text{test}}^{\prime\prime}$ (bottom, audio domain) for varying values of $\varepsilon$. For privacy (gender) an accuracy close to ${50}\%$ is better. For utility (digit), a higher accuracy is better.

| Domain | Dist. $\varepsilon$ | Privacy: Baseline | Privacy: PCMelGAN | Utility: Baseline | Utility: PCMelGAN |
|---|---|---|---|---|---|
| Spectrogram | 0.005 | 49.9 ± 2.2 | 48.7 ± 2.4 | 84.1 ± 2.8 | 81.1 ± 3.7 |
| Spectrogram | 0.01 | 55.0 ± 4.7 | 50.9 ± 1.4 | 79.9 ± 4.3 | 78.8 ± 7.8 |
| Spectrogram | 0.05 | 61.3 ± 10.2 | 51.0 ± 0.7 | 80.9 ± 8.2 | 54.7 ± 23.8 |
| Spectrogram | 0.1 | 48.9 ± 1.0 | 49.8 ± 0.5 | 29.1 ± 7.5 | 15.1 ± 5.4 |
| Audio | 0.005 | 52.2 ± 3.6 | 49.1 ± 1.6 | 36.8 ± 4.0 | 49.4 ± 9.8 |
| Audio | 0.01 | 53.2 ± 3.2 | 51.3 ± 1.6 | 34.3 ± 8.5 | 49.2 ± 8.6 |
| Audio | 0.05 | 61.5 ± 8.1 | 51.2 ± 0.7 | 28.0 ± 15.8 | 31.3 ± 10.3 |
| Audio | 0.1 | 51.0 ± 1.3 | 49.6 ± 0.4 | 11.4 ± 1.7 | 15.8 ± 2.3 |

In Table 2, FID scores are shown for our model working in the audio domain. In Figure 3, a recording of a woman saying "zero" is shown, together with the baseline (filter) and PCMelGAN generating a male and a female spectrogram.

Table 2. The mean FID score and standard deviation on the test sets ${\mathcal{X}}_{\text{test}}^{\prime}$ and ${\mathcal{X}}_{\text{test}}^{\prime\prime}$ for different $\varepsilon$. A lower value corresponds to more realistic audio.

| Dist. $\varepsilon$ | FID Audio: Baseline | FID Audio: PCMelGAN |
|---|---|---|
| 0.005 | 20.17 ± 4.04 | 10.12 ± 3.15 |
| 0.01 | 27.27 ± 4.50 | 10.02 ± 2.27 |
| 0.05 | 29.59 ± 5.77 | 20.22 ± 4.87 |
| 0.1 | 41.50 ± 3.49 | 22.32 ± 5.20 |

Qualitative results. We provide samples from the AudioMNIST test set that were transformed by our model${}^{2}$. The shared folder contains the original sound clips and their corresponding transformed versions.

---

${}^{1}$ https://github.com/daverics/pcmelgan

${}^{2}$ https://www.dropbox.com/sh/oangx84ibhzodhs/AAAfG-PBW4Ne8KwdipAmKFy1a?dl=0

---



Figure 2. Privacy vs utility trade-off for the baseline and PCMelGAN for varying $\varepsilon$. Orange and blue points correspond to evaluating the fixed classifiers for digits and gender on the spectrogram datasets ${\mathcal{M}}_{\text{test}}^{\prime}$ and ${\mathcal{M}}_{\text{test}}^{\prime\prime}$ (left), and raw waveform datasets ${\mathcal{X}}_{\text{test}}^{\prime}$ and ${\mathcal{X}}_{\text{test}}^{\prime\prime}$ (right). Lower right corner is better.



Figure 3. Spectrograms of the utterance "zero": the original recording of a female speaker (top left), the transformed version from the baseline (top right), and our model's output with a sampled male (bottom left) and a sampled female (bottom right).

## 6. Discussion

Table 1 (top) and Figure 2 (left) demonstrate that the proposed method achieves strong privacy when working in the mel-spectrogram domain, while retaining high utility. We notice in Table 1 (bottom) and in Figure 2 (right) that the proposed method is able to provide privacy in the audio domain, but at a loss of utility. However, when comparing to the baseline, we see that generating a synthetic $s$ both increases utility and ensures privacy. In the spectrogram domain, the filter model seems to be enough to obtain both privacy and utility. In both the spectrogram domain and the audio domain, the proposed approach achieves high privacy. We expected privacy to suffer under a stricter distortion budget $\varepsilon$, but this was not observed in the experiments. While a quick sanity check with $\varepsilon = {10}^{-5}$ resulted in the model learning the identity map (with no additional privacy), more experiments need to be carried out to detect when privacy starts to deteriorate with lower $\varepsilon$. It is worth noting that for some $\varepsilon$ we have a large standard deviation. We hypothesize that this could be improved by using more diverse data, and future work should include evaluating the proposed method on longer sentences.

In Table 2 we note that our model obtains substantially better FID scores than the baseline in the audio domain. We conclude that adding the synthetic sample of the sensitive attribute improves the realism and fidelity of the speech signal. We also observe this when listening to the generated sounds (see the qualitative results above).

## 7. Conclusions

In this work we have proposed an adversarially trained model that learns to make speech data private. We do this by first filtering a sensitive attribute, and then generating a new, independent sensitive attribute. We formulate this as an unconstrained optimization problem with a distortion budget. This is done in the spectrogram domain, and we use a pre-trained MelGAN to invert the generated mel-spectrogram back to a raw waveform. We compare our model with the baseline of just censoring the attribute, and show that we gain both privacy and utility by generating a new sensitive attribute in the audio domain.

## References

Aloufi, R., Haddadi, H., and Boyle, D. Emotionless: Privacy-preserving speech analysis for voice assistants. arXiv preprint arXiv:1908.03632, 2019.

Becker, S., Ackermann, M., Lapuschkin, S., Müller, K.-R., and Samek, W. Interpreting and explaining deep neural networks for classification of audio signals. CoRR, abs/1807.03418, 2018.

Beutel, A., Chen, J., Zhao, Z., and Chi, E. H. Data decisions and theoretical implications when adversarially learning fair representations. arXiv preprint arXiv:1707.00075, 2017.

Donahue, C., Li, B., and Prabhavalkar, R. Exploring speech enhancement with generative adversarial networks for robust speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5024-5028. IEEE, 2018.

Donahue, C., McAuley, J., and Puckette, M. Adversarial audio synthesis. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ByMVTsR5KQ.

Edwards, H. and Storkey, A. J. Censoring representations with an adversary. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, 2016.

Engel, J., Agrawal, K. K., Chen, S., Gulrajani, I., Donahue, C., and Roberts, A. GANSynth: Adversarial neural audio synthesis. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=H1xQVn09FX.

Gibiansky, A., Arik, S., Diamos, G., Miller, J., Peng, K., Ping, W., Raiman, J., and Zhou, Y. Deep Voice 2: Multi-speaker neural text-to-speech. In Advances in Neural Information Processing Systems, pp. 2962-2970, 2017.

Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial networks, 2014.

Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium, 2017.

Higuchi, T., Kinoshita, K., Delcroix, M., and Nakatani, T. Adversarial training for data-driven speech enhancement without parallel corpus. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 40-47. IEEE, 2017.

Hsu, C.-C., Hwang, H.-T., Wu, Y.-C., Tsao, Y., and Wang, H.-M. Voice conversion from unaligned corpora using variational autoencoding Wasserstein generative adversarial networks. arXiv preprint arXiv:1704.00849, 2017.

Huang, C., Kairouz, P., Chen, X., Sankar, L., and Rajagopal, R. Context-aware generative adversarial privacy. Entropy, 19(12), 2017. ISSN 1099-4300. doi: 10.3390/e19120656. URL https://www.mdpi.com/1099-4300/19/12/656.

Huang, C., Kairouz, P., and Sankar, L. Generative adversarial privacy: A data-driven approach to information-theoretic privacy. In 2018 52nd Asilomar Conference on Signals, Systems, and Computers, pp. 2162-2166, Oct 2018. doi: 10.1109/ACSSC.2018.8645532.

Isola, P., Zhu, J., Zhou, T., and Efros, A. A. Image-to-image translation with conditional adversarial networks. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5967-5976, July 2017. doi: 10.1109/CVPR.2017.632.

Kameoka, H., Kaneko, T., Tanaka, K., and Hojo, N. StarGAN-VC: Non-parallel many-to-many voice conversion using star generative adversarial networks. In 2018 IEEE Spoken Language Technology Workshop (SLT), pp. 266-273. IEEE, 2018.

Kaneko, T., Kameoka, H., Tanaka, K., and Hojo, N. StarGAN-VC2: Rethinking conditional methods for StarGAN-based voice conversion. arXiv preprint arXiv:1907.12279, 2019.

Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Pereira, F., Burges, C. J. C., Bottou, L., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 25, pp. 1097-1105. Curran Associates, Inc., 2012.

Kumar, K., Kumar, R., de Boissiere, T., Gestin, L., Teoh, W. Z., Sotelo, J., de Brébisson, A., Bengio, Y., and Courville, A. C. MelGAN: Generative adversarial networks for conditional waveform synthesis. In Advances in Neural Information Processing Systems, pp. 14881-14892, 2019.

Larsen, A. B. L., Sønderby, S. K., and Winther, O. Autoencoding beyond pixels using a learned similarity metric. CoRR, abs/1512.09300, 2015. URL http://arxiv.org/abs/1512.09300.

Lim, J. H. and Ye, J. C. Geometric GAN, 2017.

Martinsson, J., Listo Zec, E., Gillblad, D., and Mogren, O. Adversarial representation learning for synthetic replacement of sensitive data. CoRR, abs/2006.08039, 2020. URL https://arxiv.org/abs/2006.08039.

Nocedal, J. and Wright, S. J. Numerical Optimization. Springer, New York, NY, USA, second edition, 2006.

Oord, A. v. d., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., and Kavukcuoglu, K. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

Pascual, S., Bonafonte, A., and Serrà, J. SEGAN: Speech enhancement generative adversarial network. In Proc. Interspeech 2017, pp. 3642-3646, 2017. doi: 10.21437/Interspeech.2017-1428. URL http://dx.doi.org/10.21437/Interspeech.2017-1428.

Pasini, M. MelGAN-VC: Voice conversion and audio style transfer on arbitrarily long samples using spectrograms. arXiv preprint arXiv:1910.03713, 2019.

Qian, J., Du, H., Hou, J., Chen, L., Jung, T., and Li, X.-Y. Hidebehind: Enjoy voice input with voiceprint unclonability and anonymity. In Proceedings of the 16th ACM Conference on Embedded Networked Sensor Systems, pp. 82-94, 2018.

Qin, S. and Jiang, T. Improved Wasserstein conditional generative adversarial network speech enhancement. EURASIP Journal on Wireless Communications and Networking, 2018(1):181, 2018.

Raval, N., Machanavajjhala, A., and Cox, L. P. Protecting visual secrets using adversarial nets. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1329-1332. IEEE, 2017.

Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. Improved techniques for training GANs. In Lee, D. D., Sugiyama, M., Luxburg, U. V., Guyon, I., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 29, pp. 2234-2242. Curran Associates, Inc., 2016.

Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., Chen, Z., Zhang, Y., Wang, Y., Skerry-Ryan, R., et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4779-4783. IEEE, 2018.

Song, L., Lu, Z., He, R., Sun, Z., and Tan, T. Geometry guided adversarial facial expression synthesis. CoRR, abs/1712.03474, 2017. URL http://arxiv.org/abs/1712.03474.

Srivastava, B. M. L., Bellet, A., Tommasi, M., and Vincent, E. Privacy-preserving adversarial representation learning in ASR: Reality or illusion? Interspeech 2019, Sep 2019. doi: 10.21437/Interspeech.2019-2415. URL http://dx.doi.org/10.21437/Interspeech.2019-2415.

Tang, H., Xu, D., Liu, G., Wang, W., Sebe, N., and Yan, Y. Cycle in cycle generative adversarial networks for keypoint-guided image generation. In Proceedings of the 27th ACM International Conference on Multimedia, MM '19, pp. 2052-2060, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450368896. doi: 10.1145/3343031.3350980. URL https://doi.org/10.1145/3343031.3350980.

Xie, Q., Dai, Z., Du, Y., Hovy, E., and Neubig, G. Controllable invariance through adversarial feature learning. In Advances in Neural Information Processing Systems, pp. 585-596, 2017.

Zhang, B. H., Lemoine, B., and Mitchell, M. Mitigating unwanted biases with adversarial learning. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, pp. 335-340. ACM, 2018.
## Supplementary

Algorithm 1 PCMelGAN

Input: dataset ${\mathcal{X}}_{\text{train}}$, learning rate $\eta$, penalty $\lambda$, distortion constant $\varepsilon$

repeat

Draw $n$ samples uniformly at random from the dataset
$$
({\mathbf{x}}_{1}, {s}_{1}), \ldots, ({\mathbf{x}}_{n}, {s}_{n}) \sim {\mathcal{X}}_{\text{train}}
$$

Compute mel-spectrograms and normalize
$$
{\mathbf{m}}_{i} = \mathrm{STFT}({\mathbf{x}}_{i}) \quad \forall i = 1, \ldots, n
$$

Draw $n$ samples from the noise distribution
$$
{\mathbf{z}}_{1}^{(1)}, \ldots, {\mathbf{z}}_{n}^{(1)} \sim \mathcal{N}(0, 1), \qquad {\mathbf{z}}_{1}^{(2)}, \ldots, {\mathbf{z}}_{n}^{(2)} \sim \mathcal{N}(0, 1)
$$

Draw $n$ samples from the synthetic distribution
$$
{s}_{1}^{\prime}, \ldots, {s}_{n}^{\prime} \sim \mathcal{U}\{0, 1\}
$$

Compute the censored and synthetic data
$$
{\mathbf{m}}_{i}^{\prime} = \mathcal{F}({\mathbf{m}}_{i}, {\mathbf{z}}_{i}^{(1)}; {\boldsymbol{\theta}}_{\mathcal{F}}) \quad \forall i = 1, \ldots, n
$$
$$
{\mathbf{m}}_{i}^{\prime\prime} = \mathcal{G}({\mathbf{m}}_{i}^{\prime}, {s}_{i}^{\prime}, {\mathbf{z}}_{i}^{(2)}; {\boldsymbol{\theta}}_{\mathcal{G}}) \quad \forall i = 1, \ldots, n
$$

Compute the filter and generator losses
$$
{\mathcal{L}}_{\mathcal{F}}({\boldsymbol{\theta}}_{\mathcal{F}}) = -\frac{1}{n} \sum_{i=1}^{n} \ell({\mathcal{D}}_{\mathcal{F}}({\mathbf{m}}_{i}^{\prime}; {\boldsymbol{\theta}}_{{\mathcal{D}}_{\mathcal{F}}}), {s}_{i}) + \lambda \max\left( \frac{1}{n} \sum_{i=1}^{n} d({\mathbf{m}}_{i}^{\prime}, {\mathbf{m}}_{i}) - \varepsilon, 0 \right)^{2}
$$
$$
{\mathcal{L}}_{\mathcal{G}}({\boldsymbol{\theta}}_{\mathcal{G}}) = \frac{1}{n} \sum_{i=1}^{n} \ell({\mathcal{D}}_{\mathcal{G}}({\mathbf{m}}_{i}^{\prime\prime}; {\boldsymbol{\theta}}_{{\mathcal{D}}_{\mathcal{G}}}), {s}_{i}) + \lambda \max\left( \frac{1}{n} \sum_{i=1}^{n} d({\mathbf{m}}_{i}^{\prime\prime}, {\mathbf{m}}_{i}) - \varepsilon, 0 \right)^{2}
$$

Update the filter and generator parameters
$$
{\boldsymbol{\theta}}_{\mathcal{F}} \leftarrow \operatorname{Adam}({\boldsymbol{\theta}}_{\mathcal{F}}; {\eta}_{\mathcal{F}}, {\beta}_{1}, {\beta}_{2}), \qquad {\boldsymbol{\theta}}_{\mathcal{G}} \leftarrow \operatorname{Adam}({\boldsymbol{\theta}}_{\mathcal{G}}; {\eta}_{\mathcal{G}}, {\beta}_{1}, {\beta}_{2})
$$

Compute the discriminator losses
$$
{\mathcal{L}}_{{\mathcal{D}}_{\mathcal{F}}}({\boldsymbol{\theta}}_{{\mathcal{D}}_{\mathcal{F}}}) = \frac{1}{n} \sum_{i=1}^{n} \ell({\mathcal{D}}_{\mathcal{F}}({\mathbf{m}}_{i}^{\prime}; {\boldsymbol{\theta}}_{{\mathcal{D}}_{\mathcal{F}}}), {s}_{i})
$$
$$
{\mathcal{L}}_{{\mathcal{D}}_{\mathcal{G}}}({\boldsymbol{\theta}}_{{\mathcal{D}}_{\mathcal{G}}}) = \frac{1}{n} \sum_{i=1}^{n} \ell({\mathcal{D}}_{\mathcal{G}}({\mathbf{m}}_{i}^{\prime\prime}; {\boldsymbol{\theta}}_{{\mathcal{D}}_{\mathcal{G}}}), \text{fake}) + \frac{1}{n} \sum_{i=1}^{n} \ell({\mathcal{D}}_{\mathcal{G}}({\mathbf{m}}_{i}; {\boldsymbol{\theta}}_{{\mathcal{D}}_{\mathcal{G}}}), {s}_{i})
$$

Update the discriminator parameters
$$
{\boldsymbol{\theta}}_{\mathcal{D}} \leftarrow \operatorname{Adam}({\boldsymbol{\theta}}_{\mathcal{D}}; {\eta}_{\mathcal{D}}, {\beta}_{1}, {\beta}_{2})
$$

until termination criterion is met
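
A condensed PyTorch-style sketch of one Algorithm 1 step is given below. It is an illustration under assumed module interfaces (`filt`, `gen`, `d_filt`, `d_gen` standing for $\mathcal{F}$, $\mathcal{G}$, ${\mathcal{D}}_{\mathcal{F}}$, ${\mathcal{D}}_{\mathcal{G}}$), not the authors' released implementation (see the linked repository for that); the generator's target label is taken to be the sampled $s^{\prime}$, which we read as the algorithm's intent.

```python
# Sketch of one PCMelGAN training step; module and optimizer objects are
# assumed to be constructed elsewhere (U-Net filter/generator, discriminators).
import torch
import torch.nn.functional as TF

FAKE = 2  # index of the extra "fake" class of D_G (classes: 0, 1, fake)

def train_step(m, s, filt, gen, d_filt, d_gen, opt_fg, opt_d,
               lam=100.0, eps=0.01):
    n = m.size(0)
    z1, z2 = torch.randn_like(m), torch.randn_like(m)
    s_syn = torch.randint(0, 2, (n,), device=m.device)  # s' ~ U{0, 1}

    m_f = filt(m, z1)           # censored mel-spectrogram m'
    m_g = gen(m_f, s_syn, z2)   # synthetic mel-spectrogram m''

    # Quadratic penalty on the mean L1 distortion in excess of the budget eps.
    pen_f = lam * torch.relu((m_f - m).abs().mean() - eps) ** 2
    pen_g = lam * torch.relu((m_g - m).abs().mean() - eps) ** 2

    # Filter fools D_F; generator tries to have m'' classified as s'.
    loss_f = -TF.cross_entropy(d_filt(m_f), s) + pen_f
    loss_g = TF.cross_entropy(d_gen(m_g), s_syn) + pen_g
    opt_fg.zero_grad()
    (loss_f + loss_g).backward()
    opt_fg.step()

    # Discriminators: D_F recovers s from m'; D_G labels generated
    # spectrograms "fake" and real ones with their true attribute s.
    fake = torch.full((n,), FAKE, dtype=torch.long, device=m.device)
    loss_df = TF.cross_entropy(d_filt(m_f.detach()), s)
    loss_dg = (TF.cross_entropy(d_gen(m_g.detach()), fake)
               + TF.cross_entropy(d_gen(m), s))
    opt_d.zero_grad()
    (loss_df + loss_dg).backward()
    opt_d.step()
```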
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/3IKKBxByalk/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,223 @@
§ ADVERSARIAL REPRESENTATION LEARNING FOR PRIVATE SPEECH GENERATION

David Ericsson ${}^{*{12}}$ Adam Östberg ${}^{*{12}}$ Edvin Listo Zec ${}^{2}$ John Martinsson ${}^{2}$ Olof Mogren ${}^{2}$

§ ABSTRACT

As more data is collected in various settings across organizations, companies, and countries, there has been an increase in the demand for user privacy. Developing privacy-preserving methods for data analytics is thus an important area of research. In this work we present a model based on generative adversarial networks (GANs) that learns to obfuscate specific sensitive attributes in speech data. We train a model that learns to hide sensitive information in the data, while preserving the meaning in the utterance. The model is trained in two steps: first to filter sensitive information in the spectrogram domain, and then to generate new and private information independent of the filtered one. The model is based on a U-Net CNN that takes mel-spectrograms as input. A MelGAN is used to invert the spectrograms back to raw audio waveforms. We show that it is possible to hide sensitive information such as gender by generating new data, trained adversarially to maintain utility and realism.

§ 1. INTRODUCTION

With greater availability of computing power and large datasets, machine learning methods are increasingly being used to gain insights and make decisions based on data. While providing valuable insights, the methods may extract sensitive information which the provider of the data did not intend to disclose. An example of this is digital voice assistants. The user provides commands by speaking, and the speech is recorded through a microphone. A speech processing algorithm infers the spoken contents and executes the commands accordingly. However, it has been shown that such state-of-the-art methods may infer other sensitive attributes as well, such as intention, gender, emotional state, identity and many more (Srivastava et al., 2019). This raises the question of how to learn representations of data for such applications that are useful for the intended purpose while respecting people's privacy.

Speakers' identities can often be inferred based on features such as timbre, pitch, and speaker style. Voice morphing techniques focus on making it difficult to infer information from these attributes by altering properties such as pitch and intensity. However, this often limits the utility of the signal by altering intonation or variability. Voice conversion approaches instead aim to mimic a specific speaker. In contrast, this paper aims at modelling a distribution over plausible speakers given the current input signal, while hiding sensitive attributes.

In this paper, we approach the task of privacy-ensuring voice transformations using an adversarial learning set-up. Generative adversarial networks (GANs) were proposed as tractable generative models (Goodfellow et al., 2014), but have also been adapted to transform data and to provide privacy in the image domain (Huang et al., 2018). We build on these findings, and propose PCMelGAN, a two-step GAN set-up similar to that of (Martinsson et al., 2020), that works in the mel-spectrogram domain. The set-up consists of a filter module which removes sensitive information, and a generator module which adds synthetic information in its place. The proposed method can successfully obfuscate sensitive attributes in speech data and generates realistic speech independent of the sensitive input attribute. Our results for censoring the gender attribute on the AudioMNIST dataset demonstrate that the method can maintain a high level of utility, i.e. retain qualities such as intonation and content, while obtaining strong privacy.

In our experiments, the filter module makes it difficult for an adversary to infer the gender of the speaker, and the generator module randomly assigns a synthetic value for the gender attribute which is used when generating the output. However, the proposed method is designed to be able to censor any attribute of a categorical nature. The proposed solution is agnostic to the downstream task, with the objective to make the data as private as possible given a distortion constraint.

*Equal contribution ${}^{1}$ Chalmers University of Technology, Gothenburg, Sweden ${}^{2}$ RISE Research Institutes of Sweden. Correspondence to: David Ericsson <daverics@chalmers.se>, Adam Östberg <adamostberg@hotmail.com>, Edvin Listo Zec <edvin.listo.zec@ri.se>.

Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).

§ 2. RELATED WORK

Adversarial representation learning. Research within adversarial learning aims to train two or more models simultaneously with conflicting objective functions: one network is trained on the main task, while an adversary network is trained to identify the first network's output. Within the image domain, adversarial learning has had a large success in a wide variety of tasks since the introduction of generative adversarial networks (GANs) (Goodfellow et al., 2014). Examples of such tasks are image-to-image transformations (Isola et al., 2017), and synthesis of facial expressions and human pose (Song et al., 2017; Tang et al., 2019).

Much less work with GANs has been done related to speech and audio. (Pascual et al., 2017) introduce SEGAN (speech enhancement GAN) and thus seem to be the first to apply GANs to the task of speech generation and enhancement. The authors train a model end-to-end, working directly on the raw audio signal. (Higuchi et al., 2017; Qin & Jiang, 2018) use adversarial learning to perform speech enhancement for automatic speech recognition (ASR). (Donahue et al., 2018) study the benefit of GAN-based speech enhancement for ASR by extending SEGAN to operate on a time-frequency representation.

While these works apply GANs to tackle challenges within speech, they are limited to a supervised setting. The two most notable works in an unsupervised setting are (Donahue et al., 2019) and (Engel et al., 2019). (Donahue et al., 2019) focus on learning representations in an adversarial manner in order to synthesize audio data both on the waveform and spectrogram level, but still show that it is a challenging task, concluding that most perceptually-informed spectrograms are non-invertible.

Intermediate speech representations. It is challenging to work on raw waveforms when modeling audio data, due to their high temporal resolution and the complex relationship between short-term and long-term dependencies. This leads to most work being done on a lower-dimensional representation, usually a spectrogram. Two common intermediate speech representations are aligned linguistic features (Oord et al., 2016) and mel-spectrograms (Shen et al., 2018; Gibiansky et al., 2017). The mel scale is a nonlinear frequency scale that is linear in terms of human perception. It has the benefit of emphasizing differences in lower frequencies, which are important to humans. At the same time, it puts less weight on high frequency details, which typically consist of bursts of noise that need not be as distinguishable. (Engel et al., 2019) train a GAN to synthesize magnitude-phase spectrograms of note recordings for different musical instruments. (Kumar et al., 2019) tackle the problem of non-invertible spectrograms by introducing MelGAN: a fully convolutional model designed to invert mel-spectrograms to raw waveforms.

Adversarial representation learning for privacy. Adversarial representation learning has also been studied as a method of preserving privacy. More specifically, it has been used with the goal of hiding sensitive attributes under some utility constraint. This work has mainly focused on images and/or videos, and some tasks related to text data (Zhang et al., 2018; Xie et al., 2017; Beutel et al., 2017; Raval et al., 2017).

To our knowledge, (Srivastava et al., 2019) are the first to apply privacy-related adversarial representation learning to audio data. The authors study the problem of protecting the speaker identity of a person based on an encoded representation of their speech. The encoder is trained for an automatic speech recognition (ASR) task. While the authors manage to hide the speaker identity to some extent, their method also relies on knowing labels for the downstream task.

In the works of (Edwards & Storkey, 2016; Huang et al., 2018) and (Martinsson et al., 2020), the authors apply adversarial representation learning to censor images, without using any downstream task labels.

Voice conversion. Voice conversion algorithms aim to learn a function that maps acoustic features from a source speaker $X$ to a target speaker $Y$. Some notable works on this involving GANs are (Hsu et al., 2017; Pasini, 2019; Kameoka et al., 2018; Kaneko et al., 2019). Similar to (Kameoka et al., 2018), we do not require any parallel utterances, transcriptions, or time alignment for the speech generation part. (Qian et al., 2018; Aloufi et al., 2019) use voice conversion to study privacy in speech. However, these works differ from ours by having a target speaker to which they convert the voice of the input speakers.

§ 3. PROBLEM SETTING

§ 3.1. PRIVATE CONDITIONAL GAN

Private conditional GAN (PCGAN) (Martinsson et al., 2020) is a model that builds upon the generative adversarial privacy (GAP) framework described by (Huang et al., 2017; Huang et al., 2018). Both works study adversarial representation learning for obfuscating sensitive attributes in images. The authors of PCGAN show that adding a generator to the filter model in the GAP framework strengthens privacy while maintaining utility. The filter network obfuscates the sensitive attribute $s$ in the image, and the objective of the generator is to take the filtered image ${\mathbf{x}}^{\prime }$ as input and generate a new synthetic instance of the sensitive attribute ${s}^{\prime }$ in it, independent of the original $s$.

The filter and the generator networks are trained against their respective discriminators ${\mathcal{D}}_{\mathcal{F}}$ and ${\mathcal{D}}_{\mathcal{G}}$ in an adversarial set-up. The discriminator ${\mathcal{D}}_{\mathcal{F}}$ is trained to predict $s$ in the transformed image ${\mathbf{x}}^{\prime }$, while the filter $\mathcal{F}$ is trained to transform images that fool the discriminator. The training objective of the filter can be described with the following minimax setup:

$$
\min_{\mathcal{F}} \max_{\mathcal{D}_{\mathcal{F}}} \mathbb{E}_{\mathbf{x},\mathbf{z}_{1}}\left\lbrack \ell_{\mathcal{F}}\left( \mathcal{D}_{\mathcal{F}}\left( \mathcal{F}(\mathbf{x},\mathbf{z}_{1})\right), s\right) \right\rbrack \tag{1}
$$

$$
\text{s.t. } \mathbb{E}_{\mathbf{x},\mathbf{z}_{1}}\left\lbrack d\left( \mathcal{F}(\mathbf{x},\mathbf{z}_{1}), \mathbf{x}\right) \right\rbrack \leq \varepsilon_{1}
$$

where ${\varepsilon }_{1} \geq 0$ denotes the allowed distortion in the transformation performed by the filter.

The purpose of the generator $\mathcal{G}$ is to generate a synthetic ${s}^{\prime }$, independent of the original $s$. Its discriminator, ${\mathcal{D}}_{\mathcal{G}}$, takes as input a real image or an image generated by $\mathcal{G}$, and is trained to predict $s$ in the first case and to predict "fake" in the second, as in the semi-supervised learning setup of (Salimans et al., 2016).

This setup is defined with the following minimax game:

$$
\min_{\mathcal{G}} \max_{\mathcal{D}_{\mathcal{G}}} \mathbb{E}_{\mathbf{x},s^{\prime},\mathbf{z}_{1},\mathbf{z}_{2}}\left\lbrack \ell_{\mathcal{G}}\left( \mathcal{D}_{\mathcal{G}}\left( \mathcal{G}\left( \mathcal{F}(\mathbf{x},\mathbf{z}_{1}), s^{\prime},\mathbf{z}_{2}\right)\right), \text{fake}\right) \right\rbrack + \mathbb{E}_{\mathbf{x}}\left\lbrack \ell_{\mathcal{G}}\left( \mathcal{D}_{\mathcal{G}}(\mathbf{x}), s\right) \right\rbrack \tag{2}
$$

$$
\text{s.t. } \mathbb{E}_{\mathbf{x},s^{\prime},\mathbf{z}_{1},\mathbf{z}_{2}}\left\lbrack d\left( \mathcal{G}\left( \mathcal{F}(\mathbf{x},\mathbf{z}_{1}), s^{\prime},\mathbf{z}_{2}\right), \mathbf{x}\right) \right\rbrack \leq \varepsilon_{2}
$$

where ${\varepsilon }_{2} \geq 0$ is the allowed distortion in the transformation performed by the generator.
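
To make the role of the "fake" class concrete, the discriminator's side of this objective can be sketched as follows, assuming ${\mathcal{D}}_{\mathcal{G}}$ outputs logits over three classes ($s = 0$, $s = 1$, fake); the function and names are ours, not the paper's code.

```python
# Sketch of the D_G targets in Eq. (2): real inputs keep their sensitive
# attribute as the label, generated inputs are labeled "fake".
import torch
import torch.nn.functional as TF

FAKE = 2  # extra class index appended to the binary attribute

def d_gen_loss(d_gen, x_real, s_real, x_synth):
    fake = torch.full((x_synth.size(0),), FAKE, dtype=torch.long,
                      device=x_synth.device)
    return (TF.cross_entropy(d_gen(x_real), s_real)
            + TF.cross_entropy(d_gen(x_synth), fake))
```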

§ 3.2. MELGAN

MelGAN is a non-autoregressive, feed-forward convolutional model which is trained to invert mel-spectrograms to raw waveforms (Kumar et al., 2019). The MelGAN generator consists of a stack of transposed convolutional layers, and the model uses three different discriminators which each operate at a different resolution of the raw audio. The discriminators are trained using a hinge loss version (Lim & Ye, 2017) of the original GAN objective. The generator is trained using the original GAN objective, combined with a feature matching loss (Larsen et al., 2015), which minimizes the L1 distance between the discriminator feature maps of real and synthetic audio.

For each discriminator ${\mathcal{D}}_{k}$, let ${\mathcal{D}}_{k}^{\left( i\right) }\left( \cdot \right)$ denote the output of its $i$th layer. The feature matching loss is computed as ${\mathcal{L}}_{\mathrm{{FM}}}\left( {\mathcal{G},{\mathcal{D}}_{k}}\right) = {\mathbb{E}}_{\mathbf{x},\mathbf{m}}\left\lbrack \sum_{i}\frac{1}{{N}_{i}}{\left\| {\mathcal{D}}_{k}^{\left( i\right) }\left( \mathbf{x}\right) - {\mathcal{D}}_{k}^{\left( i\right) }\left( \mathcal{G}\left( \mathbf{m}\right) \right) \right\| }_{1}\right\rbrack$, where ${N}_{i}$ is the number of output units in layer $i$, $\mathbf{x}$ is the raw audio signal and $\mathbf{m}$ is its corresponding mel-spectrogram. The training objectives for the discriminators are then formulated as:
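
A sketch of this feature matching loss is given below, assuming each discriminator exposes its per-layer feature maps as a list (a common MelGAN implementation pattern); the interface is an assumption.

```python
# Feature-matching loss: mean L1 distance between the discriminator's
# intermediate activations on real and generated audio, layer by layer.
import torch

def feature_matching_loss(feats_real, feats_fake):
    # feats_real[i] ~ D_k^{(i)}(x), feats_fake[i] ~ D_k^{(i)}(G(m));
    # real activations are treated as constants for the generator update.
    loss = 0.0
    for fr, ff in zip(feats_real, feats_fake):
        loss = loss + (fr.detach() - ff).abs().mean()
    return loss
```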

$$
\min_{\mathcal{D}_{k}}\left( \mathbb{E}_{\mathbf{x}}\left\lbrack \min\left( 0, 1 - \mathcal{D}_{k}(\mathbf{x})\right) \right\rbrack + \mathbb{E}_{\mathbf{m},\mathbf{z}}\left\lbrack \min\left( 0, 1 + \mathcal{D}_{k}\left( \mathcal{G}(\mathbf{m},\mathbf{z})\right)\right) \right\rbrack \right). \tag{3}
$$

The generator objective is:

$$
\min_{\mathcal{G}} \mathbb{E}_{\mathbf{m},\mathbf{z}}\left\lbrack \sum_{k=1}^{3} -\mathcal{D}_{k}\left( \mathcal{G}(\mathbf{m},\mathbf{z})\right) \right\rbrack + \gamma \sum_{k=1}^{3} \mathcal{L}_{\mathrm{FM}}\left( \mathcal{G}, \mathcal{D}_{k}\right), \tag{4}
$$

where $\gamma$ is a hyperparameter controlling the balance between the feature matching and fooling the discriminators.
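
The two objectives can be sketched as follows. Note this is written in the standard hinge form with an assumed feature-matching weight, so it is an illustration rather than the exact MelGAN code.

```python
# Hinge-style discriminator loss and the matching generator loss of
# Eqs. (3)-(4) for a single discriminator D_k; `scores_*` are raw outputs.
import torch

def d_hinge_loss(scores_real, scores_fake):
    # Push real scores above +1 and generated scores below -1.
    return (torch.relu(1.0 - scores_real).mean()
            + torch.relu(1.0 + scores_fake).mean())

def g_loss(scores_fake, fm_loss, gamma=10.0):
    # Fool the discriminator, plus the feature-matching term weighted by gamma.
    return -scores_fake.mean() + gamma * fm_loss
```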

§ 3.3. OUR CONTRIBUTION

Notation. Let $s \in \{ 0,1\}$ be a binary sensitive attribute, and ${s}^{\prime } \sim \mathcal{U}\{ 0,1\}$. Let $\mathbf{z} \in \mathcal{Z}$ be a noise vector, $\mathbf{x} \in \mathcal{X}$ a raw waveform and $\mathbf{m} \in \mathcal{M}$ a mel-spectrogram representation of $\mathbf{x}$. Let $\mathcal{D}$ be a discriminator, $\mathcal{F} : \mathcal{M} \times {\mathcal{Z}}_{1} \rightarrow {\mathcal{M}}^{\prime }$ a filter network and $\mathcal{G} : {\mathcal{M}}^{\prime } \times {\mathcal{Z}}_{2} \rightarrow {\mathcal{M}}^{\prime \prime }$ a generator. Let ${\mathcal{X}}^{\prime }$ and ${\mathcal{X}}^{\prime \prime }$ denote the MelGAN-inverted sets of ${\mathcal{M}}^{\prime }$ and ${\mathcal{M}}^{\prime \prime }$. Each $\mathbf{x}$ is paired with a sensitive attribute: $\left( {{\mathbf{x}}_{i},{s}_{i}}\right)$. Each sample $\left( {{\mathbf{x}}_{i},{s}_{i}}\right)$ has a corresponding utility attribute ${u}_{i}$, only used for evaluation. In our case this is the spoken digit in the recording, i.e. ${u}_{i} \in \{ 0,\ldots ,9\}$.

In this work we combine PCGAN and MelGAN to adversarially learn private representations of speech data, and name our model PCMelGAN. The whole pipeline is shown in Figure 1. The speech recording $\mathbf{x}$ is mapped to a mel-spectrogram $\mathbf{m}$. PCGAN, with its filter and generator modules $\mathcal{F}$ and $\mathcal{G}$, is trained to ensure privacy in the mel-spectrogram. We use a pre-trained MelGAN to invert the mel-spectrogram output of our model ${\mathbf{m}}^{\prime \prime } \in {\mathcal{M}}^{\prime \prime }$ to a raw waveform ${\mathbf{x}}^{\prime \prime } \in {\mathcal{X}}^{\prime \prime }$.

We implement $\mathcal{F}$ and $\mathcal{G}$ using a U-Net architecture similar to (Martinsson et al., 2020). For ${\mathcal{D}}_{\mathcal{F}}$ and ${\mathcal{D}}_{\mathcal{G}}$ we use the AlexNet architecture (Krizhevsky et al., 2012), as used in (Becker et al., 2018) for gender classification in the spectrogram domain. We use categorical cross-entropy as the loss functions, denoted by ${\ell }_{\mathcal{F}}$ and ${\ell }_{\mathcal{G}}$. The L1 norm is used as the distortion measure $d$. The constrained optimization problem is reformulated as an unconstrained one by relaxing it using the quadratic penalty method (Nocedal & Wright, 2006). The distortion constraint is denoted by $\varepsilon$ and the penalty parameter by $\lambda$. The parameters are updated using Adam (Kingma & Ba, 2014).
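
A minimal sketch of this quadratic-penalty relaxation with the L1 norm as the distortion measure $d$; the helper name and default penalty weight are ours.

```python
# Quadratic penalty: only distortion in excess of the budget eps is penalized.
import torch

def distortion_penalty(m_out, m_in, eps, lam=100.0):
    excess = torch.relu((m_out - m_in).abs().mean() - eps)
    return lam * excess ** 2
```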

As a baseline comparison, we use PCMelGAN where the generator module is excluded. Thus we can directly see how much the generator module adds to the privacy task.

Figure 1. Schematic diagram of our model: PCMelGAN.

§ 4. EXPERIMENTS

§ 4.1. DATA

We use the AudioMNIST dataset to conduct our experiments (Becker et al., 2018). AudioMNIST consists of 30,000 audio recordings, approximately 9.5 hours, of spoken digits (0-9) in English. Each digit is repeated 50 times by each of the 60 different speakers. The audio files have a sampling frequency of ${48}\mathrm{{kHz}}$ and are saved in a 16 bit integer format. The recordings are also labeled with metadata such as the age, gender, origin and accent of each speaker.

In this paper, we use 10,000 samples as a training set and 2,000 samples as a test set. For the training set, we randomly sample speakers such that it consists of 10 female and 10 male speakers. Similarly, the test set consists of 2 female and 2 male speakers. We downsample the recordings to 8 $\mathrm{{kHz}}$ and use zero padding to give every recording an equal length of 8192 samples.

§ 4.2. DATA-DRIVEN IMPLEMENTATION

To encourage reproducibility, we make our code publicly available${}^{1}$. The model is trained end-to-end, with the hyperparameters ${\eta}_{{\mathcal{D}}_{\mathcal{F}}}, {\eta}_{{\mathcal{D}}_{\mathcal{G}}} = 0.0004$, ${\eta}_{\mathcal{F}}, {\eta}_{\mathcal{G}} = 0.0004$, $\lambda = 10^{2}$, $\varepsilon \in \{0.005, 0.01, 0.05, 0.1\}$ and $(\beta_{1}, \beta_{2}) = (0.5, 0.9)$. During training, $\mathbf{m}$ is computed using the short-time Fourier transform with a window size of 1024, a hop length of 256 and 80 mel bins. We normalize and clip the spectrograms to $[-1, 1]$ as in (Donahue et al., 2019), with the exception that the normalization is performed on the whole spectrogram as opposed to each frequency bin.

§ 4.3. EVALUATION

For each configuration of hyperparameters, we train the model using five different random seeds for 1000 epochs on an NVIDIA V100 GPU. We evaluate the experiments both in the spectrogram and in the raw waveform domain. In each domain, we train digit and gender classifiers on the corresponding training sets, ${\mathcal{X}}_{\text{train}}$ and ${\mathcal{M}}_{\text{train}}$. The classifiers that predict gender are used as a privacy measure, and the classifiers that predict spoken digits are used as a utility measure. We evaluate the fixed classifiers on ${\mathcal{M}}_{\text{test}}^{\prime}$ and ${\mathcal{M}}_{\text{test}}^{\prime\prime}$, to directly compare the added benefit of a generator module on top of the filter.

We also measure the quality of the generated audio using Fréchet Inception Distance (FID) (Heusel et al., 2017). FID is frequently used to measure the quality of GAN-generated images. Since we are interested in measuring generated audio quality, we replace the commonly used Inception v3 network with an AudioNet (Becker et al., 2018) digit classifier using the features from the last convolutional layer.

§ 5. RESULTS

Quantitative results. In Table 1, the mean accuracy and standard deviation of the fixed classifiers on the test set are shown over five runs, in the spectrogram and audio domain, respectively. Privacy is measured by the accuracy of the fixed classifier predicting the original gender ${s}_{i}$, where an accuracy close to ${50}\%$ corresponds to more privacy. Utility is measured by the accuracy of the fixed classifier predicting the digit ${u}_{i}$, where a higher accuracy corresponds to greater utility.

Table 1. The fixed classifiers' mean accuracy and standard deviation on the test sets ${\mathcal{M}}_{\text{test}}^{\prime}$ and ${\mathcal{M}}_{\text{test}}^{\prime\prime}$ (top, spectrogram domain) and on ${\mathcal{X}}_{\text{test}}^{\prime}$ and ${\mathcal{X}}_{\text{test}}^{\prime\prime}$ (bottom, audio domain) for varying values of $\varepsilon$. For privacy (gender) an accuracy close to ${50}\%$ is better. For utility (digit), a higher accuracy is better.

Dist. ε | Privacy: Baseline | Privacy: PCMelGAN | Utility: Baseline | Utility: PCMelGAN

Spectrogram domain:
0.005 | 49.9 ± 2.2 | 48.7 ± 2.4 | 84.1 ± 2.8 | 81.1 ± 3.7
0.01 | 55.0 ± 4.7 | 50.9 ± 1.4 | 79.9 ± 4.3 | 78.8 ± 7.8
0.05 | 61.3 ± 10.2 | 51.0 ± 0.7 | 80.9 ± 8.2 | 54.7 ± 23.8
0.1 | 48.9 ± 1.0 | 49.8 ± 0.5 | 29.1 ± 7.5 | 15.1 ± 5.4

Audio domain:
0.005 | 52.2 ± 3.6 | 49.1 ± 1.6 | 36.8 ± 4.0 | 49.4 ± 9.8
0.01 | 53.2 ± 3.2 | 51.3 ± 1.6 | 34.3 ± 8.5 | 49.2 ± 8.6
0.05 | 61.5 ± 8.1 | 51.2 ± 0.7 | 28.0 ± 15.8 | 31.3 ± 10.3
0.1 | 51.0 ± 1.3 | 49.6 ± 0.4 | 11.4 ± 1.7 | 15.8 ± 2.3

In Table 2, FID scores are shown for our model working in the audio domain. In Figure 3, a recording of a woman saying "zero" is shown, together with the baseline (filter) and PCMelGAN generating a male and a female spectrogram.

Table 2. The mean FID score and standard deviation on the test sets ${\mathcal{X}}_{\text{test}}^{\prime}$ and ${\mathcal{X}}_{\text{test}}^{\prime\prime}$ for different $\varepsilon$. A lower value corresponds to more realistic audio.

Dist. ε | FID Audio: Baseline | FID Audio: PCMelGAN
0.005 | 20.17 ± 4.04 | 10.12 ± 3.15
0.01 | 27.27 ± 4.50 | 10.02 ± 2.27
0.05 | 29.59 ± 5.77 | 20.22 ± 4.87
0.1 | 41.50 ± 3.49 | 22.32 ± 5.20

Qualitative results. We provide samples from the AudioMNIST test set that were transformed by our model${}^{2}$. The shared folder contains the original sound clips and their corresponding transformed versions.

${}^{1}$ https://github.com/daverics/pcmelgan

${}^{2}$ https://www.dropbox.com/sh/oangx84ibhzodhs/AAAfG-PBW4Ne8KwdipAmKFy1a?dl=0

Figure 2. Privacy vs utility trade-off for the baseline and PCMelGAN for varying $\varepsilon$. Orange and blue points correspond to evaluating the fixed classifiers for digits and gender on the spectrogram datasets ${\mathcal{M}}_{\text{test}}^{\prime}$ and ${\mathcal{M}}_{\text{test}}^{\prime\prime}$ (left), and raw waveform datasets ${\mathcal{X}}_{\text{test}}^{\prime}$ and ${\mathcal{X}}_{\text{test}}^{\prime\prime}$ (right). Lower right corner is better.

Figure 3. Spectrograms of the utterance "zero": the original recording of a female speaker (top left), the transformed version from the baseline (top right), and our model's output with a sampled male (bottom left) and a sampled female (bottom right).

§ 6. DISCUSSION

Table 1 (top) and Figure 2 (left) demonstrate that the proposed method achieves strong privacy when working in the mel-spectrogram domain, while retaining high utility. We notice in Table 1 (bottom) and in Figure 2 (right) that the proposed method is able to provide privacy in the audio domain, but at a loss of utility. However, when comparing to the baseline, we see that generating a synthetic $s$ both increases utility and ensures privacy. In the spectrogram domain, the filter model seems to be enough to obtain both privacy and utility. In both the spectrogram domain and the audio domain, the proposed approach achieves high privacy. We expected privacy to suffer under a stricter distortion budget $\varepsilon$, but this was not observed in the experiments. While a quick sanity check with $\varepsilon = {10}^{-5}$ resulted in the model learning the identity map (with no additional privacy), more experiments need to be carried out to detect when privacy starts to deteriorate with lower $\varepsilon$. It is worth noting that for some $\varepsilon$ we have a large standard deviation. We hypothesize that this could be improved by using more diverse data, and future work should include evaluating the proposed method on longer sentences.

In Table 2 we note that our model obtains substantially better FID scores than the baseline in the audio domain. We conclude that adding the synthetic sample of the sensitive attribute improves the realism and fidelity of the speech signal. We also observe this when listening to the generated sounds (see the qualitative results above).

§ 7. CONCLUSIONS

In this work we have proposed an adversarially trained model that learns to make speech data private. We do this by first filtering a sensitive attribute, and then generating a new, independent sensitive attribute. We formulate this as an unconstrained optimization problem with a distortion budget. This is done in the spectrogram domain, and we use a pre-trained MelGAN to invert the generated mel-spectrogram back to a raw waveform. We compare our model with the baseline of just censoring the attribute, and show that we gain both privacy and utility by generating a new sensitive attribute in the audio domain.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/7jxwhNDM0Uv/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,301 @@
# COALA: Co-Aligned Autoencoders for Learning Semantically Enriched Audio Representations

Xavier Favory ${}^{ * }{}^{1}$ Konstantinos Drossos ${}^{ * }{}^{2}$ Tuomas Virtanen ${}^{2}$ Xavier Serra ${}^{1}$

## Abstract

Audio representation learning based on deep neural networks (DNNs) emerged as an alternative approach to hand-crafted features. For achieving high performance, DNNs often need a large amount of annotated data which can be difficult and costly to obtain. In this paper, we propose a method for learning audio representations, aligning the learned latent representations of audio and associated tags. Aligning is done by maximizing the agreement of the latent representations of audio and tags, using a contrastive loss. The result is an audio embedding model which reflects acoustic and semantic characteristics of sounds. We evaluate the quality of our embedding model, measuring its performance as a feature extractor on three different tasks (namely, sound event recognition, and music genre and musical instrument classification), and investigate what type of characteristics the model captures. Our results are promising, sometimes on par with the state-of-the-art in the considered tasks, and the embeddings produced with our method are well correlated with some acoustic descriptors.

## 1. Introduction

Legacy audio-based machine learning models were trained using sets of handcrafted features, carefully designed by relying on psychoacoustics and signal processing expert knowledge. Recent approaches are based on learning such features directly from the data, usually by employing deep learning (DL) models (Bengio et al., 2013; Hershey et al., 2017; Pons et al., 2017a), often making use of manually annotated datasets that are tied to specific applications (Tzanetakis & Cook, 2002; Marchand & Peeters, 2016; Salamon et al., 2014). Achieving high performance with DL-based methods and models often requires sufficient labeled data, which can be difficult and costly to obtain, especially for audio signals (Favory et al., 2018). As a way to lift the restrictions imposed by the limited amount of audio data, different published works employ transfer learning on tasks where only small datasets are available (Yosinski et al., 2014; Choi et al., 2017). Usually in such a scenario, an embedding model is first optimized on a supervised task for which a large amount of data is available. Then, this embedding model is used as a pre-trained feature extractor, to extract input features that are used to optimize another model on a different task, where a limited amount of data is available (Van Den Oord et al., 2014; Choi et al., 2017; Pons & Serra, 2019a; Alonso-Jiménez et al., 2020).

Recent approaches adopt self-supervised learning, aiming to learn audio representations on a large set of unlabeled multimedia data, e.g. by exploiting audio and visual correspondences (Aytar et al., 2016; Arandjelovic & Zisserman, 2017). Such approaches have the advantage of not requiring manual labelling of large amounts of data, and have been successful for learning audio features that can be used in training simple, but competitive classifiers (Cramer et al., 2019). Different approaches focus on learning audio representations by employing a task-specific distance metric and weakly annotated data. For example, the triplet loss can be used to maximize the agreement between different songs of the same artist (Park et al., 2017), or a contrastive loss can enable maximizing the similarity of different transformations of the same example (Chen et al., 2020). Other approaches leverage images and their associated tags to learn content-based representations by aligning autoencoders (Schonfeld et al., 2019). However, the alignment is done by optimizing cross-reconstruction objectives, which can be overly complex for learning data representations.

In our work we are interested in learning audio representations that can be used for developing general machine listening systems, rather than being tied to a specific audio domain. We take advantage of the massive amount of online audio recordings and their accompanying tag metadata, and learn acoustically and semantically meaningful features. To do so, we propose a new approach inspired from image and the natural language processing fields (Schonfeld et al., 2019; Silberer & Lapata, 2014), but we relax the alignment objective by employing a contrastive loss (Chen et al., 2020), in order to co-regularize the latent representations of two autoencoders, each one learned on a different modality.
|
| 16 |
+
|
| 17 |
+
---
|
| 18 |
+
|
| 19 |
+
*Equal contribution ${}^{1}$ Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain ${}^{2}$ Audio Research Group, Tampere University, Tampere, Finland. Correspondence to: Xavier Favory <xavier.favory@upf.edu>.
|
| 20 |
+
|
| 21 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 22 |
+
|
| 23 |
+
---
|
| 24 |
+
|
| 25 |
+
The contributions of our work are:
|
| 26 |
+
|
| 27 |
+
- We adapt a recently introduced contrastive loss framework (Chen et al., 2020), and we apply it to audio representation learning in a heterogeneous setting (the embedding models process different modalities).
|
| 28 |
+
|
| 29 |
+
- We propose a learning algorithm, combining a contrastive loss and an autoencoder architecture, for obtaining aligned audio and tag latent representations, in order to learn audio features that reflect both semantic and acoustic characteristics.
|
| 30 |
+
|
| 31 |
+
- We provide a thorough investigation of the performance of the approach, by employing three different classification tasks.
|
| 32 |
+
|
| 33 |
+
- Finally, we conduct a correlation analysis of our embeddings with acoustic features in order to better understand what characteristics they capture.
|
| 34 |
+
|
| 35 |
+
The rest of the paper is organized as follows. In Section 2 we present our proposed method in detail. Section 3 describes the utilized dataset, the tasks and metrics that we employed for the assessment of the performance, the baselines that we compare our method with, and the correlation analysis with acoustic features that we conducted. The results of these evaluation processes are presented and discussed in Section 4. Finally, Section 5 concludes the paper and proposes future research directions.
|
| 36 |
+
|
| 37 |
+
## 2. Proposed method
|
| 38 |
+
|
| 39 |
+
Our method employs two different autoencoders (AEs) and a dataset of multi-labeled (i.e. multiple labels/tags per example) time-frequency (TF) representations of audio signals, $\mathbb{G} = {\left\{ \left( {\mathbf{X}}_{\mathrm{a}}^{q},{\mathbf{y}}_{\mathrm{t}}^{q}\right) \right\} }_{q = 1}^{Q}$ , where ${\mathbf{X}}_{\mathrm{a}}^{q} \in {\mathbb{R}}^{N \times F}$ is the TF representation of audio, consisting of $N$ feature vectors with $F$ log mel-band energies, ${\mathbf{y}}_{\mathrm{t}}^{q} \in \{ 0,1{\} }^{C}$ is the multi-hot encoding of tags for ${\mathbf{X}}_{\mathrm{a}}^{q}$ , out of a total of $C$ different tags, and $Q$ is the number of paired examples in our dataset. These tags characterize the content of each corresponding audio signal (e.g. "kick", "techno", "hard").
|
| 40 |
+
|
| 41 |
+
The audio TF representation and the associated multi-hot encoded tags of the audio signal, are used as inputs to two different AEs, one targeting to learn low-level acoustic features for audio and the other learning semantic features (for the tags), by employing a bottleneck layer and a reconstruction objective. At the same time, the learned low-level features of the audio signal are aligned with the learned semantic features of the tags, using a contrastive loss. All employed modules are jointly optimized, yielding an audio encoder that provides audio embeddings capturing both low-level acoustic characteristics and semantic information regarding the contents of the audio. An illustration of our method is in Figure 1.
|
| 42 |
+
|
| 43 |
+

|
| 44 |
+
|
| 45 |
+
Figure 1. Illustration of our proposed method. ${\mathbf{Z}}_{\mathrm{a}}$ and ${\mathbf{z}}_{\mathrm{t}}$ are aligned through maximizing their agreement and, at the same time, are used for reconstructing back the original inputs.
|
| 46 |
+
|
| 47 |
+
### 2.1. Learning low-level audio and semantic features
|
| 48 |
+
|
| 49 |
+
For learning low-level acoustic features from the input audio TF representation, ${\mathbf{X}}_{\mathrm{a}}{}^{1}$ , we employ a typical AE structure based on convolutional neural networks (CNNs) and on having a reconstruction objective. Since AEs have proven to be effective in unsupervised learning of low-level features in different tasks and especially in audio (Van Den Oord et al., 2017; Amiriparian et al., 2017; Mimilakis et al., 2018; Drossos et al., 2018), our choice of the AE structure followed naturally.
|
| 50 |
+
|
| 51 |
+
The AE that processes ${\mathbf{X}}_{\mathrm{a}}$ is composed of an encoder ${e}_{\mathrm{a}}\left( \cdot \right)$ and a decoder ${d}_{\mathrm{a}}\left( \cdot \right)$ , parameterized by ${\theta }_{e\mathrm{a}}$ and ${\theta }_{d\mathrm{a}}$ respectively. ${e}_{\mathrm{a}}$ accepts ${\mathbf{X}}_{\mathrm{a}}$ as an input and yields the learned latent audio representation, ${\mathbf{Z}}_{\mathrm{a}} \in {\mathbb{R}}_{ \geq 0}^{K \times {T}^{\prime } \times {F}^{\prime }}$ . Then, ${d}_{\mathrm{a}}$ gets ${\mathbf{Z}}_{\mathrm{a}}$ as input and outputs a reconstructed version of ${\mathbf{X}}_{\mathrm{a}},{\widehat{\mathbf{X}}}_{\mathrm{a}}$ , as
|
| 52 |
+
|
| 53 |
+
$$
|
| 54 |
+
{\mathbf{Z}}_{\mathrm{a}} = {e}_{\mathrm{a}}\left( {{\mathbf{X}}_{a};{\theta }_{e\mathrm{a}}}\right) \text{, and} \tag{1}
|
| 55 |
+
$$
|
| 56 |
+
|
| 57 |
+
$$
|
| 58 |
+
{\widehat{\mathbf{X}}}_{\mathrm{a}} = {d}_{\mathrm{a}}\left( {{\mathbf{Z}}_{\mathrm{a}};{\theta }_{d\mathrm{a}}}\right) . \tag{2}
|
| 59 |
+
$$
|
| 60 |
+
|
| 61 |
+
We model ${e}_{\mathrm{a}}$ using a series of convolutional blocks, where each convolutional block consists of a CNN, a normalization process, and a non-linearity. As a normalization process we employ the batch normalization (BN), and as a non-linearity we employ the rectified linear unit (ReLU). The process for each convolutional block is
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
{\mathbf{H}}^{{l}_{ea}} = \operatorname{ReLU}\left( {{\mathrm{{BN}}}^{{l}_{ea}}\left( {{\mathrm{{CNN}}}^{{l}_{ea}}\left( {\mathbf{H}}^{{l}_{ea} - 1}\right) }\right) }\right) , \tag{3}
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
where ${l}_{ea} = 1,\ldots ,{N}_{\mathrm{{CNN}}}$ is the index of the convolutional block, ${\mathbf{H}}^{{l}_{ea}} \in {\mathbb{R}}_{ \geq 0}^{{K}_{{l}_{ea}} \times {T}_{{l}_{ea}}^{\prime } \times {F}_{{l}_{ea}}^{\prime }}$ is the ${K}_{{l}_{ea}}$ learned feature maps of the ${l}_{ea}$ -th CNN, ${\mathbf{H}}^{{N}_{\mathrm{{CNN}}}} = {\mathbf{Z}}_{\mathrm{a}}$ , and ${\mathbf{H}}^{0} = {\mathbf{X}}_{\mathrm{a}}$ . The audio decoder, ${d}_{\mathrm{a}}$ , is also based on CNNs, but it employs transposed convolutions (Radford et al., 2016; Dumoulin & Visin, 2016) in order to expand ${\mathbf{Z}}_{\mathrm{a}}$ back to the dimensions of ${\mathbf{X}}_{\mathrm{a}}$ . For having a decoding scheme analogous to the encoding one, we employ another set of ${N}_{\mathrm{{CNN}}}$ convolutional blocks for ${d}_{\mathrm{a}}$ , again with BN and ReLU, and using the same serial processing described by Eq. (3). This processing yields the learned feature maps of the decoder, ${\mathbf{H}}^{{l}_{da}} \in {\mathbb{R}}_{ \geq 0}^{{K}_{{l}_{da}} \times {T}_{{l}_{da}}^{\prime } \times {F}_{{l}_{da}}^{\prime }}$ , with ${l}_{da} = 1 + {N}_{\mathrm{{CNN}}},\ldots ,2{N}_{\mathrm{{CNN}}}$ and ${\mathbf{H}}^{2{N}_{\mathrm{{CNN}}}} = {\widehat{\mathbf{X}}}_{\mathrm{a}}$ . To optimize ${e}_{\mathrm{a}}$ and ${d}_{\mathrm{a}}$ , we employ the generalized KL divergence, ${D}_{\mathrm{{KL}}}$ , and we utilize the following loss function
|
| 68 |
+
|
| 69 |
+
$$
|
| 70 |
+
{\mathcal{L}}_{\mathrm{a}}\left( {{\mathbf{X}}_{\mathrm{a}},{\theta }_{e\mathrm{a}},{\theta }_{d\mathrm{a}}}\right) = {D}_{\mathrm{{KL}}}\left( {{\mathbf{X}}_{\mathrm{a}}\parallel {\widehat{\mathbf{X}}}_{\mathrm{a}}}\right) . \tag{4}
|
| 71 |
+
$$
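As a concrete illustration of the encoder block in Eq. (3) and the reconstruction loss in Eq. (4), a minimal PyTorch sketch follows. It is not the released implementation; the kernel shape and stride are taken from the values reported in the supplementary material, and the small constant in `generalized_kl` is our own addition to keep the logarithm defined.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One encoder block of Eq. (3): CNN -> batch normalization -> ReLU."""
    def __init__(self, in_ch, out_ch, kernel=4, stride=2, padding=1):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel, stride, padding),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.block(x)

def generalized_kl(x, x_hat, eps=1e-7):
    """Generalized KL divergence of Eq. (4), D_KL(X || X_hat), for non-negative inputs."""
    x, x_hat = x + eps, x_hat + eps
    return (x * torch.log(x / x_hat) - x + x_hat).sum()
```

Stacking ${N}_{\mathrm{{CNN}}} = 5$ such blocks with 128 filters each, as reported in the supplementary material, yields the latent representation ${\mathbf{Z}}_{\mathrm{a}}$.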
|
| 72 |
+
|
| 73 |
+
---
|
| 74 |
+
|
| 75 |
+
${}^{1}$ For the clarity of notation, the superscript $q$ is dropped here and for the rest of the document, unless it is explicitly needed.
|
| 76 |
+
|
| 77 |
+
---
|
| 78 |
+
|
| 79 |
+
Each audio signal represented by ${\mathbf{X}}_{\mathrm{a}}$ is annotated by a set of tags from a vocabulary of size $C$ . We want to exploit the semantics of each tag and, at the same time, capture the semantic relationships between tags. For that reason, we opt to use another AE structure, which outputs a latent learned representation of the set of tags of ${\mathbf{X}}_{\mathrm{a}}$ as the learned tag features, and then tries to reconstruct the tags from that latent representation. Similar approaches have been used in (Silberer & Lapata, 2014), where an AE structure was employed in order to learn an embedding from a $k$ -hot encoding of tags/words that would encapsulate semantic information. Specifically, we represent the set of tags for ${\mathbf{X}}_{\mathrm{a}}$ as a multi-hot vector, ${\mathbf{y}}_{\mathrm{t}} \in \{ 0,1{\} }^{C}$ . We again use an encoder ${e}_{\mathrm{t}}$ and a decoder ${d}_{\mathrm{t}}$ , to obtain a learned latent representation of ${\mathbf{y}}_{\mathrm{t}}$ as
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
{\mathbf{z}}_{\mathrm{t}} = {e}_{\mathrm{t}}\left( {{\mathbf{y}}_{\mathrm{t}};{\theta }_{e\mathrm{t}}}\right) \text{, and} \tag{5}
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
$$
|
| 86 |
+
{\widehat{\mathbf{y}}}_{\mathrm{t}} = {d}_{\mathrm{t}}\left( {{\mathbf{z}}_{\mathrm{t}};{\theta }_{d\mathrm{t}}}\right) , \tag{6}
|
| 87 |
+
$$
|
| 88 |
+
|
| 89 |
+
where ${\mathbf{z}}_{\mathrm{t}} \in {\mathbb{R}}_{ \geq 0}^{M}$ is the learned latent representation of the tags ${\mathbf{y}}_{\mathrm{t}}$ of ${\mathbf{X}}_{\mathrm{a}}$ , and ${\widehat{\mathbf{y}}}_{\mathrm{t}}$ is the reconstructed multi-hot encoding of the same tags. The ${e}_{\mathrm{t}}$ consists of a set of trainable feed-forward linear layers, where each layer is followed by a BN and a ReLU, similar to Eq. 3. That is, if ${\mathrm{{FNN}}}^{{l}_{\mathrm{t}}}$ is the ${l}_{\mathrm{t}}$ -th feed-forward linear layer, then
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
{\mathbf{h}}^{{l}_{t}} = \operatorname{ReLU}\left( {{\mathrm{{BN}}}^{{l}_{t}}\left( {{\mathrm{{FNN}}}^{{l}_{t}}\left( {\mathbf{h}}^{{l}_{t} - 1}\right) }\right) }\right) , \tag{7}
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
where ${l}_{\mathrm{t}} = 1,\ldots ,{N}_{\mathrm{{FNN}}}$ , ${\mathbf{h}}^{{N}_{\mathrm{{FNN}}}} = {\mathbf{z}}_{\mathrm{t}}$ , and ${\mathbf{h}}^{0} = {\mathbf{y}}_{\mathrm{t}}$ . To obtain the reconstructed version of ${\mathbf{y}}_{\mathrm{t}}$ , ${\widehat{\mathbf{y}}}_{\mathrm{t}}$ , through ${\mathbf{z}}_{\mathrm{t}}$ , we use the decoder ${d}_{\mathrm{t}}$ , which is modeled analogously to ${e}_{\mathrm{t}}$ and contains another set of ${N}_{\mathrm{{FNN}}}$ feed-forward linear layers. ${d}_{\mathrm{t}}$ processes ${\mathbf{z}}_{\mathrm{t}}$ similarly to Eq. 7, with ${\mathbf{h}}^{1 + {N}_{\mathrm{{FNN}}}}$ being the output of the first feed-forward linear layer of ${d}_{\mathrm{t}}$ , and ${\mathbf{h}}^{2{N}_{\mathrm{{FNN}}}} = {\widehat{\mathbf{y}}}_{\mathrm{t}}$ . To optimize ${e}_{\mathrm{t}}$ and ${d}_{\mathrm{t}}$ we utilize the loss ${\mathcal{L}}_{\mathrm{t}}\left( {{\mathbf{y}}_{\mathrm{t}},{\theta }_{e\mathrm{t}},{\theta }_{d\mathrm{t}}}\right) = {CE}\left( {{\mathbf{y}}_{\mathrm{t}},{\widehat{\mathbf{y}}}_{\mathrm{t}}}\right)$ , where ${CE}$ is the cross-entropy function.
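For concreteness, a minimal PyTorch sketch of the tag autoencoder follows. The layer sizes (1000 → 512 → 512 → 1152) come from the supplementary material; the mirrored decoder shape and the final sigmoid (so that the cross-entropy reconstruction loss applies element-wise to the multi-hot targets) are our assumptions, not details confirmed by the paper.

```python
import torch.nn as nn

def linear_block(in_dim, out_dim):
    """One block of Eq. (7): linear layer -> batch normalization -> ReLU."""
    return nn.Sequential(nn.Linear(in_dim, out_dim),
                         nn.BatchNorm1d(out_dim),
                         nn.ReLU())

# e_t: multi-hot tag vector (C = 1000) -> latent z_t
tag_encoder = nn.Sequential(linear_block(1000, 512),
                            linear_block(512, 512),
                            linear_block(512, 1152))

# d_t: latent z_t -> reconstructed tag probabilities y_hat_t
tag_decoder = nn.Sequential(linear_block(1152, 512),
                            linear_block(512, 512),
                            nn.Linear(512, 1000),
                            nn.Sigmoid())
```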
|
| 96 |
+
|
| 97 |
+
### 2.2. Alignment of acoustic and semantic features
|
| 98 |
+
|
| 99 |
+
One of the main targets of our method is to infuse semantic information from the latent representation of tags into the learned acoustic features of audio. To do this, we maximize the agreement between (i.e. align) the paired latent representations of the audio signal, ${\mathbf{Z}}_{\mathrm{a}}^{q}$ , and the corresponding tags, ${\mathbf{z}}_{\mathrm{t}}^{q}$ , inspired by previous and related work on image processing (Feng et al., 2014; Schonfeld et al., 2019), and by using a contrastive loss, similarly to (Sohn, 2016; Chen et al., 2020). Aligning these two latent representations (by pushing ${\mathbf{Z}}_{\mathrm{a}}^{q}$ towards ${\mathbf{z}}_{\mathrm{t}}^{q}$ ) will infuse ${\mathbf{Z}}_{\mathrm{a}}^{q}$ with information from ${\mathbf{z}}_{\mathrm{t}}^{q}$ . This task is expected to be difficult, due to the fact that some acoustic aspects may not be covered by the tags, or that some existing tags may be wrong or uninformative. Therefore, we utilize two affine transforms, and we align the outputs of these transforms. Specifically, we utilize the affine transforms ${\mathrm{{AFF}}}_{\mathrm{a}}$ and ${\mathrm{{AFF}}}_{\mathrm{t}}$ , parameterized by ${\theta }_{\mathrm{{af}} - \mathrm{a}}$ and ${\theta }_{\mathrm{{af}} - \mathrm{t}}$ respectively, as
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
{\mathbf{\Phi }}_{\mathrm{a}} = {\mathrm{{AFF}}}_{\mathrm{a}}\left( {{\mathbf{Z}}_{\mathrm{a}};{\theta }_{\mathrm{{af}} - \mathrm{a}}}\right) \text{, and} \tag{8}
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
$$
|
| 106 |
+
{\mathbf{\phi }}_{\mathrm{t}} = {\mathrm{{AFF}}}_{\mathrm{t}}\left( {{\mathbf{z}}_{\mathrm{t}};{\theta }_{\mathrm{{af}} - \mathrm{t}}}\right) . \tag{9}
|
| 107 |
+
$$
|
| 108 |
+
|
| 109 |
+
where ${\mathbf{\Phi }}_{\mathrm{a}} \in {\mathbb{R}}_{ \geq 0}^{K \times {T}^{\prime } \times {F}^{\prime }}$ and ${\mathbf{\phi }}_{\mathrm{t}} \in {\mathbb{R}}_{ \geq 0}^{M}$ . Then, since ${\mathbf{\Phi }}_{\mathrm{a}}$ is a tensor and ${\mathbf{\phi }}_{\mathrm{t}}$ a vector, we flatten ${\mathbf{\Phi }}_{\mathrm{a}}$ to ${\mathbf{\phi }}_{\mathrm{a}} \in {\mathbb{R}}_{ \geq 0}^{K{T}^{\prime }{F}^{\prime }}$ . To align ${\phi }_{\mathrm{a}}$ with its paired ${\phi }_{\mathrm{t}}$ , we utilize randomly (and without repetition) sampled minibatches ${\mathbb{G}}_{b} = {\left\{ \left( {\mathbf{X}}_{a}^{b},{\mathbf{y}}_{\mathrm{t}}^{b}\right) \right\} }_{b = 1}^{{N}_{\mathrm{b}}}$ from our dataset $\mathbb{G}$ , where ${N}_{\mathrm{b}}$ is the number of paired examples in the minibatch ${\mathbb{G}}_{b}$ . For each minibatch ${\mathbb{G}}_{b}$ , we align ${\phi }_{\mathrm{a}}^{b}$ with its paired ${\phi }_{\mathrm{t}}^{b}$ and, at the same time, we optimize ${e}_{\mathrm{a}},{d}_{\mathrm{a}},{e}_{\mathrm{t}},{d}_{\mathrm{t}},{\mathrm{{AFF}}}_{\mathrm{a}}$ and ${\mathrm{{AFF}}}_{\mathrm{t}}$ . To do this, we follow (Chen et al., 2020) and we use the contrastive loss function
|
| 110 |
+
|
| 111 |
+
$$
{\mathcal{L}}_{\xi }\left( {{\mathbb{G}}_{b},{\mathbf{\Theta }}_{\mathrm{c}}}\right) = \mathop{\sum }\limits_{{b = 1}}^{{N}_{\mathrm{b}}} - \log \frac{\Xi \left( {{\phi }_{\mathrm{a}}^{b},{\phi }_{\mathrm{t}}^{b},\tau }\right) }{\mathop{\sum }\limits_{{i = 1}}^{{N}_{\mathrm{b}}}{\mathbb{1}}_{\left\lbrack i \neq b\right\rbrack }\Xi \left( {{\phi }_{\mathrm{a}}^{b},{\phi }_{\mathrm{t}}^{i},\tau }\right) }\text{, where} \tag{10}
$$

$$
\Xi \left( {\mathbf{a},\mathbf{b},\tau }\right) = \exp \left( {\operatorname{sim}\left( {\mathbf{a},\mathbf{b}}\right) {\tau }^{-1}}\right) , \tag{11}
$$

$$
\operatorname{sim}\left( {\mathbf{a},\mathbf{b}}\right) = {\mathbf{a}}^{\top }\mathbf{b}{\left( \parallel \mathbf{a}\parallel \parallel \mathbf{b}\parallel \right) }^{-1}, \tag{12}
$$
|
| 124 |
+
|
| 125 |
+
${\Theta }_{\mathrm{c}} = \left\{ {{\theta }_{e\mathrm{a}},{\theta }_{\mathrm{{af}} - \mathrm{a}},{\theta }_{e\mathrm{t}},{\theta }_{\mathrm{{af}} - \mathrm{t}}}\right\}$ , ${\mathbb{1}}_{\mathrm{A}}$ is the indicator function, equal to 1 iff the condition $\mathrm{A}$ holds and 0 otherwise, and $\tau$ is a temperature hyper-parameter. Finally, we jointly optimize ${\theta }_{e\mathrm{a}},{\theta }_{d\mathrm{a}},{\theta }_{e\mathrm{t}},{\theta }_{d\mathrm{t}}$ , together with the parameters of the affine transforms, for each minibatch ${\mathbb{G}}_{b}$ , minimizing
|
| 126 |
+
|
| 127 |
+
$$
{\mathcal{L}}_{\text{total}}\left( {{\mathbb{G}}_{b},\mathbf{\Theta }}\right) = {\lambda }_{\mathrm{a}}\mathop{\sum }\limits_{{b = 1}}^{{N}_{\mathrm{b}}}{\mathcal{L}}_{\mathrm{a}}\left( {{\mathbf{X}}_{\mathrm{a}}^{b},{\mathbf{\Theta }}_{\mathrm{a}}}\right) + {\lambda }_{\mathrm{t}}\mathop{\sum }\limits_{{b = 1}}^{{N}_{\mathrm{b}}}{\mathcal{L}}_{\mathrm{t}}\left( {{\mathbf{y}}_{\mathrm{t}}^{b},{\mathbf{\Theta }}_{\mathrm{t}}}\right) + {\lambda }_{\xi }{\mathcal{L}}_{\xi }\left( {{\mathbb{G}}_{b},{\mathbf{\Theta }}_{\mathrm{c}}}\right) , \tag{13}
$$
|
| 134 |
+
|
| 135 |
+
where ${\Theta }_{\mathrm{a}} = \left\{ {{\theta }_{e\mathrm{a}},{\theta }_{d\mathrm{a}}}\right\}$ , ${\Theta }_{\mathrm{t}} = \left\{ {{\theta }_{e\mathrm{t}},{\theta }_{d\mathrm{t}}}\right\}$ , $\Theta$ is the union of the ${\Theta }_{ \star }$ sets in Eq. (13), and each ${\lambda }_{ \star }$ is a hyper-parameter used for numerically balancing the different learning signals/losses. After the minimization of ${\mathcal{L}}_{\text{total }}$ , we use ${e}_{\mathrm{a}}$ as a pre-trained feature extractor for different audio classification tasks.
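To make Eqs. (10)-(12) concrete, the following is a hedged PyTorch sketch of the alignment loss for one minibatch (a minimal sketch under our own naming, not the released implementation); `phi_a` and `phi_t` stand for the flattened, projected audio and tag representations.

```python
import torch
import torch.nn.functional as F

def alignment_loss(phi_a, phi_t, tau=0.1):
    """Contrastive loss of Eq. (10) between paired audio/tag projections.

    phi_a, phi_t: (N_b, D) tensors, row b holding the projections of pair b.
    """
    phi_a = F.normalize(phi_a, dim=1)
    phi_t = F.normalize(phi_t, dim=1)
    logits = phi_a @ phi_t.t() / tau                      # Eqs. (11)-(12), as logits
    pos = torch.diag(logits)                              # matched pairs (i = b)
    mask = torch.eye(logits.size(0), dtype=torch.bool)    # exclude the positive pair
    neg = logits.masked_fill(mask, float("-inf"))         # denominator of Eq. (10)
    return (-(pos - torch.logsumexp(neg, dim=1))).sum()

# toy usage with random projections standing in for a minibatch of 8 pairs
torch.manual_seed(0)
print(alignment_loss(torch.randn(8, 1152), torch.randn(8, 1152)))
```

In the joint objective of Eq. (13), this term would simply be weighted by ${\lambda }_{\xi }$ and added to the two reconstruction losses.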
|
| 136 |
+
|
| 137 |
+
## 3. Evaluation
|
| 138 |
+
|
| 139 |
+
We conduct an ablation study where we compare different methods for learning audio embeddings on their classification performance on different tasks, using the embeddings from the employed methods as input. This allows us to evaluate the benefit of using the alignment and the reconstruction objectives in our method. We consider a traditional set of hand-crafted features as a low anchor. Additionally, we perform a correlation analysis with a set of acoustic features in order to understand what kind of acoustic properties are reflected in the learned embeddings.
|
| 140 |
+
|
| 141 |
+
### 3.1. Pre-training dataset and data pre-processing
|
| 142 |
+
|
| 143 |
+
For creating our pre-training dataset $\mathbb{G}$ , we collect all sounds from Freesound (Font et al., 2013) that have a maximum duration of 10 seconds. We remove sounds that are used in any of the datasets of our downstream tasks. We apply a uniform sampling rate of 22 kHz and a length of 10 s to all collected sounds, by resampling and zero-padding as needed. We extract $F = 96$ log-scaled mel-band energies using sliding windows of 1024 samples $\left( { \approx {46}\mathrm{\;{ms}}}\right)$ , with 50% overlap and a Hamming window. We create overlapping patches of $T = 96$ feature vectors $\left( { \approx {2.2}\mathrm{\;s}}\right)$ , using a step of 12 vectors. Then, we select the $T \times F$ patch with the maximum energy. This process is simple, but we assume that in many cases the associated tags will refer to salient events present in regions of high energy. We process the tags associated with the audio clips by first removing any stop-words and converting plural nouns to singular. We remove tags that occur in more than 70% of the sounds, as they can be considered less informative, and keep the $C = 1000$ most frequently occurring of the remaining tags, which we encode using the multi-hot scheme. Finally, we discard sounds that were left with no tag after this filtering process. This process generated $Q = 189896$ spectrogram patches for our dataset $\mathbb{G}$ . 10% of these patches are kept for validation, and all patches are scaled to values between 0 and 1.
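The patch extraction described above can be sketched as follows with librosa; the function and its defaults are ours, and details such as the padding strategy and the energy criterion (here, the sum of the mel energies in a patch) are assumptions rather than the exact released pipeline.

```python
import librosa
import numpy as np

def max_energy_patch(path, sr=22050, n_mels=96, patch_len=96, patch_hop=12):
    """Load a clip, compute log mel-band energies, and keep the most energetic patch."""
    y, _ = librosa.load(path, sr=sr, duration=10.0)
    y = np.pad(y, (0, max(0, sr * 10 - len(y))))           # zero-pad to 10 s
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=1024, hop_length=512, n_mels=n_mels, window="hamming").T
    logmel = np.log(mel + 1e-6)                             # (frames, n_mels)
    starts = range(0, len(mel) - patch_len + 1, patch_hop)
    best = max(starts, key=lambda s: mel[s:s + patch_len].sum())
    return logmel[best:best + patch_len]                    # (patch_len, n_mels) patch
```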
|
| 144 |
+
|
| 145 |
+
We consider three different cases for evaluating the benefit of the alignment and the reconstruction objectives. The first is the method presented in Section 2, termed AE-C. In the second, termed E-C, we do not employ ${d}_{\mathrm{a}}$ and ${d}_{\mathrm{t}}$ , and we optimize ${e}_{\mathrm{a}}$ using only ${\mathcal{L}}_{\xi }$ , similar to (Chen et al., 2020). The third, termed CNN, is composed of ${e}_{\mathrm{a}}$ followed by two fully connected layers, and is optimized for directly predicting the tag vector ${\mathbf{y}}_{\mathrm{t}}$ using the ${CE}$ function. Finally, we employ the first 20 mel-frequency cepstral coefficients (MFCCs) with their $\Delta \mathrm{s}$ and ${\Delta \Delta }\mathrm{s}$ as a low anchor, aggregated using means and standard deviations over time, and we term this case MFCCs.
|
| 146 |
+
|
| 147 |
+
### 3.2. Downstream classification tasks
|
| 148 |
+
|
| 149 |
+
We consider three different audio classification tasks: i) sound event recognition/tagging (SER), ii) music genre classification (MGC), and iii) musical instrument classification (MIC). For SER, we use the UrbanSound8K dataset (US8K) (Salamon et al., 2014), which consists of around 8000 single-labeled sounds of at most 4 seconds, organized in 10 classes. We use the provided folds for cross-validation. For MGC, we use the fault-filtered version of the GTZAN dataset (Tzanetakis & Cook, 2002; Kereliuk et al., 2015), consisting of single-labeled music excerpts of 30 seconds, split into pre-defined sets of 443 songs for training and 290 for testing. Finally, for MIC, we use the NSynth dataset (Engel et al., 2017), which consists of more than 300k sound samples organized in 10 instrument families. However, because we are interested in seeing how our model performs with a relatively low amount of training data, we randomly sample from NSynth a balanced set of 20k samples from the training set, which corresponds to approximately 7% of the original set. The evaluation set is kept the same.
|
| 150 |
+
|
| 151 |
+
For the above tasks and datasets, we use non-overlapping frames of audio clips that are calculated similarly to the pre-training dataset, and are given as input to the different methods in order to obtain the embeddings. Then, these embeddings are aggregated into a single vector (e.g. of dimensionality 1152 for our ${e}_{\mathrm{a}}$ ) using the mean statistic, and are used as input to a classifier that is optimized for each corresponding task. Embedding and MFCC vectors are standardized to zero mean and unit variance, using statistics calculated from the training split of each task. As a classifier for each of the different tasks, we use a multi-layer perceptron (MLP) with one hidden layer of 256 units, similar to what is used in (Cramer et al., 2019). To obtain an unbiased evaluation of our method, we repeat the training procedure of the MLP in each task 10 times and report the mean accuracies.
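A hedged sketch of this evaluation protocol with scikit-learn follows; the random arrays merely stand in for the mean-pooled clip embeddings and labels of a downstream task, and the classifier settings beyond the 256-unit hidden layer are our own choices.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# stand-ins for mean-pooled clip embeddings (e.g. 1152-d) and class labels
X_train, y_train = rng.normal(size=(200, 1152)), rng.integers(0, 10, 200)
X_test, y_test = rng.normal(size=(50, 1152)), rng.integers(0, 10, 50)

scaler = StandardScaler().fit(X_train)          # statistics from the training split only
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=200)
clf.fit(scaler.transform(X_train), y_train)
print("accuracy:", clf.score(scaler.transform(X_test), y_test))
```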
|
| 152 |
+
|
| 153 |
+
### 3.3. Correlation analysis with acoustic features
|
| 154 |
+
|
| 155 |
+
We perform a correlation analysis using a similarity measure based on Canonical Correlation Analysis (CCA) (Hardoon et al., 2004), to investigate the correlation of the output embeddings from our method with various low-level acoustic features. Similar to (Raghu et al., 2017), we use sounds from the validation set of the pre-training dataset $\mathbb{G}$ , and we compute the canonical correlation similarity (CCS) of our audio embedding ${\mathbf{Z}}_{a}$ with statistics of acoustic features computed with the librosa library (McFee et al., 2015). These features correspond to MFCCs, chromagram, spectral centroid, and spectral bandwidth, all computed at a frame level.
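As an illustration, a simple CCA-based similarity can be computed as below with scikit-learn; the arrays and the number of canonical components are placeholders, and the exact CCS used in the paper follows (Raghu et al., 2017) rather than this sketch.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_similarity(Z, A, n_components=5):
    """Mean correlation of the canonical components of embeddings Z and acoustic features A."""
    cca = CCA(n_components=n_components, max_iter=1000)
    Z_c, A_c = cca.fit_transform(Z, A)
    corrs = [np.corrcoef(Z_c[:, k], A_c[:, k])[0, 1] for k in range(n_components)]
    return float(np.mean(corrs))

# toy usage: random matrices standing in for embeddings and MFCC statistics
rng = np.random.default_rng(0)
print(cca_similarity(rng.normal(size=(500, 64)), rng.normal(size=(500, 20))))
```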
|
| 156 |
+
|
| 157 |
+
## 4. Results
|
| 158 |
+
|
| 159 |
+
Table 1 presents the performance of the different embeddings and of our MFCCs baseline, together with results reported in the literature, which are briefly explained in the supplementary material section. In all the tasks, the AE-C and E-C embeddings yielded better results than the MFCCs baseline, showing that it is possible to learn meaningful audio representations by taking advantage of tag metadata. In contrast, the CNN case, which predicts the tags directly, does not even reach the performance of the MFCCs features; this indicates that the benefit of leveraging user-provided noisy tags for building general audio representations comes from our alignment-based approach rather than from direct tag prediction. When comparing the different proposed embeddings, we see that the AE-C case consistently leads to better results. For the MIC (NSynth) task, combining reconstruction and contrastive objectives (i.e. the AE-C case) brings important benefits. For the MGC (GTZAN) task, these benefits are not as pronounced, and finally, when looking at the SER (US8K) task, adding the reconstruction objective does not improve the results much. Our assumption is that recognizing musical instruments can be done more easily using lower-level features reflecting acoustic characteristics of the sounds, and that the reconstruction objective imposed by the autoencoder architecture forces the embedding to reflect low-level characteristics present in the spectrogram. However, for recognizing urban sounds or musical genres, a feature that mainly reflects semantic information is needed, which seems to be learned successfully when considering the contrastive objective.
|
| 160 |
+
|
| 161 |
+
Table 1. Average mean accuracies for SER, MGC, and MIC. Additional performances are taken from the literature (Cramer et al., 2019; Salamon & Bello, 2017; Pons & Serra, 2019b; Lee et al., 2018; Ramires & Serra, 2019).
|
| 162 |
+
|
| 163 |
+
| | US8K | GTZAN | NSynth |
|---|---|---|---|
| MFCCs | 65.8 | 49.8 | 62.6 |
| AE-C | 72.7 | 60.7 | 73.1 |
| E-C | 72.5 | 58.9 | 69.5 |
| CNN | 48.4 | 47.0 | 56.4 |
| OpenL3 | 78.2 | - | - |
| VGGish | 73.4 | - | - |
| DeepConv | 79.0 | - | - |
| rVGG | 70.7 | 59.7 | - |
| sampleCNN | - | 82.1 | - |
| smallCNN | - | - | 73.8 |
|
| 164 |
+
|
| 165 |
+
Table 2. CCA correlation scores between the embedding model outputs and some acoustic feature statistics.
|
| 166 |
+
|
| 167 |
+
| | MFCCs (mean) | MFCCs (var) | MFCCs (skew) | Chromagram (mean) | Chromagram (var) | Chromagram (skew) |
|---|---|---|---|---|---|---|
| AE-C | 0.84 | 0.51 | 0.42 | 0.48 | 0.37 | 0.40 |
| E-C | 0.58 | 0.49 | 0.39 | 0.38 | 0.36 | 0.32 |
| CNN | 0.73 | 0.43 | 0.32 | 0.59 | 0.33 | 0.48 |

| | Spectral Centroid (mean) | Spectral Centroid (var) | Spectral Centroid (skew) | Spectral Bandwidth (mean) | Spectral Bandwidth (var) | Spectral Bandwidth (skew) |
|---|---|---|---|---|---|---|
| AE-C | 0.97 | 0.87 | 0.80 | 0.96 | 0.86 | 0.84 |
| E-C | 0.93 | 0.82 | 0.76 | 0.92 | 0.82 | 0.81 |
| CNN | 0.95 | 0.76 | 0.74 | 0.91 | 0.72 | 0.80 |
|
| 170 |
+
|
| 171 |
+
Comparing our method to others for SER, we can see that we are slightly outperformed by VGGish (Hershey et al., 2017; Gemmeke et al., 2017), according to results taken from (Cramer et al., 2019), which has been trained with millions of manually annotated audio files using predefined categories. This shows that our approach, which only takes advantage of smaller-scale content with its original tag metadata, is very promising for learning competitive audio features. However, our model is still far from reaching the performance given by OpenL3 or the current state-of-the-art DeepConv with data augmentation. Similarly, in MGC, the sampleCNN classifier pre-trained on the Million Song Dataset (MSD) (Lee et al., 2018) produces much better results than our approach. However, all these models have been either trained with much more data than ours, or use a more powerful classifier. Finally, the NSynth dataset was originally released in order to train generative models rather than classifiers. Still, results from (Ramires & Serra, 2019) show that our approach, trained using around 7% of the training data, is only slightly outperformed by a CNN trained with all the training data (smallCNN).
|
| 172 |
+
|
| 173 |
+
Table 2 shows the correlation of the different embeddings ${\mathbf{Z}}_{\mathrm{a}}$ with the mean, the variance, and the skewness of the different acoustic feature vectors. Overall, we observe a consistent increase of the correlation between the acoustic features and embeddings trained with models containing an AE structure. This suggests that the reconstruction objective enables learning features that reflect some low-level acoustic characteristics of audio signals, which makes the embedding more valuable as a general-purpose feature. More specifically, there is a large increase in correlation between the mean of the MFCCs and models that contain an AE structure, showing that they can capture more timbral characteristics of the signal. However, variance and skewness did not increase considerably, which may mean that our embeddings fail to capture temporal cues. Considering chromagrams, which reflect the harmonic content of a sound, we see little improvement with AE models. This suggests that our embeddings lack some important musical characteristics. Regarding the spectral centroid and bandwidth, we only observe a slight increase of correlations with AE-based embeddings.
|
| 174 |
+
|
| 175 |
+
## 5. Conclusions
|
| 176 |
+
|
| 177 |
+
In this work we present a method for learning an audio representation that can capture acoustic and semantic characteristics of a wide range of sounds. We utilize two heterogeneous autoencoders (AEs), one taking as input an audio spectrogram and the other processing a tag representation. These AEs are jointly trained, and a contrastive loss aligns their latent representations by leveraging associated pairs of audio and tags. We evaluate our method by conducting an ablation study, where we compare different methods for learning audio representations over three different classification tasks. We also perform a correlation analysis with acoustic features in order to understand what type of acoustic characteristics the embedding captures.
|
| 178 |
+
|
| 179 |
+
Results indicate that combining reconstruction objectives with a contrastive learning framework enables learning audio features that reflect both semantic and lower-level acoustic characteristics of sounds, which makes them suitable for general machine listening applications. Future work may focus on improving the network models, for instance by using audio architectures that can capture more of the temporal aspects and dynamics present in audio signals.
|
| 180 |
+
|
| 181 |
+
## Supplementary Material
|
| 182 |
+
|
| 183 |
+
## Code and data
|
| 184 |
+
|
| 185 |
+
The code of our method is available online at: https://github.com/xavierfav/coala. We provide the pre-training dataset $\mathbb{G}$ online and publicly at: https://zenodo.org/record/3887261. Sounds were accessed from the Freesound API on the 7th of May, 2019.
|
| 186 |
+
|
| 187 |
+
## Utilized hyper-parameters, training procedure, and models
|
| 188 |
+
|
| 189 |
+
For the audio autoencoder, we use ${N}_{\mathrm{{CNN}}} = 5$ convolutional blocks, each one containing ${K}_{{l}_{e\mathrm{a}}} = {128}$ filters of shape $4 \times 4$ with a stride of $2 \times 2$ , yielding an embedding ${\phi }_{\mathrm{a}}$ of size 1152. This audio encoder model has approximately 2.4M parameters. The tag autoencoder is composed of ${N}_{\mathrm{{FNN}}} = 3$ layers of size 512, 512, and 1152, accepting a multi-hot vector of dimension 1000 as input. We train the models for 200 epochs using a minibatch size of ${N}_{\mathrm{b}} = {128}$ , with an SGD optimizer and a learning rate of 0.005. We utilize the validation set to set the different $\lambda$ 's of Eq. (13) and the contrastive loss temperature parameter $\tau$ , to ${\lambda }_{\mathrm{a}} = {\lambda }_{\mathrm{t}} = 5$ , ${\lambda }_{\xi } = {10}$ , and $\tau = {0.1}$ . We add dropout regularization with a rate of 25% after each activation layer to avoid overfitting during training. The CNN baseline that is trained by directly predicting the multi-hot tag vectors from the audio spectrogram follows the same architecture as the encoder of the audio autoencoder. For training, we add 2 fully connected layers and train it for 20 epochs, also using a minibatch size of ${N}_{\mathrm{b}} = {128}$ and an SGD optimizer with a learning rate of 0.005.
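For reference, the optimization setup above corresponds roughly to the following sketch (the placeholder model and loss tensors are ours; only the learning rate, loss weights, and temperature come from the paper):

```python
import torch
import torch.nn as nn

# placeholder standing in for the jointly trained encoders, decoders, and affine transforms
model = nn.Linear(1152, 1152)
optimizer = torch.optim.SGD(model.parameters(), lr=0.005)

lambda_a, lambda_t, lambda_xi, tau = 5.0, 5.0, 10.0, 0.1   # chosen on the validation set

# one (dummy) optimization step of Eq. (13); in practice the three losses come from the
# audio AE, the tag AE, and the alignment term (which uses tau internally), respectively
loss_a, loss_t, loss_xi = (model(torch.randn(4, 1152)).pow(2).mean() for _ in range(3))
loss_total = lambda_a * loss_a + lambda_t * loss_t + lambda_xi * loss_xi
optimizer.zero_grad()
loss_total.backward()
optimizer.step()
```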
|
| 190 |
+
|
| 191 |
+
## Tag processing
|
| 192 |
+
|
| 193 |
+
Removing stop-words from sound tags is done using the NLTK python library (https://www.nltk.org/). Converting plural nouns to singular is done with the inflect python library (https://github.com/jazzband/inflect). Additionally, we transform all tags to lowercase.
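A small sketch of this tag normalization is shown below (assuming the NLTK stop-word list has been downloaded beforehand with `nltk.download('stopwords')`; the function name is ours):

```python
import inflect
from nltk.corpus import stopwords

p = inflect.engine()
stop_words = set(stopwords.words("english"))

def normalize_tag(tag):
    """Lowercase a tag, drop stop-words, and singularize plural nouns."""
    tag = tag.lower()
    if tag in stop_words:
        return None
    singular = p.singular_noun(tag)   # returns False when the tag is not a plural noun
    return singular or tag

print([normalize_tag(t) for t in ["Kicks", "the", "techno"]])   # ['kick', None, 'techno']
```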
|
| 194 |
+
|
| 195 |
+
## Models from the literature
|
| 196 |
+
|
| 197 |
+
OpenL3 (Cramer et al., 2019) is an open-source implementation of Look, Listen, and Learn (L3-Net) (Arandjelovic & Zisserman, 2017). It consists of an embedding model using blocks of convolutional and max-pooling layers, trained through self-supervised learning of audio-visual correspondence in videos from YouTube. The model has around 4.7M parameters and computes embedding vectors of size 6144. In (Cramer et al., 2019), the authors report the classification accuracies of different variants of the model used as a feature extractor combined with an MLP classifier on the US8K dataset. The reported mean accuracy is 78.2%.
|
| 198 |
+
|
| 199 |
+
VGGish (Hershey et al., 2017; Gemmeke et al., 2017) is an audio-based CNN model, a modified version of the VGGNet model (Simonyan & Zisserman, 2014) trained to predict video tags from the Youtube-8M dataset (Abu-El-Haija et al., 2016). The model has around 62M parameters and computes embedding vectors of size 128. Its accuracy when used as a feature extractor combined with an MLP classifier on the US8K dataset is reported in (Cramer et al., 2019) as 73.4%.
|
| 200 |
+
|
| 201 |
+
DeepConv (Salamon & Bello, 2017) is a deep neural network composed of convolutional and max-pooling layers. When trained with data augmentation on the US8K dataset, it achieved ${79.0}\%$ accuracy.
|
| 202 |
+
|
| 203 |
+
rVGG (Pons & Serra, 2019b) corresponds to a non-trained (randomly weighted) VGGish model. The referenced work experiments with using it as a feature extractor, comparing embeddings from different layers of the network. The best accuracies on US8K and GTZAN (fault-filtered) when combined with an SVM classifier were reported as 70.7% and 59.7% respectively, using an embedding vector of size 3585.
|
| 204 |
+
|
| 205 |
+
sampleCNN (Lee et al., 2018) is a deep neural network that takes the raw waveform as input, is composed of many small 1D convolutional layers, and has been designed for music classification tasks. When pre-trained on the Million Song Dataset (Bertin-Mahieux et al., 2011), this model reached an accuracy of 82.1% on the GTZAN dataset (fault-filtered).
|
| 206 |
+
|
| 207 |
+
smallCNN (Pons et al., 2017b) is a neural network composed of one CNN layer with filters of different sizes that can capture timbral characteristics of the sounds. It is combined with pooling operations and a fully-connected layer in order to predict labels. In (Ramires & Serra, 2019), it has been trained with the NSynth dataset in order to predict the instrument family classes and was reported to reach 73.8% accuracy.
|
| 208 |
+
|
| 209 |
+
## Acknowledgement
|
| 210 |
+
|
| 211 |
+
X. Favory, K. Drossos, and T. Virtanen would like to acknowledge CSC Finland for computational resources. The authors would also like to thank all the Freesound users that have been sharing very valuable content for many years. Xavier Favory is also grateful for the GPU donated by NVidia.
|
| 212 |
+
|
| 213 |
+
## References
|
| 214 |
+
|
| 215 |
+
Abu-El-Haija, S., Kothari, N., Lee, J., Natsev, P., Toderici, G., Varadarajan, B., and Vijayanarasimhan, S. Youtube-8m: A large-scale video classification benchmark. arXiv preprint arXiv:1609.08675, 2016.
|
| 216 |
+
|
| 217 |
+
Alonso-Jiménez, P., Bogdanov, D., Pons, J., and Serra, X. Tensorflow audio models in Essentia. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 266-270, 2020.
|
| 218 |
+
|
| 219 |
+
Amiriparian, S., Freitag, M., Cummins, N., and Schuller, B. Sequence to sequence autoencoders for unsupervised representation learning from audio. In Proc. of the DCASE 2017 Workshop, 2017.
|
| 220 |
+
|
| 221 |
+
Arandjelovic, R. and Zisserman, A. Look, listen and learn. In Proceedings of the IEEE International Conference on Computer Vision, pp. 609-617, 2017.
|
| 222 |
+
|
| 223 |
+
Aytar, Y., Vondrick, C., and Torralba, A. Soundnet: Learning sound representations from unlabeled video. In Advances in neural information processing systems, pp. 892-900, 2016.
|
| 224 |
+
|
| 225 |
+
Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798-1828, 2013.
|
| 226 |
+
|
| 227 |
+
Bertin-Mahieux, T., Ellis, D. P., Whitman, B., and Lamere, P. The million song dataset. In Proceedings of the 12th International Conference on Music Information Retrieval (ISMIR 2011), 2011.
|
| 228 |
+
|
| 229 |
+
Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.
|
| 230 |
+
|
| 231 |
+
Choi, K., Fazekas, G., Sandler, M., and Cho, K. Transfer learning for music classification and regression tasks. arXiv preprint arXiv:1703.09179, 2017.
|
| 232 |
+
|
| 233 |
+
Cramer, J., Wu, H.-H., Salamon, J., and Bello, J. P. Look, listen, and learn more: Design choices for deep audio embeddings. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 3852-3856. IEEE, 2019.
|
| 234 |
+
|
| 235 |
+
Drossos, K., Mimilakis, S. I., Serdyuk, D., Schuller, G., Virtanen, T., and Bengio, Y. Mad twinnet: Masker-denoiser architecture with twin networks for monaural sound source separation. In 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2018.
|
| 236 |
+
|
| 237 |
+
Dumoulin, V. and Visin, F. A guide to convolution arithmetic for deep learning, 2016.
|
| 238 |
+
|
| 239 |
+
Engel, J., Resnick, C., Roberts, A., Dieleman, S., Norouzi, M., Eck, D., and Simonyan, K. Neural audio synthesis of musical notes with wavenet autoencoders. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1068-1077. JMLR. org, 2017.
|
| 240 |
+
|
| 241 |
+
Favory, X., Fonseca, E., Font, F., and Serra, X. Facilitating the manual annotation of sounds when using large taxonomies. In Proceedings of the 23rd Conference of Open Innovations Association FRUCT, pp. 60. FRUCT Oy, 2018.
|
| 242 |
+
|
| 243 |
+
Feng, F., Wang, X., and Li, R. Cross-modal retrieval with correspondence autoencoder. In Proceedings of the 22nd ACM international conference on Multimedia, pp. 7-16, 2014.
|
| 246 |
+
|
| 247 |
+
Font, F., Roma, G., and Serra, X. Freesound technical demo. In Proceedings of the 21st ACM international conference on Multimedia, pp. 411-412, 2013.
|
| 248 |
+
|
| 249 |
+
Gemmeke, J. F., Ellis, D. P., Freedman, D., Jansen, A., Lawrence, W., Moore, R. C., Plakal, M., and Ritter, M. Audio set: An ontology and human-labeled dataset for audio events. In 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 776-780. IEEE, 2017.
|
| 250 |
+
|
| 251 |
+
Hardoon, D. R., Szedmak, S., and Shawe-Taylor, J. Canonical correlation analysis: An overview with application to learning methods. Neural computation, 16(12):2639-2664, 2004.
|
| 252 |
+
|
| 253 |
+
Hershey, S., Chaudhuri, S., Ellis, D. P., Gemmeke, J. F., Jansen, A., Moore, R. C., Plakal, M., Platt, D., Saurous, R. A., Seybold, B., et al. Cnn architectures for large-scale audio classification. In 2017 ieee international conference on acoustics, speech and signal processing (icassp), pp. 131-135. IEEE, 2017.
|
| 254 |
+
|
| 255 |
+
Kereliuk, C., Sturm, B. L., and Larsen, J. Deep learning and music adversaries. IEEE Transactions on Multimedia, 17(11): 2059-2071, 2015.
|
| 256 |
+
|
| 257 |
+
Lee, J., Park, J., Kim, K. L., and Nam, J. Samplecnn: End-to-end deep convolutional neural networks using very small filters for music classification. Applied Sciences, 8(1):150, 2018.
|
| 258 |
+
|
| 259 |
+
Marchand, U. and Peeters, G. The extended ballroom dataset. In Conference of the International Society for Music Information Retrieval (ISMIR) late-breaking session, 2016.
|
| 260 |
+
|
| 261 |
+
McFee, B., Raffel, C., Liang, D., Ellis, D. P., McVicar, M., Battenberg, E., and Nieto, O. librosa: Audio and music signal analysis in python. In Proceedings of the 14th python in science conference, volume 8, 2015.
|
| 262 |
+
|
| 263 |
+
Mimilakis, S. I., Drossos, K., Santos, J. F., Schuller, G., Virtanen, T., and Bengio, Y. Monaural singing voice separation with skip-filtering connections and recurrent inference of time-frequency mask. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 721-725, 2018.
|
| 264 |
+
|
| 265 |
+
Park, J., Lee, J., Park, J., Ha, J.-W., and Nam, J. Representation learning of music using artist labels. arXiv preprint arXiv:1710.06648, 2017.
|
| 266 |
+
|
| 267 |
+
Pons, J. and Serra, X. musicnn: Pre-trained convolutional neural networks for music audio tagging. arXiv preprint arXiv:1909.06654, 2019a.
|
| 268 |
+
|
| 269 |
+
Pons, J. and Serra, X. Randomly weighted cnns for (music) audio classification. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 336-340. IEEE, 2019b.
|
| 270 |
+
|
| 271 |
+
Pons, J., Nieto, O., Prockup, M., Schmidt, E., Ehmann, A., and Serra, X. End-to-end learning for music audio tagging at scale. arXiv preprint arXiv:1711.02520, 2017a.
|
| 272 |
+
|
| 273 |
+
Pons, J., Slizovskaia, O., Gong, R., Gómez, E., and Serra, X. Timbre analysis of music audio signals with convolutional neural networks. In 2017 25th European Signal Processing Conference (EUSIPCO), pp. 2744-2748. IEEE, 2017b.
|
| 274 |
+
|
| 275 |
+
Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. In International Conference on Learning Representations (ICLR), 2016.
|
| 276 |
+
|
| 277 |
+
Raghu, M., Gilmer, J., Yosinski, J., and Sohl-Dickstein, J. Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. In Advances in Neural Information Processing Systems, pp. 6076-6085, 2017.
|
| 278 |
+
|
| 279 |
+
Ramires, A. and Serra, X. Data augmentation for instrument classification robust to audio effects. arXiv preprint arXiv:1907.08520, 2019.
|
| 280 |
+
|
| 281 |
+
Salamon, J. and Bello, J. P. Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Processing Letters, 24(3):279-283, 2017.
|
| 282 |
+
|
| 283 |
+
Salamon, J., Jacoby, C., and Bello, J. P. A dataset and taxonomy for urban sound research. In Proceedings of the 22nd ACM international conference on Multimedia, pp. 1041-1044, 2014.
|
| 284 |
+
|
| 285 |
+
Schonfeld, E., Ebrahimi, S., Sinha, S., Darrell, T., and Akata, Z. Generalized zero-and few-shot learning via aligned variational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8247-8255, 2019.
|
| 286 |
+
|
| 287 |
+
Silberer, C. and Lapata, M. Learning grounded meaning representations with autoencoders. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 721-732, 2014.
|
| 290 |
+
|
| 291 |
+
Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
|
| 292 |
+
|
| 293 |
+
Sohn, K. Improved deep metric learning with multi-class n-pair loss objective. In Advances in neural information processing systems, pp. 1857-1865, 2016.
|
| 294 |
+
|
| 295 |
+
Tzanetakis, G. and Cook, P. Musical genre classification of audio signals. IEEE Transactions on speech and audio processing, 10 (5):293-302, 2002.
|
| 296 |
+
|
| 297 |
+
Van Den Oord, A., Dieleman, S., and Schrauwen, B. Transfer learning by supervised pre-training for audio-based music classification. In Conference of the International Society for Music Information Retrieval (ISMIR 2014), 2014.
|
| 298 |
+
|
| 299 |
+
Van Den Oord, A., Vinyals, O., et al. Neural discrete representation learning. In Advances in Neural Information Processing Systems, pp. 6306-6315, 2017.
|
| 300 |
+
|
| 301 |
+
Yosinski, J., Clune, J., Bengio, Y., and Lipson, H. How transferable are features in deep neural networks? In Advances in neural information processing systems, pp. 3320-3328, 2014.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/7jxwhNDM0Uv/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,266 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
§ COALA: CO-ALIGNED AUTOENCODERS FOR LEARNING SEMANTICALLY ENRICHED AUDIO REPRESENTATIONS
|
| 2 |
+
|
| 3 |
+
Xavier Favory ${}^{ * }{}^{1}$ Konstantinos Drossos ${}^{ * }{}^{2}$ Tuomas Virtanen ${}^{2}$ Xavier Serra ${}^{1}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
Audio representation learning based on deep neural networks (DNNs) emerged as an alternative approach to hand-crafted features. For achieving high performance, DNNs often need a large amount of annotated data which can be difficult and costly to obtain. In this paper, we propose a method for learning audio representations, aligning the learned latent representations of audio and associated tags. Aligning is done by maximizing the agreement of the latent representations of audio and tags, using a contrastive loss. The result is an audio embedding model which reflects acoustic and semantic characteristics of sounds. We evaluate the quality of our embedding model, measuring its performance as a feature extractor on three different tasks (namely, sound event recognition, and music genre and musical instrument classification), and investigate what type of characteristics the model captures. Our results are promising, sometimes in par with the state-of-the-art in the considered tasks and the embeddings produced with our method are well correlated with some acoustic descriptors.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Legacy audio-based machine learning models were trained using sets of handcrafted features, carefully designed by relying on psychoacoustics and signal processing expert knowledge. Recent approaches are based on learning such features directly from the data, usually by employing deep learning (DL) models (Bengio et al., 2013; Hershey et al., 2017; Pons et al., 2017a), often making use of manually annotated datasets that are tied to specific applications (Tzane-takis & Cook, 2002; Marchand & Peeters, 2016; Salamon et al., 2014). Achieving high performance with DL-based methods and models, often requires sufficient labeled data which can be difficult and costly to obtain, especially for audio signals (Favory et al., 2018). As a way to lift the restrictions imposed by the limited amount of audio data, different published works employ transfer learning on tasks were only small datasets are available (Yosinski et al., 2014; Choi et al., 2017). Usually in such a scenario, an embedding model is first optimized on a supervised task for which a large amount of data is available. Then, this embedding model is used as a pre-trained feature extractor, to extract input features that are used to optimize another model on a different task, where a limited amount of data is available (Van Den Oord et al., 2014; Choi et al., 2017; Pons & Serra, 2019a; Alonso-Jiménez et al., 2020).
|
| 12 |
+
|
| 13 |
+
Recent approaches adopt self-supervised learning, aiming to learn audio representations on a large set of unlabeled multimedia data, e.g. by exploiting audio and visual correspondences (Aytar et al., 2016; Arandjelovic & Zisserman, 2017). Such approaches have the advantage of not requiring manual labelling of large amount of data, and have been successful for learning audio features that can be used in training simple, but competitive classifiers (Cramer et al., 2019). Different approaches focus on learning audio representations by employing a task-specific distance metric and weakly annotated data. For example, the triplet-loss can be used to maximize the agreement between different songs of same artist (Park et al., 2017) or a contrastive loss can enable maximizing the similarity of different transformations of the same example (Chen et al., 2020). Other approaches leverage images and their associated tags to learn content-based representations by aligning autoencoders (Schonfeld et al., 2019). However the alignment is done by optimizing cross-reconstruction objectives, which can be overly complex for learning data representations.
|
| 14 |
+
|
| 15 |
+
In our work we are interested in learning audio representations that can be used for developing general machine listening systems, rather than being tied to a specific audio domain. We take advantage of the massive amount of online audio recordings and their accompanying tag metadata, and learn acoustically and semantically meaningful features. To do so, we propose a new approach inspired from image and the natural language processing fields (Schonfeld et al., 2019; Silberer & Lapata, 2014), but we relax the alignment objective by employing a contrastive loss (Chen et al., 2020), in order to co-regularize the latent representations of two autoencoders, each one learned on a different modality.
|
| 16 |
+
|
| 17 |
+
*Equal contribution ${}^{1}$ Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain ${}^{2}$ Audio Research Group, Tampere University, Tampere, Finland. Correspondence to: Xavier Favory <xavier.favory@upf.edu>.
|
| 18 |
+
|
| 19 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 20 |
+
|
| 21 |
+
The contributions of our work are:
|
| 22 |
+
|
| 23 |
+
* We adapt a recently introduced constrastive loss framework (Chen et al., 2020), and we apply it for audio representation learning in a heterogeneous setting (the embedding models process different modalities).
|
| 24 |
+
|
| 25 |
+
* We propose a learning algorithm, combining a contrastive loss and an autoencoder architecture, for obtaining aligned audio and tag latent representations, in order to learn audio features that reflect both semantic and acoustic characteristics.
|
| 26 |
+
|
| 27 |
+
* We provide a thorough investigation of the performance of the approach, by employing three different classification tasks.
|
| 28 |
+
|
| 29 |
+
* Finally we conduct a correlation analysis of our em-beddings with acoustic features in order to get more understanding of what characteristics they capture.
|
| 30 |
+
|
| 31 |
+
The rest of the paper is as follows. In Section 2 we thoroughly present our proposed method. Section 3 describes the utilized dataset, the tasks and metrics that we employed for the assessment of the performance, the baselines that we compare our method with, and the correlation analysis with acoustic features that we conducted. The results of these evaluation processes are presented and discussed in Section 4. Finally, Section 5 concludes the paper and proposes future research directions.
|
| 32 |
+
|
| 33 |
+
§ 2. PROPOSED METHOD
|
| 34 |
+
|
| 35 |
+
Our method employs two different autoencoders (AEs) and a dataset of multi-labeled annotated (i.e. multiple labels/tags per example) time-frequency (TF) representations of audio signals, $\mathbb{G} = {\left\{ \left( {\mathbf{X}}_{\mathrm{a}}^{q},{\mathbf{y}}_{\mathrm{t}}^{q}\right) \right\} }_{q = 1}^{Q}$ , where ${\mathbf{X}}_{\mathrm{a}}^{q} \in {\mathbb{R}}^{N \times F}$ is the TF representation of audio, consisting of $N$ feature vectors with $F$ log mel-band energies, ${\mathbf{y}}_{\mathrm{t}}^{q} \in \{ 0,1{\} }^{C}$ is the multi-hot encoding of tags for ${\mathbf{X}}_{\mathrm{a}}^{q}$ , out of a total of $C$ different tags, and $Q$ is the amount of paired examples in our dataset. These tags characterize the content of each corresponding audio signal (e.g. "kick", "techno", "hard").
|
| 36 |
+
|
| 37 |
+
The audio TF representation and the associated multi-hot encoded tags of the audio signal, are used as inputs to two different AEs, one targeting to learn low-level acoustic features for audio and the other learning semantic features (for the tags), by employing a bottleneck layer and a reconstruction objective. At the same time, the learned low-level features of the audio signal are aligned with the learned semantic features of the tags, using a contrastive loss. All employed modules are jointly optimized, yielding an audio encoder that provides audio embeddings capturing both low-level acoustic characteristics and semantic information regarding the contents of the audio. An illustration of our method is in Figure 1.
|
| 38 |
+
|
| 39 |
+
< g r a p h i c s >
|
| 40 |
+
|
| 41 |
+
Figure 1. Illustration of our proposed method. ${\mathbf{Z}}_{\mathrm{a}}$ and ${\mathbf{z}}_{\mathrm{t}}$ are aligned through maximizing their agreement and, at the same time, are used for reconstructing back the original inputs.
|
| 42 |
+
|
| 43 |
+
§ 2.1. LEARNING LOW-LEVEL AUDIO AND SEMANTIC FEATURES
|
| 44 |
+
|
| 45 |
+
For learning low-level acoustic features from the input audio TF representation, ${\mathbf{X}}_{\mathrm{a}}{}^{1}$ , we employ a typical AE structure based on convolutional neural networks (CNNs) and on having a reconstruction objective. Since AEs have proven to be effective in unsupervised learning of low-level features in different tasks and especially in audio (Van Den Oord et al., 2017; Amiriparian et al., 2017; Mimilakis et al., 2018; Drossos et al., 2018), our choice of the AE structure followed naturally.
|
| 46 |
+
|
| 47 |
+
The AE that processes ${\mathbf{X}}_{\mathrm{a}}$ is composed of an encoder ${e}_{\mathrm{a}}\left( \cdot \right)$ and a decoder ${d}_{\mathrm{a}}\left( \cdot \right)$ , parameterized by ${\theta }_{e\mathrm{a}}$ and ${\theta }_{d\mathrm{a}}$ respectively. ${e}_{\mathrm{a}}$ accepts ${\mathbf{X}}_{\mathrm{a}}$ as an input and yields the learned latent audio representation, ${\mathbf{Z}}_{\mathrm{a}} \in {\mathbb{R}}_{ \geq 0}^{K \times {T}^{\prime } \times {F}^{\prime }}$ . Then, ${d}_{\mathrm{a}}$ gets ${\mathbf{Z}}_{\mathrm{a}}$ as input and outputs a reconstructed version of ${\mathbf{X}}_{\mathrm{a}},{\widehat{\mathbf{X}}}_{\mathrm{a}}$ , as
$$
{\mathbf{Z}}_{\mathrm{a}} = {e}_{\mathrm{a}}\left( {{\mathbf{X}}_{\mathrm{a}};{\theta }_{e\mathrm{a}}}\right) \text{ , and } \tag{1}
$$

$$
{\widehat{\mathbf{X}}}_{\mathrm{a}} = {d}_{\mathrm{a}}\left( {{\mathbf{Z}}_{\mathrm{a}};{\theta }_{d\mathrm{a}}}\right) . \tag{2}
$$
We model ${e}_{\mathrm{a}}$ using a series of convolutional blocks, where each convolutional block consists of a CNN, a normalization process, and a non-linearity. As the normalization process we employ batch normalization (BN), and as the non-linearity we employ the rectified linear unit (ReLU). The process for each convolutional block is
$$
{\mathbf{H}}^{{l}_{e\mathrm{a}}} = \operatorname{ReLU}\left( {{\mathrm{{BN}}}^{{l}_{e\mathrm{a}}}\left( {{\mathrm{{CNN}}}^{{l}_{e\mathrm{a}}}\left( {\mathbf{H}}^{{l}_{e\mathrm{a}} - 1}\right) }\right) }\right) , \tag{3}
$$
where ${l}_{e\mathrm{a}} = 1,\ldots ,{N}_{\mathrm{{CNN}}}$ is the index of the convolutional block, ${\mathbf{H}}^{{l}_{e\mathrm{a}}} \in {\mathbb{R}}_{ \geq 0}^{{K}_{{l}_{e\mathrm{a}}} \times {T}_{{l}_{e\mathrm{a}}}^{\prime } \times {F}_{{l}_{e\mathrm{a}}}^{\prime }}$ contains the ${K}_{{l}_{e\mathrm{a}}}$ learned feature maps of the ${l}_{e\mathrm{a}}$-th CNN, ${\mathbf{H}}^{{N}_{\mathrm{{CNN}}}} = {\mathbf{Z}}_{\mathrm{a}}$, and ${\mathbf{H}}^{0} = {\mathbf{X}}_{\mathrm{a}}$. The audio decoder, ${d}_{\mathrm{a}}$, is also based on CNNs, but it employs transposed convolutions (Radford et al., 2016; Dumoulin & Visin, 2016) in order to expand ${\mathbf{Z}}_{\mathrm{a}}$ back to the dimensions of ${\mathbf{X}}_{\mathrm{a}}$. To have a decoding scheme analogous to the encoding one, we employ another set of ${N}_{\mathrm{{CNN}}}$ convolutional blocks for ${d}_{\mathrm{a}}$, again with BN and ReLU, and using the same serial processing described by Eq. (3). This processing yields the learned feature maps of the decoder, ${\mathbf{H}}^{{l}_{d\mathrm{a}}} \in {\mathbb{R}}_{ \geq 0}^{{K}_{{l}_{d\mathrm{a}}} \times {T}_{{l}_{d\mathrm{a}}}^{\prime } \times {F}_{{l}_{d\mathrm{a}}}^{\prime }}$, with ${l}_{d\mathrm{a}} = 1 + {N}_{\mathrm{{CNN}}},\ldots ,2{N}_{\mathrm{{CNN}}}$ and ${\mathbf{H}}^{2{N}_{\mathrm{{CNN}}}} = {\widehat{\mathbf{X}}}_{\mathrm{a}}$. To optimize ${e}_{\mathrm{a}}$ and ${d}_{\mathrm{a}}$, we employ the generalized KL divergence, ${D}_{\mathrm{{KL}}}$, and we utilize the following loss function
$$
{\mathcal{L}}_{\mathrm{a}}\left( {{\mathbf{X}}_{\mathrm{a}},{\theta }_{e\mathrm{a}},{\theta }_{d\mathrm{a}}}\right) = {D}_{\mathrm{{KL}}}\left( {{\mathbf{X}}_{\mathrm{a}}\parallel {\widehat{\mathbf{X}}}_{\mathrm{a}}}\right) . \tag{4}
$$
${}^{1}$ For clarity of notation, the superscript $q$ is dropped here and for the rest of the document, unless it is explicitly needed.
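To make the structure concrete, the following is a minimal PyTorch sketch of the audio AE of Eqs. (1)-(4). It is illustrative rather than the released implementation: the number of blocks and the filter count shown here are reduced placeholders (the actual values are listed in the supplementary material), and the small constant in the loss is an assumption for numerical stability.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # One encoder block of Eq. (3): CNN -> BN -> ReLU.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
    )

def deconv_block(c_in, c_out):
    # One decoder block: transposed convolution -> BN -> ReLU.
    return nn.Sequential(
        nn.ConvTranspose2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
    )

class AudioAE(nn.Module):
    def __init__(self, k=128):
        super().__init__()
        # Two blocks shown for brevity; the paper uses N_CNN = 5.
        self.encoder = nn.Sequential(conv_block(1, k), conv_block(k, k))
        self.decoder = nn.Sequential(deconv_block(k, k), deconv_block(k, 1))

    def forward(self, x_a):            # x_a: (batch, 1, T, F)
        z_a = self.encoder(x_a)        # Eq. (1)
        x_hat = self.decoder(z_a)      # Eq. (2)
        return z_a, x_hat

def generalized_kl(x, x_hat, eps=1e-7):
    # Generalized KL divergence D_KL(X || X_hat) of Eq. (4), assuming
    # non-negative inputs (the patches are scaled to [0, 1]).
    x, x_hat = x + eps, x_hat + eps
    return (x * (x.log() - x_hat.log()) - x + x_hat).sum()
```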
Each audio signal represented by ${\mathbf{X}}_{\mathrm{a}}$ is annotated by a set of tags from a vocabulary of size $C$. We want to exploit the semantics of each tag and, at the same time, capture the semantic relationships between tags. For that reason, we opt to use another AE structure, which outputs a learned latent representation of the set of tags of ${\mathbf{X}}_{\mathrm{a}}$ as the learned features from the tags, and then tries to reconstruct the tags from that latent representation. A similar approach has been used in (Silberer & Lapata, 2014), where an AE structure was employed in order to learn, from a $k$-hot encoding of tags/words, an embedding that encapsulates semantic information. Specifically, we represent the set of tags for ${\mathbf{X}}_{\mathrm{a}}$ as a multi-hot vector, ${\mathbf{y}}_{\mathrm{t}} \in \{ 0,1{\} }^{C}$. We again use an encoder ${e}_{\mathrm{t}}$ and a decoder ${d}_{\mathrm{t}}$, to obtain a learned latent representation of ${\mathbf{y}}_{\mathrm{t}}$ as
$$
{\mathbf{z}}_{\mathrm{t}} = {e}_{\mathrm{t}}\left( {{\mathbf{y}}_{\mathrm{t}};{\theta }_{e\mathrm{t}}}\right) \text{ , and } \tag{5}
$$

$$
{\widehat{\mathbf{y}}}_{\mathrm{t}} = {d}_{\mathrm{t}}\left( {{\mathbf{z}}_{\mathrm{t}};{\theta }_{d\mathrm{t}}}\right) , \tag{6}
$$
where ${\mathbf{z}}_{\mathrm{t}} \in {\mathbb{R}}_{ \geq 0}^{M}$ is the learned latent representation of the tags ${\mathbf{y}}_{\mathrm{t}}$ of ${\mathbf{X}}_{\mathrm{a}}$, and ${\widehat{\mathbf{y}}}_{\mathrm{t}}$ is the reconstructed multi-hot encoding of the same tags. ${e}_{\mathrm{t}}$ consists of a set of trainable feed-forward linear layers, where each layer is followed by a BN and a ReLU, similarly to Eq. (3). That is, if ${\mathrm{{FNN}}}^{{l}_{\mathrm{t}}}$ is the ${l}_{\mathrm{t}}$-th feed-forward linear layer, then
$$
{\mathbf{h}}^{{l}_{\mathrm{t}}} = \operatorname{ReLU}\left( {{\mathrm{{BN}}}^{{l}_{\mathrm{t}}}\left( {{\mathrm{{FNN}}}^{{l}_{\mathrm{t}}}\left( {\mathbf{h}}^{{l}_{\mathrm{t}} - 1}\right) }\right) }\right) , \tag{7}
$$
where ${l}_{\mathrm{t}} = 1,\ldots ,{N}_{\mathrm{{FNN}}}$, ${\mathbf{h}}^{{N}_{\mathrm{{FNN}}}} = {\mathbf{z}}_{\mathrm{t}}$, and ${\mathbf{h}}^{0} = {\mathbf{y}}_{\mathrm{t}}$. To obtain the reconstructed version ${\widehat{\mathbf{y}}}_{\mathrm{t}}$ of ${\mathbf{y}}_{\mathrm{t}}$ from ${\mathbf{z}}_{\mathrm{t}}$, we use the decoder ${d}_{\mathrm{t}}$, which is modeled analogously to ${e}_{\mathrm{t}}$ and contains another set of ${N}_{\mathrm{{FNN}}}$ feed-forward linear layers. ${d}_{\mathrm{t}}$ processes ${\mathbf{z}}_{\mathrm{t}}$ similarly to Eq. (7), with ${\mathbf{h}}^{1 + {N}_{\mathrm{{FNN}}}}$ being the output of the first feed-forward linear layer of ${d}_{\mathrm{t}}$, and ${\mathbf{h}}^{2{N}_{\mathrm{{FNN}}}} = {\widehat{\mathbf{y}}}_{\mathrm{t}}$. To optimize ${e}_{\mathrm{t}}$ and ${d}_{\mathrm{t}}$ we utilize the loss ${\mathcal{L}}_{\mathrm{t}}\left( {{\mathbf{y}}_{\mathrm{t}},{\theta }_{e\mathrm{t}},{\theta }_{d\mathrm{t}}}\right) = {CE}\left( {{\mathbf{y}}_{\mathrm{t}},{\widehat{\mathbf{y}}}_{\mathrm{t}}}\right)$, where ${CE}$ is the cross-entropy function.
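A matching sketch of the tag AE follows, with the encoder layer sizes (512, 512, 1152) and the input dimension $C = 1000$ taken from the supplementary material; the mirrored decoder layout and the use of `BCEWithLogitsLoss` as the multi-label cross-entropy are our assumptions.

```python
import torch.nn as nn

def fnn_block(d_in, d_out):
    # One block of Eq. (7): FNN -> BN -> ReLU.
    return nn.Sequential(nn.Linear(d_in, d_out), nn.BatchNorm1d(d_out), nn.ReLU())

class TagAE(nn.Module):
    def __init__(self, c=1000, m=1152):
        super().__init__()
        self.encoder = nn.Sequential(
            fnn_block(c, 512), fnn_block(512, 512), fnn_block(512, m)
        )
        # Decoder mirrors the encoder; the last layer outputs logits.
        self.decoder = nn.Sequential(
            fnn_block(m, 512), fnn_block(512, 512), nn.Linear(512, c)
        )
        self.ce = nn.BCEWithLogitsLoss()   # cross-entropy over multi-hot targets

    def forward(self, y_t):                # y_t: (batch, C) multi-hot
        z_t = self.encoder(y_t)            # Eq. (5)
        y_logits = self.decoder(z_t)       # Eq. (6), as logits
        return z_t, y_logits

    def loss(self, y_t):
        z_t, y_logits = self(y_t)
        return self.ce(y_logits, y_t.float())
```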
§ 2.2. ALIGNMENT OF ACOUSTIC AND SEMANTIC FEATURES

One of the main targets of our method is to infuse semantic information from the latent representation of the tags into the learned acoustic features of the audio. To do this, we maximize the agreement between (i.e. align) the paired latent representations of the audio signal, ${\mathbf{Z}}_{\mathrm{a}}^{q}$, and the corresponding tags, ${\mathbf{z}}_{\mathrm{t}}^{q}$, inspired by previous and related work on image processing (Feng et al., 2014; Schonfeld et al., 2019), and by using a contrastive loss, similarly to (Sohn, 2016; Chen et al., 2020). Aligning these two latent representations (by pushing ${\mathbf{Z}}_{\mathrm{a}}^{q}$ towards ${\mathbf{z}}_{\mathrm{t}}^{q}$) will infuse ${\mathbf{Z}}_{\mathrm{a}}^{q}$ with information from ${\mathbf{z}}_{\mathrm{t}}^{q}$. This task is expected to be difficult, since some acoustic aspects may not be covered by the tags, and some of the existing tags may be wrong or uninformative. Therefore, we utilize two affine transforms, and we align the outputs of these transforms. Specifically, we utilize the affine transforms ${\mathrm{{AFF}}}_{\mathrm{a}}$ and ${\mathrm{{AFF}}}_{\mathrm{t}}$, parameterized by ${\theta }_{\mathrm{{af}} - \mathrm{a}}$ and ${\theta }_{\mathrm{{af}} - \mathrm{t}}$ respectively, as
$$
{\mathbf{\Phi }}_{\mathrm{a}} = {\mathrm{{AFF}}}_{\mathrm{a}}\left( {{\mathbf{Z}}_{\mathrm{a}};{\theta }_{\mathrm{{af}} - \mathrm{a}}}\right) \text{ , and } \tag{8}
$$

$$
{\mathbf{\phi }}_{\mathrm{t}} = {\mathrm{{AFF}}}_{\mathrm{t}}\left( {{\mathbf{z}}_{\mathrm{t}};{\theta }_{\mathrm{{af}} - \mathrm{t}}}\right) , \tag{9}
$$
where ${\mathbf{\Phi }}_{\mathrm{a}} \in {\mathbb{R}}_{ \geq 0}^{K \times {T}^{\prime } \times {F}^{\prime }}$ and ${\mathbf{\phi }}_{\mathrm{t}} \in {\mathbb{R}}_{ \geq 0}^{M}$. Then, since ${\mathbf{\Phi }}_{\mathrm{a}}$ is a matrix and ${\mathbf{\phi }}_{\mathrm{t}}$ a vector, we flatten ${\mathbf{\Phi }}_{\mathrm{a}}$ to ${\mathbf{\phi }}_{\mathrm{a}} \in {\mathbb{R}}_{ \geq 0}^{K{T}^{\prime }{F}^{\prime }}$. To align ${\phi }_{\mathrm{a}}$ with its paired ${\phi }_{\mathrm{t}}$, we utilize randomly (and without repetition) sampled minibatches ${\mathbb{G}}_{b} = {\left\{ \left( {\mathbf{X}}_{\mathrm{a}}^{b},{\mathbf{y}}_{\mathrm{t}}^{b}\right) \right\} }_{b = 1}^{{N}_{\mathrm{b}}}$ from our dataset $\mathbb{G}$, where ${N}_{\mathrm{b}}$ is the number of paired examples in the minibatch ${\mathbb{G}}_{b}$. For each minibatch ${\mathbb{G}}_{b}$, we align each ${\phi }_{\mathrm{a}}^{b}$ with its paired ${\phi }_{\mathrm{t}}^{b}$ and, at the same time, we optimize ${e}_{\mathrm{a}}$, ${d}_{\mathrm{a}}$, ${e}_{\mathrm{t}}$, ${d}_{\mathrm{t}}$, ${\mathrm{{AFF}}}_{\mathrm{a}}$, and ${\mathrm{{AFF}}}_{\mathrm{t}}$. To do this, we follow (Chen et al., 2020) and we use the contrastive loss function
$$
{\mathcal{L}}_{\xi }\left( {{\mathbb{G}}_{b},{\mathbf{\Theta }}_{\mathrm{c}}}\right) = \mathop{\sum }\limits_{{b = 1}}^{{N}_{\mathrm{b}}} - \log \frac{\Xi \left( {{\phi }_{\mathrm{a}}^{b},{\phi }_{\mathrm{t}}^{b},\tau }\right) }{\mathop{\sum }\limits_{{i = 1}}^{{N}_{\mathrm{b}}}{\mathbb{1}}_{\left\lbrack i \neq b\right\rbrack }\Xi \left( {{\phi }_{\mathrm{a}}^{b},{\phi }_{\mathrm{t}}^{i},\tau }\right) }\text{ , where } \tag{10}
$$

$$
\Xi \left( {\mathbf{a},\mathbf{b},\tau }\right) = \exp \left( {\operatorname{sim}\left( {\mathbf{a},\mathbf{b}}\right) {\tau }^{-1}}\right) , \tag{11}
$$

$$
\operatorname{sim}\left( {\mathbf{a},\mathbf{b}}\right) = {\mathbf{a}}^{\top }\mathbf{b}{\left( \parallel \mathbf{a}\parallel \parallel \mathbf{b}\parallel \right) }^{-1}, \tag{12}
$$
${\Theta }_{\mathrm{c}} = \left\{ {{\theta }_{e\mathrm{a}},{\theta }_{\mathrm{{af}} - \mathrm{a}},{\theta }_{e\mathrm{t}},{\theta }_{\mathrm{{af}} - \mathrm{t}}}\right\}$, ${\mathbb{1}}_{\mathrm{A}}$ is the indicator function with ${\mathbb{1}}_{\mathrm{A}} = 1$ iff $\mathrm{A}$ holds and 0 otherwise, and $\tau$ is a temperature hyper-parameter. Finally, we jointly optimize ${\theta }_{e\mathrm{a}},{\theta }_{d\mathrm{a}},{\theta }_{e\mathrm{t}}$, and ${\theta }_{d\mathrm{t}}$, for each minibatch ${\mathbb{G}}_{b}$, minimizing
$$
{\mathcal{L}}_{\text{total}}\left( {{\mathbb{G}}_{b},\mathbf{\Theta }}\right) = {\lambda }_{\mathrm{a}}\mathop{\sum }\limits_{{b = 1}}^{{N}_{\mathrm{b}}}{\mathcal{L}}_{\mathrm{a}}\left( {{\mathbf{X}}_{\mathrm{a}}^{b},{\mathbf{\Theta }}_{\mathrm{a}}}\right) + {\lambda }_{\mathrm{t}}\mathop{\sum }\limits_{{b = 1}}^{{N}_{\mathrm{b}}}{\mathcal{L}}_{\mathrm{t}}\left( {{\mathbf{y}}_{\mathrm{t}}^{b},{\mathbf{\Theta }}_{\mathrm{t}}}\right) + {\lambda }_{\xi }{\mathcal{L}}_{\xi }\left( {{\mathbb{G}}_{b},{\mathbf{\Theta }}_{\mathrm{c}}}\right) \text{ , } \tag{13}
$$
where ${\Theta }_{\mathrm{a}} = \left\{ {{\theta }_{e\mathrm{a}},{\theta }_{d\mathrm{a}}}\right\}$, ${\Theta }_{\mathrm{t}} = \left\{ {{\theta }_{e\mathrm{t}},{\theta }_{d\mathrm{t}}}\right\}$, $\Theta$ is the union of the ${\Theta }_{ \star }$ sets in Eq. (13), and each ${\lambda }_{ \star }$ is a hyper-parameter used for numerically balancing the different learning signals/losses. After the minimization of ${\mathcal{L}}_{\text{total}}$, we use ${e}_{\mathrm{a}}$ as a pre-learned feature extractor for different audio classification tasks.
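For reference, a hedged sketch of the contrastive term of Eqs. (10)-(12), written in the style of the NT-Xent loss of (Chen et al., 2020); the variable names and the log-space formulation are ours.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(phi_a, phi_t, tau=0.1):
    # phi_a, phi_t: (N_b, D) projected audio/tag embeddings, row-wise paired.
    sim = F.normalize(phi_a, dim=1) @ F.normalize(phi_t, dim=1).T   # Eq. (12)
    logits = sim / tau                                              # Eq. (11), in log-space
    n_b = logits.shape[0]
    # Numerator: matching pairs on the diagonal; denominator: the
    # non-matching tag embeddings (i != b), as in Eq. (10).
    mask = torch.eye(n_b, dtype=torch.bool, device=logits.device)
    off_diag = logits.masked_fill(mask, float("-inf"))
    return -(logits.diagonal() - torch.logsumexp(off_diag, dim=1)).sum()
```

The total objective of Eq. (13) then simply adds this term to the two reconstruction losses, each scaled by its $\lambda$.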
§ 3. EVALUATION

We conduct an ablation study where we compare different methods for learning audio embeddings on their classification performance in different tasks, using the embeddings from the employed methods as input. This allows us to evaluate the benefit of using the alignment and the reconstruction objectives in our method. We consider a traditional set of hand-crafted features as a low anchor. Additionally, we perform a correlation analysis with a set of acoustic features in order to understand what kind of acoustic properties are reflected in the learned embeddings.

§ 3.1. PRE-TRAINING DATASET AND DATA PRE-PROCESSING

For creating our pre-training dataset $\mathbb{G}$, we collect all sounds from Freesound (Font et al., 2013) that have a duration of at most 10 seconds. We remove sounds that are used in any of the datasets of our downstream tasks. We apply a uniform sampling rate of 22 kHz and a length of 10 s to all collected sounds, by resampling and zero-padding as needed. We extract $F = 96$ log-scaled mel-band energies using sliding windows of 1024 samples ($\approx 46$ ms), with 50% overlap and the Hamming windowing function. We create overlapping patches of $T = 96$ feature vectors ($\approx 2.2$ s), using a step of 12 vectors for the overlap. Then, we select the $T \times F$ patch with the maximum energy. This process is simple, but we assume that in many cases the associated tags will refer to salient events present in regions of high energy. We process the tags associated with the audio clips by first removing any stop-words and converting any plural forms of nouns to singular. We remove tags that occur in more than 70% of the sounds, as they can be considered less informative, and consider the $C = 1000$ remaining most occurring tags, which we encode using the multi-hot scheme. Finally, we discard sounds that were left with no tag after this filtering process. This process generated $Q = 189896$ spectrogram patches for our dataset $\mathbb{G}$. 10% of these patches are kept for validation, and all the patches are scaled to values between 0 and 1.
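As an illustration, the patch extraction just described can be sketched with librosa as follows; the padding details, the flooring constant before the logarithm, and the per-patch min-max scaling (in practice this would use dataset-wide statistics) are assumptions.

```python
import numpy as np
import librosa

def max_energy_patch(path, sr=22050, n_mels=96, t=96, step=12):
    # Load, resample, and zero-pad/trim to 10 s.
    y, _ = librosa.load(path, sr=sr, duration=10.0)
    y = np.pad(y, (0, max(0, 10 * sr - len(y))))
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=1024, hop_length=512, window="hamming", n_mels=n_mels
    )                                                   # (n_mels, n_frames)
    energy = mel.sum(axis=0)                            # per-frame energy
    starts = range(0, mel.shape[1] - t + 1, step)       # slide with a step of 12
    best = max(starts, key=lambda s: energy[s:s + t].sum())
    patch = np.log(mel[:, best:best + t] + 1e-10).T     # (T, F) log mel energies
    return (patch - patch.min()) / (patch.max() - patch.min() + 1e-10)
```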
We consider three different cases for evaluating the benefit of the alignment and the reconstruction objectives. The first is the method presented in Section 2, termed AE-C. In the second, termed E-C, we do not employ ${d}_{\mathrm{a}}$ and ${d}_{\mathrm{t}}$, and we optimize ${e}_{\mathrm{a}}$ using only ${\mathcal{L}}_{\xi }$, similarly to (Chen et al., 2020). The third, termed CNN, is composed of ${e}_{\mathrm{a}}$ followed by two fully connected layers, and is optimized for directly predicting the tag vector ${\mathbf{y}}_{\mathrm{t}}$ using the ${CE}$ function. Finally, we employ the first 20 mel-frequency cepstral coefficients (MFCCs) with their $\Delta$s and $\Delta\Delta$s as a low anchor, using means and standard deviations through time, and we term this case MFCCs.

§ 3.2. DOWNSTREAM CLASSIFICATION TASKS

We consider three different audio classification tasks: i) sound event recognition/tagging (SER), ii) music genre classification (MGC), and iii) musical instrument classification (MIC). For SER, we use the UrbanSound8K dataset (US8K) (Salamon et al., 2014), which consists of around 8000 single-labeled sounds of at most 4 seconds, organized in 10 classes. We use the provided folds for cross-validation. For MGC, we use the fault-filtered version of the GTZAN dataset (Tzanetakis & Cook, 2002; Kereliuk et al., 2015), consisting of single-labeled music excerpts of 30 seconds, split into pre-computed sets of 443 songs for training and 290 for testing. Finally, for MIC, we use the NSynth dataset (Engel et al., 2017), which consists of more than 300k sound samples organized in 10 instrument families. However, because we are interested in seeing how our model performs with a relatively low amount of training data, we randomly sample from NSynth a balanced set of 20k samples from the training set, corresponding to approximately 7% of the original set. The evaluation set is kept the same.
For the above tasks and datasets, we use non-overlapping frames of the audio clips, computed similarly to the pre-training dataset, as input to the different methods in order to obtain the embeddings. Then, these embeddings are aggregated into a single vector (e.g. of dimensionality 1152 for our ${e}_{\mathrm{a}}$) using the mean statistic, and are used as input to a classifier that is optimized for each corresponding task. Embeddings and MFCCs vectors are standardized to zero mean and unit variance, using statistics calculated from the training split of each task. As the classifier for each of the different tasks, we use a multi-layer perceptron (MLP) with one hidden layer of 256 features, similar to what is used in (Cramer et al., 2019). To obtain an unbiased evaluation of our method, we repeat the training procedure of the MLP in each task 10 times and report the mean accuracies.
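A sketch of this evaluation protocol with scikit-learn (the MLP settings beyond the single 256-unit hidden layer are assumptions):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

def clip_embedding(frame_embeddings):
    # frame_embeddings: (n_frames, 1152) outputs of e_a for one clip;
    # aggregate them into a single vector with the mean statistic.
    return np.mean(frame_embeddings, axis=0)

def evaluate(train_x, train_y, test_x, test_y):
    scaler = StandardScaler().fit(train_x)   # statistics from the training split only
    clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
    clf.fit(scaler.transform(train_x), train_y)
    return clf.score(scaler.transform(test_x), test_y)
```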
§ 3.3. CORRELATION ANALYSIS WITH ACOUSTIC FEATURES

We perform a correlation analysis using a similarity measure based on Canonical Correlation Analysis (CCA) (Hardoon et al., 2004), to investigate the correlation of the output embeddings of our method with various low-level acoustic features. Similarly to (Raghu et al., 2017), we use sounds from the validation set of the pre-training dataset $\mathbb{G}$, and we compute the canonical correlation similarity (CCS) of our audio embedding ${\mathbf{Z}}_{\mathrm{a}}$ with statistics of acoustic features computed with the librosa library (McFee et al., 2015). These features correspond to MFCCs, chromagram, spectral centroid, and spectral bandwidth, all computed at the frame level.
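A rough sketch of such a CCA-based similarity, in the spirit of (Raghu et al., 2017); the number of canonical components is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_similarity(emb, feat, n_components=5):
    # emb: (n_sounds, D) flattened embeddings; feat: (n_sounds, d) acoustic
    # feature statistics. Returns the mean canonical correlation.
    cca = CCA(n_components=n_components).fit(emb, feat)
    u, v = cca.transform(emb, feat)
    corrs = [np.corrcoef(u[:, i], v[:, i])[0, 1] for i in range(n_components)]
    return float(np.mean(corrs))
```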
§ 4. RESULTS

Table 1 presents the performance of the different embeddings and our MFCCs baseline, together with results reported in the literature, which are briefly explained in the supplementary material. In all the tasks, the AE-C and E-C embeddings yielded better results than the MFCCs baseline, showing that it is possible to learn meaningful audio representations by taking advantage of tag metadata. The CNN case, however, does not even reach the performance of the MFCCs features. This clearly indicates the benefit of our approach for building general audio representations by leveraging user-provided noisy tags. When comparing the different proposed embeddings, we see that the AE-C case consistently leads to better results. For the MIC (NSynth) task, combining the reconstruction and contrastive objectives (i.e. the AE-C case) brings important benefits. For the MGC (GTZAN) task, these benefits are not as pronounced and, finally, when looking at the SER (US8K) task, adding the reconstruction objective does not improve the results much. Our assumption is that recognizing musical instruments can be more easily done using lower-level features reflecting acoustic characteristics of the sounds, and that the reconstruction objective imposed by the autoencoder architecture forces the embedding to reflect low-level characteristics present in the spectrogram. However, for recognizing urban sounds or musical genres, a feature that mainly reflects semantic information is needed, which seems to be learned successfully when considering the contrastive objective.
Table 1. Average mean accuracies for SER, MGC, and MIC. Additional performances are taken from the literature (Cramer et al., 2019; Salamon & Bello, 2017; Pons & Serra, 2019b; Lee et al., 2018; Ramires & Serra, 2019).

| Model | US8K | GTZAN | NSynth |
| --- | --- | --- | --- |
| MFCCs | 65.8 | 49.8 | 62.6 |
| AE-C | 72.7 | 60.7 | 73.1 |
| E-C | 72.5 | 58.9 | 69.5 |
| CNN | 48.4 | 47.0 | 56.4 |
| OpenL3 | 78.2 | - | - |
| VGGish | 73.4 | - | - |
| DeepConv | 79.0 | - | - |
| rVGG | 70.7 | 59.7 | - |
| sampleCNN | - | 82.1 | - |
| smallCNN | - | - | 73.8 |
Table 2. CCA correlation scores between the embedding model outputs and statistics of acoustic features.

|  | MFCCs |  |  | Chromagram |  |  |
| --- | --- | --- | --- | --- | --- | --- |
|  | mean | var | skew | mean | var | skew |
| AE-C | 0.84 | 0.51 | 0.42 | 0.48 | 0.37 | 0.40 |
| E-C | 0.58 | 0.49 | 0.39 | 0.38 | 0.36 | 0.32 |
| CNN | 0.73 | 0.43 | 0.32 | 0.59 | 0.33 | 0.48 |

|  | Spectral Centroid |  |  | Spectral Bandwidth |  |  |
| --- | --- | --- | --- | --- | --- | --- |
|  | mean | var | skew | mean | var | skew |
| AE-C | 0.97 | 0.87 | 0.80 | 0.96 | 0.86 | 0.84 |
| E-C | 0.93 | 0.82 | 0.76 | 0.92 | 0.82 | 0.81 |
| CNN | 0.95 | 0.76 | 0.74 | 0.91 | 0.72 | 0.80 |
Comparing our method to others for SER, we can see that we are slightly outperformed by VGGish (Hershey et al., 2017; Gemmeke et al., 2017), according to the results taken from (Cramer et al., 2019), which has been trained with millions of manually annotated audio files using predefined categories. This shows that our approach, which only takes advantage of small-scale content with its original tag metadata, is very promising for learning competitive audio features. However, our model is still far from reaching the performances given by OpenL3 or the current state-of-the-art DeepConv with data augmentation. Similarly, in MGC, the sampleCNN classifier, pre-trained on the Million Song Dataset (MSD) (Lee et al., 2018), produces much better results than our approach. However, all these models have been either trained with much more data than ours, or use a more powerful classifier. Finally, the NSynth dataset was originally released in order to train generative models rather than classifiers. Still, the results from (Ramires & Serra, 2019) show that our approach, trained using around 7% of the training data, is only slightly outperformed by a CNN trained with all the training data (smallCNN).

Table 2 shows the correlation of the different embeddings ${\mathbf{Z}}_{\mathrm{a}}$ with the mean, the variance, and the skewness of the different acoustic feature vectors. Overall, we observe a consistent increase of the correlation between the acoustic features and the embeddings trained with models containing an AE structure. This suggests that the reconstruction objective enables learning features that reflect some low-level acoustic characteristics of audio signals, which makes the embedding more valuable as a general-purpose feature. More specifically, there is a large increase in the correlation between the mean of the MFCCs and the models that contain an AE structure, showing that they can capture more timbral characteristics of the signal. However, the variance and skewness correlations did not increase considerably, which can mean that our embeddings fail to capture temporal cues. Considering chromagrams, which reflect the harmonic content of a sound, we see little improvement with the AE models. This suggests that our embeddings lack some important musical characteristics. Regarding the spectral centroid and bandwidth, we only observe a slight increase of the correlations with the AE-based embeddings.
§ 5. CONCLUSIONS

In this work we present a method for learning an audio representation that can capture acoustic and semantic characteristics for a wide range of sounds. We utilise two heterogeneous autoencoders (AEs), one taking an audio spectrogram as input and the other processing a tag representation. These AEs are jointly trained, and a contrastive loss enables aligning their latent representations by leveraging associated pairs of audio and tags. We evaluate our method by conducting an ablation study, where we compare different methods for learning audio representations over three different classification tasks. We also perform a correlation analysis with acoustic features in order to understand what type of acoustic characteristics the embedding captures.

The results indicate that combining reconstruction objectives with a contrastive learning framework enables learning audio features that reflect both semantic and lower-level acoustic characteristics of sounds, which makes them suitable for general audio machine listening applications. Future work may focus on improving the network models, for instance by using audio architectures that can capture more of the temporal aspects and dynamics present in audio signals.
§ SUPPLEMENTARY MATERIAL

§ CODE AND DATA

The code of our method is available online at: https://github.com/xavierfav/coala. We provide the pre-training dataset $\mathbb{G}$ online and publicly at: https://zenodo.org/record/3887261. Sounds were accessed from the Freesound API on the 7th of May, 2019.

§ UTILIZED HYPER-PARAMETERS, TRAINING PROCEDURE, AND MODELS

For the audio autoencoder, we use ${N}_{\mathrm{{CNN}}} = 5$ convolutional blocks, each one containing ${K}_{{l}_{e\mathrm{a}}} = 128$ filters of shape $4 \times 4$ with a stride of $2 \times 2$, yielding an embedding ${\phi }_{\mathrm{a}}$ of size 1152. This audio encoder model has approximately 2.4M parameters. The tag autoencoder is composed of ${N}_{\mathrm{{FNN}}} = 3$ layers of size 512, 512, and 1152, accepting a multi-hot vector of dimension 1000 as input. We train the models for 200 epochs using a minibatch size ${N}_{\mathrm{b}} = 128$ and an SGD optimizer with a learning rate of 0.005. We utilize the validation set to set the different $\lambda$'s of Eq. (13) and the contrastive loss temperature parameter $\tau$, to ${\lambda }_{\mathrm{a}} = {\lambda }_{\mathrm{t}} = 5$, ${\lambda }_{\xi } = 10$, and $\tau = 0.1$. We add dropout regularization with a rate of 25% after each activation layer to avoid overfitting during training. The CNN baseline, which is trained by directly predicting the multi-hot tag vectors from the audio spectrogram, follows the same architecture as the encoder of the audio autoencoder. For training it, we add 2 fully connected layers and train for 20 epochs, using a minibatch size ${N}_{\mathrm{b}} = 128$ and an SGD optimizer with a learning rate of 0.005 as well.
§ TAG PROCESSING

Removing stop-words from the sound tags is done using the NLTK Python library (https://www.nltk.org/). Converting any plural forms of nouns to singular is done with the inflect Python library (https://github.com/jazzband/inflect). Additionally, we transform all tags to lowercase.
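This pipeline can be sketched as follows (assuming `nltk.download("stopwords")` has been run once):

```python
import inflect
from nltk.corpus import stopwords

_engine = inflect.engine()
_stop = set(stopwords.words("english"))

def normalize_tags(tags):
    out = []
    for tag in tags:
        tag = tag.lower()                      # lowercase all tags
        if tag in _stop:
            continue                           # drop stop-words
        singular = _engine.singular_noun(tag)  # returns False if already singular
        out.append(singular or tag)
    return out

# normalize_tags(["Kicks", "the", "techno"]) -> ["kick", "techno"]
```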
§ MODELS FROM THE LITERATURE

OpenL3 (Cramer et al., 2019) is an open source implementation of Look, Listen, and Learn (L3-Net) (Arandjelovic & Zisserman, 2017). It consists of an embedding model using blocks of convolutional and max-pooling layers, trained through self-supervised learning of audio-visual correspondence in videos from YouTube. The model has around 4.7M parameters and computes embedding vectors of size 6144. In (Cramer et al., 2019), the authors report the classification accuracies of different variants of the model used as a feature extractor combined with an MLP classifier on the US8K dataset. Their mean accuracy is 78.2%.

VGGish (Hershey et al., 2017; Gemmeke et al., 2017) consists of an audio-based CNN model, a modified version of the VGGNet model (Simonyan & Zisserman, 2014), trained to predict video tags from the Youtube-8M dataset (Abu-El-Haija et al., 2016). The model has around 62M parameters and computes embedding vectors of size 128. Its accuracy when used as a feature extractor combined with an MLP classifier on the US8K dataset is reported in (Cramer et al., 2019) as 73.4%.
DeepConv (Salamon & Bello, 2017) is a deep neural network composed of convolutional and max-pooling layers. When trained with data augmentation on the US8K dataset, it achieved 79.0% accuracy.

rVGG (Pons & Serra, 2019b) corresponds to a non-trained (randomly weighted) VGGish model. The referenced work experiments with using it as a feature extractor, comparing embeddings from different layers of the network. The best accuracies on US8K and GTZAN (fault-filtered), when combined with an SVM classifier, were reported as 70.7% and 59.7% respectively, using an embedding vector of size 3585.

sampleCNN (Lee et al., 2018) is a deep neural network that takes the raw waveform as input, is composed of many small 1D convolutional layers, and has been designed for music classification tasks. When pre-trained on the Million Song Dataset (Bertin-Mahieux et al., 2011), this model reached 82.1% accuracy on the GTZAN dataset (fault-filtered).

smallCNN (Pons et al., 2017b) is a neural network composed of one CNN layer with filters of different sizes that can capture timbral characteristics of the sounds. It is combined with pooling operations and a fully-connected layer in order to predict the labels. In (Ramires & Serra, 2019), it was trained on the NSynth dataset to predict the instrument family classes and was reported to reach 73.8% accuracy.

§ ACKNOWLEDGEMENT

X. Favory, K. Drossos, and T. Virtanen would like to acknowledge CSC Finland for computational resources. The authors would also like to thank all the Freesound users that have been sharing very valuable content for many years. Xavier Favory is also grateful for the GPU donated by NVidia.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/CSqjS121nsU/Initial_manuscript_md/Initial_manuscript.md
# ASR free End-to-End SLU using the Transformer
Anonymous Authors ${}^{1}$

## Abstract

End-to-end spoken language understanding (SLU) systems directly map speech to intent through a single trainable model, whereas conventional SLU systems use Automatic Speech Recognition (ASR) to convert speech to text and utilize Natural Language Understanding (NLU) to get the intent. In this paper, we show how a transformer-based architecture can be used for building end-to-end SLU systems. We conducted experiments on the Fluent Speech Commands (FSC) dataset, where intents are formed as combinations of three slots, namely action, object, and location. We also demonstrate how state-of-the-art results can be obtained using a combination of various data augmentation methods.
## 1. Introduction

With the growing demand for voice interfaces for various smart devices (e.g. smartphone, smart TV, in-car navigation system), Spoken Language Understanding (SLU) has drawn a great deal of attention in recent years. Traditional SLU approaches use the text transcribed by an automatic speech recognition (ASR) system to extract the intent of the user and the slots describing the query (Mesnil et al., 2015). The main problem with traditional SLU systems is that errors made while transcribing the audio are forwarded downstream and affect the intent classification and slot-filling tasks. One way to avoid this problem is to combine ASR and NLU (referred to as end-to-end SLU) and directly map speech to intent (Chen et al., 2018), (Lugosch et al., 2019). In this method, the model is first pre-trained to predict ASR targets (words and phonemes). The word and phoneme classifiers are then discarded, and the entire model is trained end-to-end on the supervised SLU task. The pre-trained model weights can be either frozen or fine-tuned during the SLU task training. In this paper, we propose an ASR-free end-to-end spoken language understanding system using the transformer (Vaswani et al., 2017). The model does not learn any ASR-level representation or use any pre-trained ASR model. We use transformer encoder blocks with a convolution layer. Recurrent neural network (RNN) based approaches, particularly gated recurrent unit (GRU) and long short-term memory (LSTM) models, have achieved good performance for most of these tasks. But compared with RNNs, the transformer-based encoder can capture long-term dependencies better and can produce even better results. We use additional data augmentations (e.g. changing pitch, reverberation, changing speed, noise injection) together with SpecAugment (Park et al., 2019) (time masking and frequency masking) and obtain significantly lower classification error compared to other approaches. Following (Palogiannidi et al., 2019), instead of considering intents as the classes, we consider them as tuples of slots, each having an associated softmax layer. This technique converts a single-label classification task into a multi-label classification task and thus helps in reducing the number of classes. In the case of the Fluent Speech Commands dataset, we have a three-slot tuple (action, object, location). An intent is predicted correctly if all three slots corresponding to that intent are predicted correctly.
## 2. Related Work

(Lugosch et al., 2019) suggested a pre-training approach for end-to-end SLU models and also introduced the Fluent Speech Commands dataset. They used a single trainable model that directly maps speech to intent without explicitly producing a text transcript. They showed that the pre-training techniques boost efficiency for both large and small SLU training sets.

(Wang et al., 2020) proposed an unsupervised pre-training approach for the SLU component of an end-to-end SLU system to preserve semantic features from large-scale raw audio. They first pre-train the AM component using the approach of (Lugosch et al., 2019) and then feed the AM output to a softmax layer to get a posterior distribution. This posterior distribution is used as the input of the next SLU component. (Palogiannidi et al., 2019) uses an RNN-based end-to-end SLU system for intent classification. Unlike (Lugosch et al., 2019), (Palogiannidi et al., 2019) did not make use of any ASR-level prediction (e.g. phonemes, characters, words) and handles intents as tuples of slots. Additionally, this approach uses various data augmentation methods and achieves state-of-the-art results. Our approach is closely related to (Palogiannidi et al., 2019), but rather than using an LSTM we make use of transformer encoder blocks.
---

${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.

Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute.

---
## 3. Model Architecture

The model consists of three parts: (1) a convolution layer, (2) a transformer block, and (3) a classifier. The overall architecture of our end-to-end SLU model is shown in Figure 1. (Wang et al., 2019) discarded the sinusoidal positional encoding for transformers, used convolutionally learned input representations, and obtained very good results on the automatic speech recognition task. Following this, we use a VGG-like convolution block (Simonyan & Zisserman, 2014) before the transformer encoder. The following sections describe the three parts separately.
### 3.1. Convolution layer

In order to make sense of a sequence, the model needs to know the position of each element in the sequence. To address this, the transformer uses a sinusoidal positional encoding. We replace the widely used sinusoidal positional encoding with a convolution layer. We believe that adding early convolutional layers allows the model to learn a relative positional encoding and helps the model identify the right order of the input sequence. We use 2-D convolutional blocks with layer normalization and a ReLU activation after each convolutional layer. Each convolutional block contains two convolutional layers followed by a max-pooling layer. The architecture is shown in Figure 2.
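A hedged PyTorch sketch of one such convolutional block; the kernel sizes, channel counts, and the exact shape over which layer normalization is applied are assumptions, since they are not specified above.

```python
import torch.nn as nn

class VGGConvBlock(nn.Module):
    # Two 2-D convolutions, each followed by layer norm and ReLU,
    # then a max-pooling layer, as described in Section 3.1.
    def __init__(self, c_in, c_out, freq_dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
            nn.LayerNorm(freq_dim),       # normalizes over the frequency axis
            nn.ReLU(),
            nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
            nn.LayerNorm(freq_dim),
            nn.ReLU(),
            nn.MaxPool2d(2),              # halves the time and frequency resolution
        )

    def forward(self, x):                 # x: (batch, channels, time, freq)
        return self.block(x)
```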
### 3.2. Transformer block

The input to the transformer encoder is the output of the convolution block. We describe the details of the transformer encoder block in this section.

#### 3.2.1. Scaled Dot-Product Attention

Self-attention is a mechanism that relates different positions of the input sequence to compute representations for the inputs. It uses three inputs, namely queries (Q), keys (K), and values (V). The output for one query is calculated as a weighted sum of the values, where the weights are computed by taking the dot products of the query with all keys, scaling each by $\frac{1}{\sqrt{{d}_{k}}}$, and applying a softmax function. The attention can be formulated as:
$$
\operatorname{Attention}\left( {Q, K, V}\right) = \operatorname{softmax}\left( \frac{Q{K}^{T}}{\sqrt{{d}_{k}}}\right) V
$$
Where ${d}_{k}$ is the dimension of the key vectors, and the scaling factor $\frac{1}{\sqrt{{d}_{k}}}$ is used to prevent the softmax function from entering regions that have very small gradients.
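A minimal sketch of this attention (masking omitted for brevity):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k); scores are scaled by 1/sqrt(d_k)
    # before the softmax, exactly as in the formula above.
    d_k = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    return torch.softmax(scores, dim=-1) @ v
```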
Figure 1. End-to-end SLU architecture using the Transformer: input features pass through a VGG convolution block and a stack of encoder layers (multi-head attention and FFN, each with add & norm), followed by linear classifiers for the action, object, and location slots that together form the intent.
#### 3.2.2. Multi-Head Attention

To allow the model to jointly attend to information from different representation subspaces at different positions, the transformer uses multi-head attention. Multi-head attention calculates scaled dot-product attention $h$ times, where $h$ is the number of heads. Before each attention is performed, the queries, keys, and values are first linearly projected into more discriminative representations. Then, each scaled dot-product attention is calculated independently, and their outputs are concatenated and fed into another linear projection to obtain the final ${d}_{\text{model}}$-dimensional outputs. The multi-head attention can be formulated as:
$$
\operatorname{MultiHead}\left( {Q, K, V}\right) = \operatorname{Concat}\left( {{\text{head}}_{1},\ldots ,{\text{head}}_{h}}\right) {W}^{O},
$$

where ${\text{head}}_{i} = \operatorname{Attention}\left( {Q{W}_{i}^{Q}, K{W}_{i}^{K}, V{W}_{i}^{V}}\right)$.
#### 3.2.3. Position-wise Feed-Forward Network

In addition to attention, each of the encoders contains a position-wise fully connected feed-forward network. It consists of two linear transformations with a ReLU activation in between:
$$
\operatorname{FFN}\left( x\right) = \max \left( {0, x{W}_{1} + {b}_{1}}\right) {W}_{2} + {b}_{2}
$$
Figure 2. Encoder convolution layer (2-D convolution, layer norm, and 2-D max pooling).
The dimensionality of the input and output is ${d}_{\text{model}}$, and the inner layer has dimensionality ${d}_{ff}$. Although the linear transformations are the same across positions, different parameters are used from layer to layer. In addition, residual connections and layer normalization (Ba et al., 2016) are important components of the transformer. To squeeze the output of the transformer encoder, we use an average pooling layer. Besides that, batch normalization (Ioffe & Szegedy, 2015) is also used.
### 3.3. Classifier

Following (Palogiannidi et al., 2019), the prediction can be made by considering either a conditional or an unconditional model. In the case of an unconditional model, the slots are independent. The intent probability can be formulated as:
$$
p\left( {A, O, L \mid D}\right) = p\left( {A \mid D}\right) p\left( {O \mid D}\right) p\left( {L \mid D}\right)
$$
Here the action, the object, the location, and the sequence of acoustic features for the utterance are represented by $\mathrm{A}$, $\mathrm{O}$, $\mathrm{L}$, and $\mathrm{D}$ respectively. In the case of the conditional model, the intent probability can be formulated as:
$$
p\left( {A, O, L \mid D}\right) = p\left( {A \mid D}\right) p\left( {O \mid A, D}\right) p\left( {L \mid A, O, D}\right)
$$
Please note that any ordering of A, O, L is valid, and there will be one independent slot and two dependent slots. When using unconditional classifiers, the slots can be predicted directly from the transformer encoder output. In the case of conditional classifiers, the action slot is predicted using the transformer encoder output, whereas the object slot is predicted by concatenating the action prediction embedding with the transformer encoder output. For predicting the location, we concatenate the action prediction embedding, the object prediction embedding, and the transformer encoder output. The intent predicted by the model can then be expressed by combining the predictions for the action, object, and location slots.
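A sketch of the conditional classifier heads; the slot cardinalities (6 actions, 14 objects, 4 locations) come from Section 4.1, while the hidden sizes and the way previous predictions are embedded before concatenation are assumptions.

```python
import torch
import torch.nn as nn

class ConditionalHeads(nn.Module):
    def __init__(self, d_model, n_act=6, n_obj=14, n_loc=4, d_emb=32):
        super().__init__()
        self.act_head = nn.Linear(d_model, n_act)
        self.act_emb = nn.Linear(n_act, d_emb)             # embeds the action prediction
        self.obj_head = nn.Linear(d_model + d_emb, n_obj)
        self.obj_emb = nn.Linear(n_obj, d_emb)             # embeds the object prediction
        self.loc_head = nn.Linear(d_model + 2 * d_emb, n_loc)

    def forward(self, h):                                  # h: pooled encoder output
        a = self.act_head(h)                               # p(A | D)
        e_a = self.act_emb(torch.softmax(a, dim=-1))
        o = self.obj_head(torch.cat([h, e_a], dim=-1))     # p(O | A, D)
        e_o = self.obj_emb(torch.softmax(o, dim=-1))
        l = self.loc_head(torch.cat([h, e_a, e_o], dim=-1))  # p(L | A, O, D)
        return a, o, l                                     # logits per slot
```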
Table 1. Fluent Speech Commands dataset statistics.

<table><tr><td>SPLIT</td><td>SPEAKERS</td><td>UTTERANCES</td></tr><tr><td>TRAIN</td><td>77</td><td>23,132</td></tr><tr><td>TEST</td><td>10</td><td>3,118</td></tr><tr><td>VALID</td><td>10</td><td>3,793</td></tr></table>
Table 2. Classification error (%) on the test set, given a conditional or unconditional classifier.

<table><tr><td>CLASSIFIER</td><td>ERROR(%)</td></tr><tr><td>CONDITIONAL CLASSIFIER</td><td>2.95</td></tr><tr><td>UNCONDITIONAL CLASSIFIER</td><td>3.725</td></tr></table>
## 4. Experiments

In this section we describe the experiments that we conducted on the Fluent Speech Commands dataset and compare our results with state-of-the-art models. We represent the input signals as a sequence of 83-dimensional log-Mel filter bank features extracted every 10 ms. We use a 512-dimensional attention vector with 4 heads, along with the Adam optimizer with a learning rate of 0.0001. We conducted multiple sets of experiments; some were conducted without using any augmentation, while others used augmentation. For each experiment, the best epoch is chosen based on the results on the validation set, and the classification error achieved on the test set is reported. The overall loss function for the model is the summation of the cross-entropy losses for the three slots.
### 4.1. Dataset

The dataset is composed of 16 kHz single-channel .wav audio files. Each audio file contains a recording of a single spoken command in English. The dataset statistics are given in Table 1. Here intents are considered as valid combinations of slots. There are 31 unique intents in total, with 6, 14, and 4 unique actions, objects, and locations respectively. For each intent there can be multiple possible wordings. For example, the intent action: "bring", object: "newspaper", location: "none" can have "Bring me the newspaper", "Get me the newspaper", and "Fetch the newspaper" as possible wordings.
### 4.2. Conditional and Unconditional classifiers

To examine which classifier works best, we trained both the conditional and the unconditional model on the entire training set (without using any augmentation). Examining the results in Table 2, we observe that the model using the conditional classifier performs better than the model using the unconditional classifier.

Table 3. Classification error (%) on the test set for different numbers of encoder layers.

<table><tr><td>ENCODER LAYERS</td><td>ERROR(%)</td></tr><tr><td>4</td><td>4.45</td></tr><tr><td>6</td><td>3.49</td></tr><tr><td>8</td><td>3.07</td></tr><tr><td>12</td><td>2.95</td></tr></table>
### 4.3. Varying number of encoder layers

To explore the effect of larger models, we vary the number of encoder layers, trying 4, 6, 8, and 12 layers. The results of these experiments are shown in Table 3. All these experiments were conducted using the entire training set (without using any augmentation). We can see that as we increase the number of encoder layers, the classification error decreases. Using 12 encoder layers, we achieve 2.95% as the lowest classification error on the test set.
### 4.4. Data Augmentation Methods

We trained our model in three different ways. First, to evaluate the performance of the model on the original dataset, we trained our model without using any data augmentation. We then used SpecAugment (time masking and frequency masking) on the log-Mel filter bank features during training. To make the model more robust, we also first augmented the original data using four different augmentations, namely reverberation, pitch change, speed change, and noise injection. After this data augmentation, the number of training samples increases from 23,132 to 115,660. We then applied SpecAugment (time masking and frequency masking) on the log-Mel filter bank features of the augmented data during training. In this section, we describe some of the augmentation methods we used; Table 5 shows the results of the augmentation experiments.
Noise Injection: Noise injection is a fundamental tool for data augmentation. Adding noise during training can make the training process more robust and reduce generalization error.

Changing Pitch: Pitch is the quality that enables sounds to be judged as higher or lower in the sense associated with musical melodies. We use the librosa library for this data augmentation.

Reverberation: Reverberation is the reflection of sound waves created by the superposition of echoes. This can be done using the pysndfx library.

Changing Speed: Changing speed is a commonly used data augmentation method, where the playback rate of the audio is randomly changed. As with changing pitch, this augmentation is performed by a librosa function that stretches the time series by a fixed rate. The audio speed is changed by a factor taken randomly between 0.85 and 1.15.

SpecAugment: (Park et al., 2019) introduced SpecAugment for data augmentation in speech recognition. SpecAugment is applied directly to the input features of a neural network. There are three basic ways to augment the data: time warping, frequency masking, and time masking. We use the time masking and frequency masking methods while training the model.
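A waveform-level sketch of three of these augmentations with librosa; apart from the 0.85-1.15 speed range stated above, the parameter ranges are assumptions, and reverberation (done with pysndfx) is omitted here.

```python
import numpy as np
import librosa

def augment(y, sr):
    # Pitch shift by a random number of semitones.
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=np.random.uniform(-2.0, 2.0))
    # Speed change: stretch the time series by a rate in [0.85, 1.15].
    y = librosa.effects.time_stretch(y, rate=np.random.uniform(0.85, 1.15))
    # Simple noise injection.
    return y + 0.005 * np.random.randn(len(y))
```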
Figure 3. Validation accuracy over epochs when training on the complete dataset (NoAug, SpecAug, and SpecAug+OtherAug).

Figure 4. Validation accuracy over epochs when training on 10% of the dataset (NoAug, SpecAug, and SpecAug+OtherAug).
### 4.5. Training on complete dataset

We conducted multiple experiments using the entire training set. First, we trained the model without using any augmentation. Then we experimented with SpecAugment. Finally, we used the other data augmentation methods (described earlier) together with SpecAugment and achieved a classification error of 0.34% on the test set. Compared with the previous state-of-the-art results (Table 4), our model achieves a significantly lower classification error. We performed all these experiments using 12 encoder layers. The validation accuracy for these experiments over time is shown in Figure 3. The results obtained on the test set for the different experiments are shown in Table 5 (full training set column).
Table 4. Comparison of classification error (%) between different approaches on the Fluent Speech Commands dataset.

<table><tr><td>MODEL</td><td>ERROR(%)</td></tr><tr><td>PRE TRAINED SLU (LUGOSCH ET AL., 2019)</td><td>1.2</td></tr><tr><td>LSTM BASED SLU (PALOGIANNIDI ET AL., 2019)</td><td>1.15</td></tr><tr><td>ERNIE (WANG ET AL., 2020)</td><td>0.98</td></tr><tr><td>SPEC AUGMENT</td><td>1</td></tr><tr><td>SPEC + OTHER AUGMENTATION</td><td>0.34</td></tr></table>

Table 5. Classification error (%) on the full training set and on 10% of the training set.

<table><tr><td>EXPERIMENT</td><td>FULL DATA</td><td>10% DATA</td></tr><tr><td>NO AUG</td><td>2.95</td><td>25.05</td></tr><tr><td>SPECAUG</td><td>1</td><td>14.12</td></tr><tr><td>SPECAUG + OTHERAUG</td><td>0.34</td><td>8</td></tr></table>
### 4.6. Training on 10% of the dataset

To evaluate the performance of the models in a low-resource setting, we randomly selected 10% of the training data and used this subset for training instead of the full dataset. We conducted multiple experiments using this 10% of the training set (all the experiments described for the full dataset), and observed that by using the other data augmentation methods together with SpecAugment we achieved a classification error of 8%. The validation accuracy for these experiments over time is shown in Figure 4. Table 5 compares the results obtained on the full training set with the results obtained using only 10% of the training data.
## 5. Conclusion

End-to-end SLU approaches provide a new perspective for various applications, since the speech is directly mapped to the intent. In this paper, we proposed an end-to-end transformer-based SLU system for intent classification. The experimental results show that our proposed approach significantly outperforms state-of-the-art end-to-end SLU systems. In the future, we plan to explore the limitations of end-to-end SLU and will try to enhance the architecture.
## References

Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization, 2016.

Chen, Y., Price, R., and Bangalore, S. Spoken language understanding without speech recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6189-6193, 2018.

Ioffe, S. and Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift, 2015.

Lugosch, L., Ravanelli, M., Ignoto, P., Tomar, V. S., and Bengio, Y. Speech model pre-training for end-to-end spoken language understanding, 2019.

Mesnil, G., Dauphin, Y., Yao, K., Bengio, Y., Deng, L., Hakkani-Tur, D., He, X., Heck, L., Tur, G., Yu, D., and Zweig, G. Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):530-539, 2015.

Palogiannidi, E., Gkinis, I., Mastrapas, G., Mizera, P., and Stafylakis, T. End-to-end architectures for asr-free spoken language understanding, 2019.

Park, D. S., Chan, W., Zhang, Y., Chiu, C.-C., Zoph, B., Cubuk, E. D., and Le, Q. V. Specaugment: A simple data augmentation method for automatic speech recognition. Interspeech 2019, Sep 2019. doi: 10.21437/interspeech.2019-2680. URL http://dx.doi.org/10.21437/Interspeech.2019-2680.

Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition, 2014.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need, 2017.

Wang, C., Wu, Y., Du, Y., Li, J., Liu, S., Lu, L., Ren, S., Ye, G., Zhao, S., and Zhou, M. Semantic mask for transformer based end-to-end speech recognition, 2019.

Wang, P., Wei, L., Cao, Y., Xie, J., and Nie, Z. Large-scale unsupervised pre-training for end-to-end spoken language understanding. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7999-8003, 2020.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/CSqjS121nsU/Initial_manuscript_tex/Initial_manuscript.tex
§ ASR FREE END-TO-END SLU USING THE TRANSFORMER
Anonymous Authors ${}^{1}$

§ ABSTRACT

End-to-end spoken language understanding (SLU) systems directly map speech to intent through a single trainable model, whereas conventional SLU systems use Automatic Speech Recognition (ASR) to convert speech to text and utilize Natural Language Understanding (NLU) to get the intent. In this paper, we show how a transformer-based architecture can be used for building end-to-end SLU systems. We conducted experiments on the Fluent Speech Commands (FSC) dataset, where intents are formed as combinations of three slots, namely action, object, and location. We also demonstrate how state-of-the-art results can be obtained using a combination of various data augmentation methods.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
With the growing demand for voice interfaces on various smart devices (e.g. smartphones, smart TVs, in-car navigation systems), Spoken Language Understanding (SLU) has drawn a great deal of attention in recent years. Traditional SLU approaches use the text transcribed by an automatic speech recognition (ASR) system to extract the intent of the user and the slots describing the query (Mesnil et al., 2015). The main problem with traditional SLU systems is that errors made while transcribing the audio are forwarded downstream and affect the intent and slot-filling tasks. One way to avoid this problem is to combine ASR and NLU (referred to as end-to-end SLU) and directly map speech to intent (Chen et al., 2018), (Lugosch et al., 2019). In this method the model is first pre-trained to predict ASR targets (words and phonemes). The word and phoneme classifiers are then discarded, and the entire model is trained end-to-end on the supervised SLU task. The pre-trained model weights can be either frozen or fine-tuned during SLU training. In this paper, we propose an ASR-free end-to-end spoken language understanding system based on the transformer (Vaswani et al., 2017). The model does not learn any ASR-level representation or use any pre-trained ASR model. We use transformer encoder blocks together with a convolution layer. Recurrent neural network (RNN) based approaches, particularly gated recurrent unit (GRU) and long short-term memory (LSTM) models, have achieved good performance on most of these tasks, but compared with RNNs, a transformer-based encoder can capture long-term dependencies better and can produce even better results. We combine data augmentations such as pitch change, reverberation, speed change, and noise injection with SpecAugment (Park et al., 2019) (time masking and frequency masking) and obtain a significantly lower classification error than any other approach. Following (Palogiannidi et al., 2019), instead of considering intents as classes, we consider them as tuples of slots, each with an associated softmax layer. This technique converts a single-label classification task into a multi-label classification task and thus reduces the number of classes. In the case of the Fluent Speech Commands dataset, we have a three-slot tuple (action, object, location). An intent is predicted correctly only if all three slots corresponding to that intent are predicted correctly.
|
| 12 |
+
|
| 13 |
+
§ 2. RELATED WORK
|
| 14 |
+
|
| 15 |
+
(Lugosch et al., 2019) suggested a pre-training approach for end-to-end SLU models and also introduced the Fluent Speech Commands dataset. They used a single trainable model that directly maps speech to intent without explicitly producing a text transcript. They showed that their pre-training techniques boost efficiency for both large and small SLU training sets.
|
| 16 |
+
|
| 17 |
+
(Wang et al., 2020) proposed an unsupervised pre-training approach for the SLU component of an end-to-end SLU system to preserve semantic features from large-scale raw audio. They first pre-train the AM component following (Lugosch et al., 2019) and then feed the AM output to a softmax layer to get a posterior distribution. This posterior distribution is used as the input of the next SLU component. (Palogiannidi et al., 2019) use an RNN-based end-to-end SLU model for intent classification. Unlike (Lugosch et al., 2019), (Palogiannidi et al., 2019) did not make use of any ASR-level prediction (e.g. phonemes, characters, words) and handle intents as tuples of slots. Additionally, this approach uses various data augmentation methods and achieves state-of-the-art results. Our approach is closely related to (Palogiannidi et al., 2019), but rather than using LSTMs we make use of transformer encoder blocks.
|
| 18 |
+
|
| 19 |
+
${}^{1}$ Anonymous Institution, Anonymous City, Anonymous Region, Anonymous Country. Correspondence to: Anonymous Author <anon.email@domain.com>.
|
| 20 |
+
|
| 21 |
+
Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute.
|
| 22 |
+
|
| 23 |
+
§ 3. MODEL ARCHITECTURE
|
| 24 |
+
|
| 25 |
+
The model consists of three parts: (1) a convolution layer, (2) a transformer block, and (3) a classifier. The overall architecture of our end-to-end SLU model is shown in Figure 1. (Wang et al., 2019) discarded the sinusoidal positional encoding for transformers, used convolutionally learned input representations instead, and obtained very decent results on the automatic speech recognition task. Following this, we use a VGG-like convolution block (Simonyan & Zisserman, 2014) before the transformer encoder. The following sections describe the three parts separately.
|
| 26 |
+
|
| 27 |
+
§ 3.1. CONVOLUTION LAYER
|
| 28 |
+
|
| 29 |
+
In order to make sense of a sequence, the model needs to know the position of each element in the sequence. To address this, the transformer uses a sinusoidal positional encoding. We replace the widely used sinusoidal positional encoding with a convolution layer. We expect that adding early convolutional layers allows the model to learn a relative positional encoding and helps the model identify the right order of the input sequence. We use 2-D convolutional blocks with layer normalization and ReLU activation after each convolutional layer. Each convolutional block contains two convolutional layers followed by a max-pooling layer. The architecture is shown in Figure 2.
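As an illustration, the sketch below shows one such block in PyTorch; the channel counts and the 3x3 kernel size are our assumptions, since the paper does not list them.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """One VGG-style block: two 2-D convolutions, each followed by layer
    normalization and ReLU, then a max-pooling layer."""
    def __init__(self, in_ch, out_ch, n_freq):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.norm1 = nn.LayerNorm(n_freq)   # normalizes over the feature axis
        self.norm2 = nn.LayerNorm(n_freq)
        self.pool = nn.MaxPool2d(kernel_size=2)

    def forward(self, x):                   # x: (batch, channels, time, freq)
        x = torch.relu(self.norm1(self.conv1(x)))
        x = torch.relu(self.norm2(self.conv2(x)))
        return self.pool(x)                 # halves the time and freq axes

# e.g. ConvBlock(1, 64, 83)(torch.randn(8, 1, 200, 83))
```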
|
| 30 |
+
|
| 31 |
+
§ 3.2. TRANSFORMER BLOCK
|
| 32 |
+
|
| 33 |
+
The input to the transformer encoder is the output of the convolution block. We will describe the details of the Transformer encoder block in this section.
|
| 34 |
+
|
| 35 |
+
§ 3.2.1. SCALED DOT-PRODUCT ATTENTION
|
| 36 |
+
|
| 37 |
+
Self-attention is a mechanism that relates different positions of an input sequence to compute representations of the input. It uses three inputs, namely queries (Q), keys (K), and values (V). The output for one query is calculated as a weighted sum of the values, where the weights are computed by taking the dot products of the query with all keys, scaling each by $\frac{1}{\sqrt{{d}_{k}}}$, and applying a softmax function. The attention can be formulated as:
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
\operatorname{Attention}\left( {Q,K,V}\right) = \operatorname{softmax}\left( \frac{Q{K}^{T}}{\sqrt{{d}_{k}}}\right) V
|
| 41 |
+
$$
|
| 42 |
+
|
| 43 |
+
where ${d}_{k}$ is the dimension of the key vectors and the scaling factor $\frac{1}{\sqrt{{d}_{k}}}$ is used to prevent the softmax function from entering regions that have very small gradients.
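For concreteness, this operation is only a few lines of PyTorch; the sketch below assumes any batch and head dimensions are folded into the leading axes.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """softmax(Q K^T / sqrt(d_k)) V for tensors of shape (..., time, d_k)."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # scaled dot products
    return F.softmax(scores, dim=-1) @ v           # weighted sum of values

# e.g. q = k = v = torch.randn(2, 100, 64) for a 100-frame sequence
```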
|
| 44 |
+
|
| 45 |
+
[Figure: input features feed a VGG convolution block, then a transformer encoder (multi-head attention, Add & Norm, FFN, Add & Norm), followed by linear heads producing the ACTION, OBJECT, and LOCATION predictions that form the INTENT.]
|
| 46 |
+
|
| 47 |
+
Figure 1. End-to-end SLU architecture using the transformer.
|
| 48 |
+
|
| 49 |
+
§ 3.2.2. MULTI-HEAD ATTENTION
|
| 50 |
+
|
| 51 |
+
To allow the model to jointly attend to information from different representation subspaces at different positions, the transformer uses multi-head attention. Multi-head attention computes scaled dot-product attention $h$ times, where $h$ is the number of heads. Before each attention, the queries, keys and values are first linearly projected to more discriminative representations. Each scaled dot-product attention is then calculated independently, and their outputs are concatenated and fed into another linear projection to obtain the final ${d}_{\text{model}}$-dimensional outputs. The multi-head attention can be formulated as:
|
| 52 |
+
|
| 53 |
+
$$
\operatorname{MultiHead}\left( {Q,K,V}\right) = \operatorname{Concat}\left( \operatorname{head}_{1},\ldots,\operatorname{head}_{h}\right) {W}^{O}
$$
|
| 54 |
+
|
| 55 |
+
where $\operatorname{head}_{i} = \operatorname{Attention}\left( Q{W}_{i}^{Q}, K{W}_{i}^{K}, V{W}_{i}^{V}\right)$.
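A minimal PyTorch sketch of this module is given below, using the 512-dimensional model size and 4 heads from our experiments; stacking the per-head projections into single linear layers is a standard implementation choice, not something prescribed above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    """h parallel scaled dot-product attentions over linearly projected
    queries, keys and values, concatenated and projected by W^O."""
    def __init__(self, d_model=512, n_heads=4):
        super().__init__()
        assert d_model % n_heads == 0
        self.d_k = d_model // n_heads
        self.n_heads = n_heads
        self.w_q = nn.Linear(d_model, d_model)   # stacked W_i^Q
        self.w_k = nn.Linear(d_model, d_model)   # stacked W_i^K
        self.w_v = nn.Linear(d_model, d_model)   # stacked W_i^V
        self.w_o = nn.Linear(d_model, d_model)   # W^O

    def forward(self, q, k, v):                  # (batch, time, d_model)
        b = q.size(0)
        split = lambda x: x.view(b, -1, self.n_heads, self.d_k).transpose(1, 2)
        q, k, v = split(self.w_q(q)), split(self.w_k(k)), split(self.w_v(v))
        scores = q @ k.transpose(-2, -1) / self.d_k ** 0.5
        out = F.softmax(scores, dim=-1) @ v      # per-head attention
        out = out.transpose(1, 2).reshape(b, -1, self.n_heads * self.d_k)
        return self.w_o(out)                     # concat heads, project
```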
|
| 56 |
+
|
| 57 |
+
§ 3.2.3. POSITION-WISE FEED-FORWARD NETWORK
|
| 58 |
+
|
| 59 |
+
In addition to attention, each encoder layer contains a position-wise fully connected feed-forward network. It consists of two linear transformations with a ReLU activation in between.
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
\operatorname{FFN}\left( x\right) = \max \left( 0, x{W}_{1} + {b}_{1}\right) {W}_{2} + {b}_{2}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
[Figure: a 2-D convolution and layer-norm stack followed by 2-D max pooling produces the block output.]
|
| 66 |
+
|
| 67 |
+
Figure 2. Encoder convolution layer
|
| 68 |
+
|
| 69 |
+
The dimensionality of the input and output is ${d}_{\text{model}}$, and the inner layer has dimensionality ${d}_{ff}$. While the linear transformations are the same across different positions, they use different parameters from layer to layer. In addition, residual connections and layer normalization (Ba et al., 2016) are important components of the transformer. To squeeze the output of the transformer encoder, we use an average pooling layer. Besides that, batch normalization (Ioffe & Szegedy, 2015) is also used.
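Putting the pieces together, one encoder layer can be sketched as follows; we use PyTorch's built-in multi-head attention here, and the FFN width d_ff is our assumption since it is not reported above.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """One encoder layer: self-attention and a position-wise FFN, each
    wrapped in a residual connection followed by layer normalization."""
    def __init__(self, d_model=512, n_heads=4, d_ff=2048):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):                        # x: (batch, time, d_model)
        a, _ = self.attn(x, x, x)
        x = self.norm1(x + a)                    # Add & Norm around attention
        return self.norm2(x + self.ffn(x))       # Add & Norm around the FFN

# A 12-layer encoder followed by average pooling over time:
# enc = nn.Sequential(*[EncoderLayer() for _ in range(12)])
# pooled = enc(x).mean(dim=1)
```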
|
| 70 |
+
|
| 71 |
+
§ 3.3. CLASSIFIER
|
| 72 |
+
|
| 73 |
+
Following (Palogiannidi et al., 2019), the prediction can be made by considering either a conditional or an unconditional model. In the case of an unconditional model, the slots are independent. The intent probability can be formulated as:
|
| 74 |
+
|
| 75 |
+
$$
|
| 76 |
+
p\left( {A,O,L \mid D}\right) = p\left( {A \mid D}\right) p\left( {O \mid D}\right) p\left( {L \mid D}\right)
|
| 77 |
+
$$
|
| 78 |
+
|
| 79 |
+
Here the action, the object, the location, and the sequence of acoustic features for the utterance are represented by $\mathrm{A}$, $\mathrm{O}$, $\mathrm{L}$ and $\mathrm{D}$ respectively. In the case of the conditional model, the intent probability can be formulated as:
|
| 80 |
+
|
| 81 |
+
$$
|
| 82 |
+
p\left( {A,O,L \mid D}\right) = p\left( {A \mid D}\right) p\left( {O \mid A,D}\right) p\left( {L \mid A,O,D}\right)
|
| 83 |
+
$$
|
| 84 |
+
|
| 85 |
+
Note that any ordering of A, O, L is valid; there will be one independent slot and two dependent slots. When using unconditional classifiers, each slot is predicted directly from the transformer encoder output. In the case of conditional classifiers, the action slot is predicted using the transformer encoder output, whereas the object slot is predicted by concatenating the action prediction embedding with the transformer encoder output. For predicting the location, we concatenate the action prediction embedding, the object prediction embedding, and the transformer encoder output. The intent predicted by the model can then be expressed by combining the predictions for the action, object, and location slots.
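A sketch of the conditional classifier heads is given below. Feeding the embedding of the argmax prediction into the next head is one possible reading of the description above, and the embedding dimension is our assumption; at training time one could instead embed the ground-truth slots, which is not specified here.

```python
import torch
import torch.nn as nn

class ConditionalClassifier(nn.Module):
    """Heads factored as p(A|D) p(O|A,D) p(L|A,O,D); slot counts follow
    the FSC dataset (6 actions, 14 objects, 4 locations)."""
    def __init__(self, d_model=512, n_act=6, n_obj=14, n_loc=4, d_emb=32):
        super().__init__()
        self.act_head = nn.Linear(d_model, n_act)
        self.act_emb = nn.Embedding(n_act, d_emb)
        self.obj_head = nn.Linear(d_model + d_emb, n_obj)
        self.obj_emb = nn.Embedding(n_obj, d_emb)
        self.loc_head = nn.Linear(d_model + 2 * d_emb, n_loc)

    def forward(self, enc):                       # enc: pooled encoder output
        act_logits = self.act_head(enc)
        a = self.act_emb(act_logits.argmax(-1))   # embed the action prediction
        obj_logits = self.obj_head(torch.cat([enc, a], dim=-1))
        o = self.obj_emb(obj_logits.argmax(-1))
        loc_logits = self.loc_head(torch.cat([enc, a, o], dim=-1))
        return act_logits, obj_logits, loc_logits
```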
|
| 86 |
+
|
| 87 |
+
Table 1. Fluent Speech Commands dataset statistics
|
| 88 |
+
|
| 89 |
+
SPLIT   SPEAKERS   UTTERANCES
TRAIN   77         23,132
TEST    10         3,118
VALID   10         3,793
|
| 103 |
+
|
| 104 |
+
Table 2. Classification error(%) on the test set, given conditional or unconditional classifier.
|
| 105 |
+
|
| 106 |
+
CLASSIFIER                 ERROR (%)
CONDITIONAL CLASSIFIER     2.95
UNCONDITIONAL CLASSIFIER   3.725
|
| 117 |
+
|
| 118 |
+
§ 4. EXPERIMENTS
|
| 119 |
+
|
| 120 |
+
In this section we describe the experiments we conducted on the Fluent Speech Commands dataset and compare our results with state-of-the-art models. We represent input signals as a sequence of 83-dimensional log-Mel filter bank features extracted every ${10}\mathrm{\;{ms}}$. We use a 512-dimensional attention vector with 4 heads, along with the Adam optimizer with a learning rate of 0.0001. We conducted multiple sets of experiments, some without any augmentation and some with augmentation. For each experiment, the best epoch is chosen based on the results on the validation set and the classification error is reported on the test set. The overall loss function for the model is the summation of the cross-entropy losses for the three slots.
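A hypothetical training step matching this description, with the loss summed over the three slot-wise cross-entropies and Adam at the stated learning rate:

```python
import torch
import torch.nn.functional as F

def training_step(model, optimizer, feats, act_t, obj_t, loc_t):
    """One update; model returns (action, object, location) logits."""
    act_logits, obj_logits, loc_logits = model(feats)
    loss = (F.cross_entropy(act_logits, act_t)      # action slot
            + F.cross_entropy(obj_logits, obj_t)    # object slot
            + F.cross_entropy(loc_logits, loc_t))   # location slot
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```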
|
| 121 |
+
|
| 122 |
+
§ 4.1. DATASET
|
| 123 |
+
|
| 124 |
+
The dataset is composed of ${16}\mathrm{{kHz}}$ single-channel .wav audio files. Each audio file contains a recording of a single spoken command in English. The dataset statistics are given in Table 1. Here intents are considered as valid combinations of slots. There are 31 unique intents in total, with 6, 14, and 4 unique actions, objects, and locations respectively. For each intent there can be multiple possible wordings. For example, the intent {action: "bring", object: "newspaper", location: "none"} can have "Bring me the newspaper", "Get me the newspaper", and "Fetch the newspaper" as possible wordings.
|
| 125 |
+
|
| 126 |
+
§ 4.2. CONDITIONAL AND UNCONDITIONAL CLASSIFIER
|
| 127 |
+
|
| 128 |
+
To examine which classifier works best, we trained both the conditional and the unconditional model on the entire training set (without using any augmentation). Examining the results in Table 2, we observe that the model using the conditional classifier performs better than the model using the unconditional classifier.
|
| 129 |
+
|
| 130 |
+
|
| 131 |
+
|
| 132 |
+
Table 3. Classification error (%) on the test set for different numbers of encoder layers.
|
| 133 |
+
|
| 134 |
+
ENCODER LAYERS   ERROR (%)
4                4.45
6                3.49
8                3.07
12               2.95
|
| 151 |
+
|
| 152 |
+
|
| 153 |
+
|
| 154 |
+
|
| 155 |
+
|
| 156 |
+
|
| 157 |
+
|
| 158 |
+
§ 4.3. VARYING NUMBER OF ENCODER LAYERS
|
| 159 |
+
|
| 160 |
+
To explore the effect of larger models, we vary the number of encoder layers, trying 4, 6, 8, and 12 layers. The results of these experiments are shown in Table 3. All these experiments are conducted using the entire training set (without using any augmentation). We can see that as we increase the number of encoder layers, the classification error decreases. Using 12 encoder layers, we achieve ${2.95}\%$, the lowest classification error on the test set.
|
| 161 |
+
|
| 162 |
+
§ 4.4. DATA AUGMENTATION METHODS
|
| 163 |
+
|
| 164 |
+
We trained our model in three different ways. First, to evaluate the performance of the model on the original dataset, we trained it without using any data augmentation. We then applied SpecAugment (time masking and frequency masking) to the log-Mel filter bank features during training. Finally, to make the model more robust, we first augmented the original data using four different augmentations, namely reverberation, pitch change, speed change, and noise injection, which increases the number of training samples from 23,132 to 115,660; we then applied SpecAugment to the log-Mel filter bank features of the augmented data during training. Below we describe the augmentation methods we used; Table 5 shows the results of augmentation.
|
| 165 |
+
|
| 166 |
+
Noise injection: Noise injection is a fundamental tool for data augmentation. Adding noise during training can make the training process more robust and reduce generalization error.
|
| 169 |
+
|
| 170 |
+
Changing pitch: Pitch is the quality that enables sounds to be judged as higher or lower in the sense associated with musical melodies. We use the librosa library for this data augmentation.
|
| 173 |
+
|
| 174 |
+
Reverberation: Reverberation is the reflection of sound waves created by the superposition of echoes. This can be done using the pysndfx library.
|
| 177 |
+
|
| 178 |
+
Changing speed: Changing speed is a commonly used data augmentation method in which the play rate of the audio is randomly changed. As with changing pitch, this augmentation is performed with a librosa function that stretches the time series by a fixed rate; the rate is drawn uniformly at random between 0.85 and 1.15.
|
| 179 |
+
|
| 180 |
+
SpecAugment: (Park et al., 2019) introduced SpecAugment for data augmentation in speech recognition. SpecAugment is applied directly to the input features of a neural network. There are three basic augmentation methods: time warping, frequency masking, and time masking. We use the time masking and frequency masking methods while training the model.
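The sketch below illustrates the waveform-level augmentations and the two SpecAugment masks; apart from the 0.85-1.15 speed range stated above, the noise level, pitch range, and mask sizes are illustrative, and reverberation (via pysndfx) is omitted.

```python
import numpy as np
import librosa

def augment_waveform(y, sr=16000, rng=np.random.default_rng()):
    """Speed change, pitch change and noise injection on a raw waveform."""
    y = librosa.effects.time_stretch(y, rate=rng.uniform(0.85, 1.15))
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=int(rng.integers(-2, 3)))
    y = y + 0.005 * rng.standard_normal(len(y))   # noise injection
    return y

def spec_augment(mel, n_time_masks=2, t=20, n_freq_masks=2, f=10,
                 rng=np.random.default_rng()):
    """Time and frequency masking on a log-Mel spectrogram (freq x frames)."""
    mel = mel.copy()
    for _ in range(n_time_masks):
        t0 = int(rng.integers(0, max(1, mel.shape[1] - t)))
        mel[:, t0:t0 + t] = 0.0                   # time mask
    for _ in range(n_freq_masks):
        f0 = int(rng.integers(0, max(1, mel.shape[0] - f)))
        mel[f0:f0 + f, :] = 0.0                   # frequency mask
    return mel
```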
|
| 181 |
+
|
| 182 |
+
[Plot: validation accuracy on the full dataset (y-axis, 0.4 to 0.9) over 40 training epochs (x-axis) for the SpecAug+OtherAug, SpecAug, and NoAug runs.]
|
| 183 |
+
|
| 184 |
+
Figure 3. Results of training on complete dataset
|
| 185 |
+
|
| 186 |
+
[Plot: validation accuracy on ${10}\%$ of the dataset (y-axis, 0.0 to 0.8) over 100 training epochs (x-axis) for the SpecAug+OtherAug, SpecAug, and NoAug runs.]
|
| 187 |
+
|
| 188 |
+
Figure 4. Results of training on partial dataset
|
| 189 |
+
|
| 190 |
+
§ 4.5. TRAINING ON COMPLETE DATASET
|
| 191 |
+
|
| 192 |
+
We conducted multiple experiments using the entire training set. First, we trained the model without using any augmentation. We then experimented with SpecAugment alone. Finally, we used the other data augmentation methods (described earlier) together with SpecAugment and achieved a classification error of ${0.34}\%$ on the test set. In comparison with the previous state-of-the-art results in Table 4, our model achieves a significantly lower classification error. We performed all these experiments using 12 encoder layers. The validation accuracy over time for these experiments is shown in Figure 3, and the results obtained on the test set are shown in Table 5 (full training set column).
|
| 193 |
+
|
| 194 |
+
Table 4. Comparison of classification error(%) between different approaches on the Fluent Speech Command dataset.
|
| 195 |
+
|
| 196 |
+
MODEL                                         ERROR (%)
PRE-TRAINED SLU (LUGOSCH ET AL., 2019)        1.2
LSTM-BASED SLU (PALOGIANNIDI ET AL., 2019)    1.15
ERNIE (WANG ET AL., 2020)                     0.98
SPECAUGMENT (OURS)                            1
SPECAUGMENT + OTHER AUGMENTATION (OURS)       0.34
|
| 216 |
+
|
| 217 |
+
Table 5. Classification error (%) on the full training set and on 10% of the training set.
|
| 218 |
+
|
| 219 |
+
EXPERIMENT            FULL DATA   10% DATA
NO AUG                2.95        25.05
SPECAUG               1           14.12
SPECAUG + OTHERAUG    0.34        8
|
| 233 |
+
|
| 234 |
+
§ 4.6. TRAINING ON ${10}\%$ DATASET
|
| 235 |
+
|
| 236 |
+
To evaluate the performance of the models in a low-resource setting, we randomly selected ${10}\%$ of the training data and used this subset for training instead of the full dataset. We conducted multiple experiments using this ${10}\%$ of the training set (all the experiments described for the full dataset) and observed that by using the other data augmentation methods together with SpecAugment we achieve a classification error of $8\%$. The validation accuracy over time for these experiments is shown in Figure 4. Table 5 compares the results obtained on the full training set with the results obtained using only ${10}\%$ of the training data.
|
| 237 |
+
|
| 238 |
+
§ 5. CONCLUSION
|
| 239 |
+
|
| 240 |
+
End-to-end SLU approaches provide a new perspective for various applications since speech is directly mapped to intent. In this paper, we proposed an end-to-end transformer-based SLU model for intent classification. The experimental results show that our proposed approach significantly outperforms state-of-the-art end-to-end SLU systems. In the future, we plan to explore the limitations of end-to-end SLU and to enhance the architecture further.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/DxtEfUpf2q7/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,195 @@
| 1 |
+
# Self-supervised Learning for Speech Enhancement
|
| 2 |
+
|
| 3 |
+
Yu-Che Wang ${}^{ * }{}^{1}$ Shrikant Venkataramani ${}^{ * }{}^{1}$ Paris Smaragdis ${}^{1}{}^{2}{}^{3}$
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
Supervised learning for single-channel speech enhancement requires carefully labeled training examples where the noisy mixture is input into the network and the network is trained to produce an output close to the ideal target. To relax the conditions on the training data, we consider the task of training speech enhancement networks in a self-supervised manner. We first use a limited training set of clean speech sounds and learn a latent representation by autoencoding on their magnitude spectrograms. We then autoencode on speech mixtures recorded in noisy environments and train the resulting autoencoder to share a latent representation with the clean examples. We show that using this training schema, we can now map noisy speech to its clean version using a network that is autonomously trainable without requiring labeled training examples or human intervention.
|
| 8 |
+
|
| 9 |
+
## 1. Introduction
|
| 10 |
+
|
| 11 |
+
Given a mixture of a speech signal co-occurring in a background of ambient noise, the goal of single-channel speech enhancement is to extract the speech signal in the given mixture. With recent advancements in Neural Networks (NNs) and deep learning, several neural network based approaches have been proposed for single-channel speech enhancement (Xu et al., 2013; Weninger et al., 2015; Pascual et al., 2017). These networks and approaches are predominantly trained in a supervised manner. The noisy mixture signal is fed as an input to the NN. The NN is then trained to estimate the corresponding clean speech signal in the mixture at its output. Thus, to train NNs for supervised speech enhancement, we require access to a vast training set of paired examples of noisy mixtures and their corresponding clean speech versions. As a result, supervised learning approaches to speech enhancement and source separation suffer from the following drawbacks.
|
| 12 |
+
|
| 13 |
+
1. Clean targets can often be difficult or expensive to obtain. For example, bird calls recorded in a forest are often found to be in the presence of interfering sounds like the ones from animals, trees and thunderstorms. Alternatively, machine fault recordings are often taken when the machine is in operation to identify potential damages and it is infeasible to record these sounds in an isolated manner to obtain clean recorded versions.
|
| 14 |
+
|
| 15 |
+
2. These networks cannot be used as stand-alone learning machines that autonomously learn to denoise speech mixtures in ambient recording environments.
|
| 16 |
+
|
| 17 |
+
3. The trained speech enhancement systems can potentially be deployed in previously unseen conditions. Thus, there is a strong possibility of a mismatch between the training and test conditions. In such cases, we do not have the ability to use the recorded test mixtures to improve the performance of our model in the unseen test setting.
|
| 18 |
+
|
| 19 |
+
To relax the constraints of paired training data, a few recent approaches interpret the problem of denoising and source separation as a style-transfer problem wherein the goal is to map from the domain of noisy mixtures to the domain of clean sounds (Stoller et al., 2018; Michelashvili et al., 2019; Venkataramani et al., 2019). These approaches only require a training set of mixtures and a training set of clean sounds, but the clean sounds can be unpaired and unrelated to the mixtures. However, these methods rely on jointly training a pair of autoencoders, one for each domain, in order to learn the mapping, and can be tedious to train. Other approaches have tried to relax the constraints by learning to enhance noisy mixtures in a "weakly supervised" setting. Instead of using representative clean training examples to identify a source, these approaches use alternate techniques for identification. For example, (Kong et al., 2020; Pishdadian et al., 2020) assume that in addition to the mixtures, we have access to information about when the source we wish to isolate is active in the mixture. Generating the timing information about the activity of the source requires training an event detection network which relies either on human listening or on clean training examples. Furthermore, these methods cannot be reused if the test conditions do not match the conditions under which the network was trained.
|
| 20 |
+
|
| 21 |
+
---
|
| 22 |
+
|
| 23 |
+
*Equal contribution ${}^{1}$ University of Illinois at Urbana-Champaign ${}^{2}$ Adobe Research ${}^{3}$ Supported by NSF grant #1453104. Correspondence to: Yu-Che Wang <yuchecw2@illinois.edu>.
|
| 24 |
+
|
| 25 |
+
Proceedings of the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).
|
| 26 |
+
|
| 27 |
+
---
|
| 28 |
+
|
| 29 |
+
To relax constraints on training data, a recent learning paradigm gaining popularity in the fields of computer vision and natural language processing is the idea of self-supervised learning (Kolesnikov et al., 2019; Doersch & Zisserman, 2017; Lan et al., 2019). Instead of constructing large labeled datasets and using them for supervised learning, we use the relationships, correlations and similarities between the training examples to construct the corresponding paired labels for the training set. Thus, we can learn suitable representations and mappings from autonomously labeled training examples. In the case of audio, this strategy has been recently explored to learn unsupervised representations and perform speech recognition, speaker identification and other allied tasks (Pascual et al., 2019; Ravanelli & Bengio, 2019). However, self-supervision and unsupervised representation learning have not been explored for other audio applications including speech enhancement.
|
| 30 |
+
|
| 31 |
+
The goal of this paper is to develop and investigate the use of a self-supervised learning approach for speech denoising. To do so, we assume that we have access to a training set of clean speech examples. We first use these examples to learn a suitable representation for the clean sounds in an unsupervised manner. Thereafter, we use the learned representation along with noisy speech recordings to learn a mapping from the domain of mixtures to the domain of clean sounds. These developments allow us to devise speech enhancement systems that can learn autonomously in noisy ambient environments without human intervention, thereby alleviating the various drawbacks and constraints of supervised speech enhancement networks.
|
| 32 |
+
|
| 33 |
+
## 2. Self-Supervision for Speech Enhancement
|
| 34 |
+
|
| 35 |
+
As briefly discussed in Section 1, NN based supervised speech enhancement relies on the availability of paired training examples. This imposes several limitations on the trained networks and using self-supervision can relax these constraints. But first, we begin with a description of how we can train our NNs to perform speech enhancement in a self-supervised manner. To identify the source we wish to isolate from the mixtures, we assume that we have access to a dataset of clean sounds that represent the source. For example, if the goal is to isolate human speech from ambient noisy recordings, we assume that we have access to a dataset of a few clean speech examples. These examples can be completely unrelated to the mixture recordings used and contain a completely different set of speakers and utterances.
|
| 36 |
+
|
| 37 |
+
Over the last decade, a popular method for the Self-supervised Speech Enhancement (SSE) problem has been Non-negative Matrix Factorization (NMF). In NMF-based methods, the SSE problem (also known as semi-supervised speech enhancement) is solved as a two-step procedure. In the first step, we perform an NMF decomposition of the clean sounds to learn representative spectral models for the speech signal. In the second step, we iteratively fit these models on unseen noisy speech recordings to isolate the underlying speech component from the ambient noise. However, NMF-based SSE requires an iterative fitting procedure for each test example during inference. We improve upon this by training NNs for SSE. To train NNs for SSE, we use a similar two-step approach.
|
| 38 |
+
|
| 39 |
+

|
| 40 |
+
|
| 41 |
+
Figure 1. Block diagram of our self-supervised speech enhancement system. We first train the CAE to learn a latent representation for the clean sounds. We then autoencode on the mixtures and enforce that the MAE shares the latent space with the CAE using our cycle-consistency loss terms. Once both the autoencoders are trained, the diagonal path through ${\mathcal{E}}_{\mathbf{m}}$ and ${\mathcal{D}}_{\mathbf{c}}$ gives the denoised outputs at inference time.
|
| 42 |
+
|
| 43 |
+
1. In step I, we use the clean training examples to learn an unsupervised representation for the clean speech sounds. Essentially, we train an autoencoder NN on the magnitude spectrograms of the clean sounds and learn a suitable representation. We refer to this autoencoder as the Clean AutoEncoder (CAE).
|
| 44 |
+
|
| 45 |
+
2. In step II, we use ambient mixture recordings to train an autoencoder NN on the mixture spectrograms. We refer to this autoencoder as the Mixture AutoEn-coder (MAE). The representations learned by the CAE is then used to modify the cost-functions used to train the MAE network so as to learn a shared space between the CAE and MAE representations. This allows us to learn a mapping from the domain of mixtures to the domain of clean sounds without paired training examples.
|
| 46 |
+
|
| 47 |
+
### 2.1. Network Architecture
|
| 48 |
+
|
| 49 |
+
Having described the overall outline of our SSE approach, we now turn to the finer details. Figure 1 shows the block diagram of the proposed SSE approach. The network consists of a pair of Variational AutoEncoders (VAEs) and is motivated by the architecture for unsupervised domain translation (Liu et al., 2017). Here, ${\mathcal{E}}_{\mathbf{c}}$ and ${\mathcal{D}}_{\mathbf{c}}$ denote the encoder and decoder of the CAE respectively. The magnitude spectrogram of the clean speech signal is given as the input to the CAE, and the CAE is trained to reconstruct the input magnitude spectrogram. Once we learn the unsupervised representation, we use ambient noisy mixture recordings and the CAE to train the MAE. ${\mathcal{E}}_{\mathbf{m}}$ and ${\mathcal{D}}_{\mathbf{m}}$ represent the encoder and decoder of the mixture autoencoder. The cost-functions described in Section 2.2 enforce that the MAE learns a latent representation that is shared with the latent representation of the CAE. Once the MAE is also trained, the path ${\mathcal{E}}_{\mathbf{m}} \rightarrow {\mathcal{D}}_{\mathbf{c}}$ gives the enhanced speech component corresponding to the mixture spectrogram $\mathbf{M}$.
|
| 50 |
+
|
| 51 |
+
### 2.2. Cost-function
|
| 52 |
+
|
| 53 |
+
We now describe the cost-functions used to train our network.
|
| 54 |
+
|
| 55 |
+
#### 2.2.1. Training the CAE
|
| 56 |
+
|
| 57 |
+
As seen earlier, the first step of SSE is to train the CAE and learn a suitable representation for the clean sounds. To achieve this, we train the CAE by minimizing an appropriate measure of discrepancy between the input spectrogram $\mathbf{C}$ and its reconstruction $\widehat{\mathbf{C}}$. Here, we use the ${L2}$ norm of the error: ${\mathcal{L}}_{\mathrm{{CAE}}} = \parallel \mathbf{C} - \widehat{\mathbf{C}}{\parallel }_{2}^{2} + {\lambda }_{1} \cdot {\mathcal{L}}_{\mathrm{{KL}} - \mathrm{{CAE}}}$. Since the network is a VAE, the goal of ${\mathcal{L}}_{\text{KL-CAE}}$ is to learn a latent representation that is close to a zero-mean normal distribution.
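A sketch of this loss, assuming the usual diagonal-Gaussian KL term for a VAE; the value of $\lambda_1$ is our assumption, as it is not reported here.

```python
import torch

def cae_loss(c, c_hat, mu, log_var, lam1=1e-2):
    """L_CAE = ||C - C_hat||_2^2 + lambda_1 * L_KL-CAE."""
    recon = torch.sum((c - c_hat) ** 2)   # squared L2 reconstruction error
    kl = -0.5 * torch.sum(1.0 + log_var - mu.pow(2) - log_var.exp())
    return recon + lam1 * kl
```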
|
| 58 |
+
|
| 59 |
+
#### 2.2.2. Training the MAE
|
| 60 |
+
|
| 61 |
+
Once we train the CAE, we now use the ambient mixture recordings, the CAE and ambient noise recordings to train the MAE. Since the MAE encounters different types of input signals, the cost-functions used to train the MAE can be divided into the following terms.
|
| 62 |
+
|
| 63 |
+
Reconstruction Loss: Given the mixture spectrogram M of a speech signal in the background of ambient noise, we feed $\mathbf{M}$ as an input to the MAE. We train the MAE to reconstruct the mixture spectrogram at its output and produce a reconstruction $\widehat{\mathbf{M}}$ . As before, we use the ${L2}$ norm of the error given by ${\mathcal{L}}_{\mathrm{M}} = \parallel \mathbf{M} - \widehat{\mathbf{M}}{\parallel }_{2}^{2}$ as our cost-function.
|
| 64 |
+
|
| 65 |
+
Cycle Loss: We now describe the cost-function terms used to enforce a shared latent representation between the MAE and the CAE. To achieve this, we use the CAE and incorporate the following cycle-consistency terms into our cost-function. Given a mixture spectrogram $\mathbf{M}$ , let ${\mathbf{h}}_{M}$ denote the corresponding latent representation at the output of the MAE encoder ${\mathcal{E}}_{\mathbf{m}}$ . We can pass the latent representation ${\mathbf{h}}_{M}$ through the CAE decoder ${\mathcal{D}}_{\mathbf{c}}$ to get the clean version of the mixture spectrogram ${\mathbf{C}}_{M}$ . This resulting spectrogram can be mapped back into the latent space through the CAE encoder ${\mathcal{E}}_{\mathbf{c}}$ to get the latent representation ${\widehat{\mathbf{h}}}_{M}$ . This can again be passed through the MAE decoder ${\mathcal{D}}_{\mathbf{m}}$ to get the reconstruction $\widehat{\mathbf{M}}$ . Summarizing these in the form of equations, we now have,
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
{\mathbf{h}}_{M} = {\mathcal{E}}_{\mathbf{m}}\left( \mathbf{M}\right) \;{\mathbf{C}}_{M} = {\mathcal{D}}_{\mathbf{c}}\left( {\mathbf{h}}_{M}\right)
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
$$
|
| 72 |
+
{\widehat{\mathbf{h}}}_{M} = {\mathcal{E}}_{\mathbf{c}}\left( {\mathbf{C}}_{M}\right) \;\widehat{\mathbf{M}} = {\mathcal{D}}_{\mathbf{m}}\left( {\widehat{\mathbf{h}}}_{M}\right)
|
| 73 |
+
$$
|
| 74 |
+
|
| 75 |
+
With these relationships, we now enforce that the cycle reconstruction of the mixture spectrogram $\widehat{\mathbf{M}}$ resembles the input mixture spectrogram $\mathbf{M}$. Likewise, we also enforce that the two latent representations before and after the cycle loop through the CAE are close. Thus, the overall cycle loss term is given as,
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
{\mathcal{L}}_{\text{cyc }} = \parallel \mathbf{M} - \widehat{\mathbf{M}}{\parallel }_{2}^{2} + {\lambda }_{2} \cdot {\begin{Vmatrix}{\mathbf{h}}_{M} - {\widehat{\mathbf{h}}}_{M}\end{Vmatrix}}_{2}^{2} \tag{1}
|
| 79 |
+
$$
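Written out as code, the cycle path and the loss of Eq. (1) look as follows; the encoders and decoders are passed in as callables, and the value of $\lambda_2$ is an assumed weight.

```python
import torch

def cycle_loss(M, enc_m, dec_m, enc_c, dec_c, lam2=1.0):
    h_M = enc_m(M)          # mixture latent
    C_M = dec_c(h_M)        # denoised spectrogram through the clean decoder
    h_hat_M = enc_c(C_M)    # re-encode with the clean encoder
    M_hat = dec_m(h_hat_M)  # cycle reconstruction of the mixture
    return (torch.sum((M - M_hat) ** 2)
            + lam2 * torch.sum((h_M - h_hat_M) ** 2))
```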
|
| 80 |
+
|
| 81 |
+
Noise Example Loss: As we discuss in Section 2.3, one of the advantages of SSE is its ability to autonomously train in an ambient environment and learn to separate speech signals from their noisy backgrounds. To do so, we assume that the model also sees glimpses of the background without any speech signal. Such clips can be easily separated from clips that contain a mixture of speech and background noise using a simple thresholding operation on the energy of the signals. Given a noise input spectrogram ${\mathbf{M}}_{N}$, ${\mathbf{h}}_{N}$ denotes the corresponding latent representation and ${\mathbf{C}}_{N}$ denotes the clean version of the noise spectrogram. The latent representation can be reconstructed through the MAE decoder to get ${\widehat{\mathbf{M}}}_{N}$. As before, we now have the following relationships,
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
{\mathbf{h}}_{N} = {\mathcal{E}}_{\mathbf{m}}\left( {\mathbf{M}}_{N}\right) \;{\mathbf{C}}_{N} = {\mathcal{D}}_{\mathbf{c}}\left( {\mathbf{h}}_{N}\right)
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
$$
|
| 88 |
+
{\widehat{\mathbf{M}}}_{N} = {\mathcal{D}}_{\mathbf{m}}\left( {\mathbf{h}}_{N}\right)
|
| 89 |
+
$$
|
| 90 |
+
|
| 91 |
+
We now enforce that ${\mathbf{M}}_{N}$ and ${\widehat{\mathbf{M}}}_{N}$ are identical and that ${\mathbf{C}}_{N}$ reduces to silence. The overall noise example loss term becomes,
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
{\mathcal{L}}_{\mathrm{N}} = {\begin{Vmatrix}{\mathbf{M}}_{N} - {\widehat{\mathbf{M}}}_{N}\end{Vmatrix}}_{2}^{2} + {\lambda }_{3} \cdot {\begin{Vmatrix}{\mathbf{C}}_{N} - \mathbf{0}\end{Vmatrix}}_{2}^{2} \tag{2}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
Overall MAE Cost-function: The overall cost-function used to train the MAE is a combination of the above loss terms. It also includes a term ${\mathcal{L}}_{\text{KL-MAE}}$ to enforce that the latent representations are close to a zero-mean normal distribution.
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
{\mathcal{L}}_{\mathrm{{MAE}}} = {\mathcal{L}}_{\mathrm{M}} + {\mathcal{L}}_{\mathrm{{cyc}}} + {\mathcal{L}}_{\mathrm{N}} + {\lambda }_{4} \cdot {\mathcal{L}}_{\text{KL-MAE }}
|
| 101 |
+
$$
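Assembling the terms, with assumed values for the unreported $\lambda$ weights:

```python
import torch

def mae_loss(M, M_hat, M_N, M_N_hat, C_N, L_cyc, L_kl, lam3=1.0, lam4=1e-2):
    """L_MAE = L_M + L_cyc + L_N + lambda_4 * L_KL-MAE (Eq. 2 for L_N)."""
    L_M = torch.sum((M - M_hat) ** 2)   # mixture reconstruction
    L_N = torch.sum((M_N - M_N_hat) ** 2) + lam3 * torch.sum(C_N ** 2)
    return L_M + L_cyc + L_N + lam4 * L_kl
```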
|
| 102 |
+
|
| 103 |
+
### 2.3. Advantages of Self-supervision
|
| 104 |
+
|
| 105 |
+
Having seen the network architecture and the cost-functions used, we can now begin to understand the advantages of our proposed SSE approach. We enumerate these advantages below:
|
| 106 |
+
|
| 107 |
+
1. To train our SSE network, we only need access to a small dataset of clean speech examples to train our CAE and ambient mixtures and noise recordings to train our MAE. Thus, we do not require any paired training data unlike supervised speech enhancement methods.
|
| 108 |
+
|
| 109 |
+
2. Once the CAE is trained, the model only relies on mixtures and noise recordings for further training. These recordings can be directly obtained from the place of deployment. Thus, we now have a way of using unseen test mixtures to improve separation performance. This is beneficial particularly when there is a mismatch between the training and deployment environments.
|
| 110 |
+
|
| 111 |
+
3. With this training strategy, we can train our SSE network without any human intervention autonomously to enhance speech signals.
|
| 112 |
+
|
| 113 |
+
4. Once we train the CAE, we do not need access to clean speech examples further. All future training is completely dependent on the pre-trained CAE. When deploying the model in a test location, we need not transport data to different deployment locations. This is particularly advantageous from a security standpoint.
|
| 114 |
+
|
| 115 |
+
5. An added advantage we gain is the reusability of the CAE. The pre-trained CAE can be reused to perform SSE in different speech environments irrespective of the nature of the interfering sounds as seen in our experiments described in Section 3.
|
| 116 |
+
|
| 117 |
+
## 3. Experiments
|
| 118 |
+
|
| 119 |
+
We now present the details of our two experiments to evaluate the performance of our trained SSE model.
|
| 120 |
+
|
| 121 |
+
### 3.1. Experimental Setup
|
| 122 |
+
|
| 123 |
+
To perform SSE using our network, we operate on the magnitude spectrograms of the mixtures and clean sounds. To compute these magnitude spectrograms, we use a window and DFT size of 1024 samples at a hop of 256 samples with a Hann window. The resulting magnitude spectrograms have 513 frequency bins for each frame.
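These settings translate directly into, for example, a librosa call:

```python
import numpy as np
import librosa

def magnitude_spectrogram(y):
    """1024-point STFT with a Hann window and a hop of 256 samples;
    returns 513 frequency bins per frame."""
    S = librosa.stft(y, n_fft=1024, hop_length=256, win_length=1024,
                     window="hann")
    return np.abs(S)   # magnitude only; the phase can be kept for resynthesis
```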
|
| 124 |
+
|
| 125 |
+
The CAE networks used in our experiments consist of a cascade of 1-D convolutional layers. The CAE encoder ${\mathcal{E}}_{c}$ consists of a sequence of 4 1-D convolutional layers where the size of the hidden dimension sequentially decreases from ${513} \rightarrow {512} \rightarrow {256} \rightarrow {128} \rightarrow {64}$. The CAE decoder ${\mathcal{D}}_{c}$ consists of a cascade of 4 transposed convolutional layers where the size of the latent dimensions increases in the reverse order. Thus, the latent space has a dimensionality of 64. We use a stride of 1 sample and a kernel size of 7 for the convolutions. Each convolutional layer is followed by a batch-norm layer and a softplus nonlinearity. In the case of the encoder ${\mathcal{E}}_{c}$, we also add an EQ-norm layer after the softplus nonlinearity. The task of the EQ-norm layer is to compute the mean over all the frames of its input, separately for each input spectrogram in the batch, and subtract it.
|
| 126 |
+
|
| 127 |
+
The architecture of our MAE network follows a similar strategy. The MAE encoder ${\mathcal{E}}_{m}$ comprises 6 1-D convolutional layers where the hidden layer sizes decrease from ${513} \rightarrow {512} \rightarrow {400} \rightarrow {300} \rightarrow {200} \rightarrow {100} \rightarrow {64}$. The MAE decoder aims to invert this operation and consists of 1-D transposed convolutions whose hidden layer sizes increase in the reverse order. As before, we use a stride of 1 and a kernel size of 7. Each convolutional layer is succeeded by a batch-norm layer and a softplus activation function. Similar to ${\mathcal{E}}_{c}$, the MAE encoder ${\mathcal{E}}_{m}$ also includes an EQ-norm layer after the softplus nonlinearity.
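A sketch of the MAE encoder stack under this description; the padding choice is ours, and placing a single EQ-norm layer at the end of the cascade is one reading of the text.

```python
import torch
import torch.nn as nn

class EQNorm(nn.Module):
    """Subtracts, per spectrogram, the mean over frames from every frame."""
    def forward(self, x):                    # x: (batch, channels, frames)
        return x - x.mean(dim=-1, keepdim=True)

def make_encoder(dims=(513, 512, 400, 300, 200, 100, 64)):
    """1-D convolutions (kernel 7, stride 1) with batch norm and softplus."""
    layers = []
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        layers += [nn.Conv1d(d_in, d_out, kernel_size=7, stride=1, padding=3),
                   nn.BatchNorm1d(d_out), nn.Softplus()]
    layers.append(EQNorm())
    return nn.Sequential(*layers)

# e.g. make_encoder()(torch.randn(8, 513, 100)) has shape (8, 64, 100)
```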
|
| 128 |
+
|
| 129 |
+
To evaluate the SSE model, we use Perceptual Evaluation of Speech Quality (PESQ) (Rix et al., 2001) and composite metrics that approximate the Mean Opinion Score (MOS) including CSIG: predictor of signal distortion, CBAK: predictor of background intrusiveness, and COVL: predictor of overall speech quality (Hu & Loizou, 2008).
|
| 130 |
+
|
| 131 |
+
### 3.2. Datasets
|
| 132 |
+
|
| 133 |
+
#### 3.2.1. Experiment 1: DAPS
|
| 134 |
+
|
| 135 |
+
The first experiment is aimed at evaluating the performance of our SSE model on real recordings taken in indoor ambient environments. For this experiment, we use the Device And Produced Speech (DAPS) dataset (Mysore, 2014). The dataset consists of real-world recordings of speech taken in environments like bedrooms, offices, conference rooms and living rooms, which contribute the overlapping ambient noise in the recordings. The dataset consists of 10 male and 10 female speakers, each reading out 5 scripts. Each of these 100 recordings is available in a clean format and also in noisy environments. We divide the scripts into 3 disjoint segments: clean, mix and test. Similarly, the speakers are also divided into 3 disjoint segments: clean, mix and test. The scripts and speakers from the clean segments are used to train the CAE; the mix and test segments are used to train the MAE and evaluate the model respectively. Such a bifurcation leads to a completely different set of speech examples and speakers across the 3 segments. We choose these speakers and scripts randomly and ensure that the male and female speakers are distributed evenly across the segments.
|
| 136 |
+
|
| 137 |
+
#### 3.2.2. Experiment 2: BBC Sound Effects
|
| 138 |
+
|
| 139 |
+
The second experiment evaluates our SSE model on ambient street noise from the BBC Sound Effects dataset (BBC, 2015). For the speech signals, we use the signals from the DAPS dataset: the speech clips from the clean segment train the CAE and the speech clips from the mixture segment train the MAE. Mixture audios are composed by mixing the clean speech sounds with ambient noises from two cities (London and Paris) at 2 SNR settings (5 and 10 dB) each. For each city, we choose 10 ambient noise files which add up to approximately 45 minutes of noise. The same noise files are used to produce the mix and test segments. We emphasize that the network has never encountered mixtures of the test speakers or their utterances with the noise files used during training.
|
| 140 |
+
|
| 141 |
+
<table><tr><td rowspan="2">Environment</td><td colspan="4">PESQ</td><td colspan="4">CSIG</td><td colspan="4">CBAK</td><td colspan="4">COVL</td></tr><tr><td>SS</td><td>0%</td><td>30%</td><td>50%</td><td>ss</td><td>0%</td><td>30%</td><td>50%</td><td>SS</td><td>0%</td><td>30%</td><td>50%</td><td>SS</td><td>0%</td><td>30%</td><td>50%</td></tr><tr><td>ipad_livingroom 1</td><td>1.30</td><td>1.43</td><td>1.43</td><td>1.47</td><td>1.65</td><td>2.50</td><td>2.46</td><td>2.25</td><td>1.56</td><td>1.82</td><td>1.88</td><td>1.98</td><td>1.32</td><td>1.91</td><td>1.89</td><td>1.80</td></tr><tr><td>ipad_bedroom1</td><td>1.37</td><td>1.49</td><td>1.51</td><td>1.52</td><td>1.56</td><td>2.53</td><td>2.31</td><td>2.34</td><td>1.56</td><td>1.89</td><td>1.98</td><td>1.96</td><td>1.30</td><td>1.96</td><td>1.86</td><td>1.88</td></tr><tr><td>ipad_confroom 1</td><td>1.37</td><td>1.52</td><td>1.59</td><td>1.59</td><td>1.62</td><td>2.67</td><td>2.32</td><td>2.28</td><td>1.66</td><td>1.93</td><td>2.01</td><td>2.06</td><td>1.35</td><td>2.04</td><td>1.91</td><td>1.89</td></tr><tr><td>ipad_office1</td><td>1.22</td><td>1.37</td><td>1.39</td><td>1.37</td><td>1.46</td><td>2.32</td><td>2.15</td><td>1.84</td><td>1.40</td><td>1.83</td><td>1.86</td><td>1.85</td><td>1.17</td><td>1.78</td><td>1.71</td><td>1.53</td></tr><tr><td>ipad_office2</td><td>1.33</td><td>1.37</td><td>1.33</td><td>1.42</td><td>1.52</td><td>2.46</td><td>2.23</td><td>2.39</td><td>1.44</td><td>1.76</td><td>1.71</td><td>1.91</td><td>1.25</td><td>1.84</td><td>1.71</td><td>1.85</td></tr><tr><td>ipadflat_confroom1</td><td>1.45</td><td>1.38</td><td>1.47</td><td>1.54</td><td>1.36</td><td>2.20</td><td>2.33</td><td>2.25</td><td>1.64</td><td>1.74</td><td>1.93</td><td>2.00</td><td>1.22</td><td>1.71</td><td>1.84</td><td>1.85</td></tr><tr><td>ipadflat_office1</td><td>1.26</td><td>1.35</td><td>1.36</td><td>1.40</td><td>1.15</td><td>2.46</td><td>2.10</td><td>1.82</td><td>1.42</td><td>1.85</td><td>1.84</td><td>1.91</td><td>1.06</td><td>1.85</td><td>1.67</td><td>1.54</td></tr><tr><td>iphone_livingroom 1</td><td>1.38</td><td>1.30</td><td>1.42</td><td>1.40</td><td>1.24</td><td>2.11</td><td>2.27</td><td>2.09</td><td>1.57</td><td>1.78</td><td>1.85</td><td>1.90</td><td>1.14</td><td>1.64</td><td>1.79</td><td>1.69</td></tr><tr><td>iphone_bedroom1</td><td>1.43</td><td>1.33</td><td>1.43</td><td>1.47</td><td>1.13</td><td>2.14</td><td>2.13</td><td>1.88</td><td>1.58</td><td>1.79</td><td>1.91</td><td>1.93</td><td>1.08</td><td>1.68</td><td>1.73</td><td>1.62</td></tr></table>
|
| 142 |
+
|
| 143 |
+
Table 1. DAPS experiment results. We compare the results of our SSE model with those of spectral subtraction (SS). We consider three versions of our SSE model based on the amount of pure noise examples seen by the model during training, viz. 0%, 30% and 50% as a percentage of the training data. Higher scores are better for all metrics. We see that our SSE models consistently outperform SS on all the metrics. In addition, increasing the noise percentage also improves the quality of the extracted speech signal and the suppression of the interfering noises.
|
| 144 |
+
|
| 145 |
+
<table><tr><td rowspan="2">City</td><td rowspan="2">SNR (dB)</td><td colspan="4">PESQ</td><td colspan="4">CSIG</td><td colspan="4">CBAK</td><td colspan="4">COVL</td></tr><tr><td>Mixture</td><td>0%</td><td>30%</td><td>50%</td><td>Mixture</td><td>0%</td><td>30%</td><td>50%</td><td>Mixture</td><td>0%</td><td>30%</td><td>50%</td><td>Mixture</td><td>0%</td><td>30%</td><td>50%</td></tr><tr><td rowspan="2">London</td><td>5</td><td>1.09</td><td>1.32</td><td>1.31</td><td>1.36</td><td>1.96</td><td>2.02</td><td>2.03</td><td>1.97</td><td>1.69</td><td>1.99</td><td>2.06</td><td>2.13</td><td>1.43</td><td>1.58</td><td>1.61</td><td>1.58</td></tr><tr><td>10</td><td>1.18</td><td>1.52</td><td>1.59</td><td>1.60</td><td>2.41</td><td>2.44</td><td>2.49</td><td>2.48</td><td>1.99</td><td>2.26</td><td>2.41</td><td>2.48</td><td>1.73</td><td>1.92</td><td>1.98</td><td>1.94</td></tr><tr><td rowspan="2">Paris</td><td>5</td><td>1.09</td><td>1.21</td><td>1.23</td><td>1.22</td><td>1.77</td><td>1.83</td><td>1.92</td><td>1.87</td><td>1.69</td><td>1.90</td><td>1.97</td><td>1.98</td><td>1.29</td><td>1.42</td><td>1.46</td><td>1.43</td></tr><tr><td>10</td><td>1.18</td><td>1.48</td><td>1.48</td><td>1.50</td><td>2.03</td><td>2.23</td><td>2.22</td><td>2.28</td><td>1.98</td><td>2.19</td><td>2.28</td><td>2.34</td><td>1.53</td><td>1.79</td><td>1.81</td><td>1.79</td></tr></table>
|
| 146 |
+
|
| 147 |
+
Table 2. BBC experiment results. Similar to the DAPS experiment, we compare the results of our SSE model at three different noise percentages: $0\%$, ${30}\%$ and ${50}\%$. Considering the significant presence of non-stationary sounds in street noise recordings, we do not use spectral subtraction as our baseline method; instead we report the metric values for the mixtures for comparison. As before, increasing the percentage of pure noise examples enhances the noise suppression (as seen by the CBAK scores) and the quality of the extracted speech (PESQ).
|
| 148 |
+
|
| 149 |
+
### 3.3. Results and Discussion
|
| 150 |
+
|
| 151 |
+
Table 1 presents the results of our experiments on the DAPS dataset. We use spectral subtraction (SS) as our baseline method and compare it with three versions of our SSE model (based on the percentage of pure noise recordings encountered during training). We observe a consistent improvement in performance over SS in all the metrics, and the model also improves as it comes across a higher percentage of pure noise sounds. The environments livingroom1 and office1 are relatively more reverberant compared to the other environments. Via informal listening tests, we observed that the final results are dereverberated as well. Thus, we can potentially use this training strategy for other allied tasks like dereverberation or bandwidth extension.
|
| 152 |
+
|
| 153 |
+
Table 2 presents the results of our SSE experiments on the BBC dataset. Since the BBC noise recordings include non-stationary sounds from the streets, we compare the SSE models with the mixture metrics. We also observe that the performance improvement is greater in the case of mixtures with a higher signal-to-noise ratio. As before, a higher noise percentage further improves SSE performance.
|
| 154 |
+
|
| 155 |
+
## 4. Conclusion
|
| 156 |
+
|
| 157 |
+
In this paper we developed and investigated the idea of self-supervision in a single-channel speech enhancement setup. To accomplish this, we first trained an autoencoder on clean speech signals and learned an appropriate latent representation. This latent representation was then used in a downstream speech enhancement task to train an autoencoder for noisy speech mixtures so that the two autoencoders shared their latent spaces. This allowed us to map the domain of noisy speech mixtures to the domain of clean sounds autonomously and without clean targets. Our experiments demonstrate the efficacy of our training approach in ambient indoor environments and in the presence of street noises.
|
| 158 |
+
|
| 159 |
+
## References
|
| 160 |
+
|
| 161 |
+
BBC sound effects library, 2015. URL http://www.sound-ideas.com/sound-effects/bbc-sound-effects.html.
|
| 162 |
+
|
| 163 |
+
Doersch, C. and Zisserman, A. Multi-task self-supervised visual learning. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2051-2060, 2017.
|
| 164 |
+
|
| 165 |
+
Hu, Y. and Loizou, P. C. Evaluation of objective quality measures for speech enhancement. IEEE Transactions on Audio, Speech, and Language Processing, 16(1):229-238, 2008.
|
| 166 |
+
|
| 167 |
+
Kolesnikov, A., Zhai, X., and Beyer, L. Revisiting self-supervised visual representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, pp. 1920-1929, 2019.
|
| 168 |
+
|
| 169 |
+
Kong, Q., Wang, Y., Song, X., Cao, Y., Wang, W., and Plumbley, M. D. Source separation with weakly labelled data: An approach to computational auditory scene analysis. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 101-105. IEEE, 2020.
|
| 170 |
+
|
| 171 |
+
Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.
|
| 172 |
+
|
| 173 |
+
Liu, M.-Y., Breuel, T., and Kautz, J. Unsupervised image-to-image translation networks. In Advances in neural information processing systems, pp. 700-708, 2017.
|
| 174 |
+
|
| 175 |
+
Michelashvili, M., Benaim, S., and Wolf, L. Semi-supervised monaural singing voice separation with a masking network trained on synthetic mixtures. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 291-295. IEEE, 2019.
|
| 176 |
+
|
| 177 |
+
Mysore, G. J. Can we automatically transform speech recorded on common consumer devices in real-world environments into professional production quality speech?—a dataset, insights, and challenges. IEEE Signal Processing Letters, 22(8):1006-1010, 2014.
|
| 178 |
+
|
| 179 |
+
Pascual, S., Bonafonte, A., and Serra, J. Segan: Speech enhancement generative adversarial network. arXiv preprint arXiv:1703.09452, 2017.
|
| 180 |
+
|
| 181 |
+
Pascual, S., Ravanelli, M., Serrà, J., Bonafonte, A., and Bengio, Y. Learning problem-agnostic speech representations from multiple self-supervised tasks. Proc. Interspeech 2019, pp. 161-165, 2019.
|
| 182 |
+
|
| 183 |
+
Pishdadian, F., Wichern, G., and Le Roux, J. Learning to separate sounds from weakly labeled scenes. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 91-95. IEEE, 2020.
|
| 184 |
+
|
| 185 |
+
Ravanelli, M. and Bengio, Y. Learning speaker representations with mutual information. Proc. Interspeech 2019, pp. 1153-1157, 2019.
|
| 186 |
+
|
| 187 |
+
Rix, A. W., Beerends, J. G., Hollier, M. P., and Hekstra, A. P. Perceptual evaluation of speech quality (PESQ)-a new method for speech quality assessment of telephone networks and codecs. In 2001 IEEE International Conference on Acoustics, Speech, and Signal Processing. Proceedings, volume 2, pp. 749-752 vol.2, 2001.
|
| 188 |
+
|
| 189 |
+
Stoller, D., Ewert, S., and Dixon, S. Adversarial semi-supervised audio source separation applied to singing voice extraction. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2391-2395. IEEE, 2018.
|
| 190 |
+
|
| 191 |
+
Venkataramani, S., Tzinis, E., and Smaragdis, P. A style transfer approach to source separation. In 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 170-174. IEEE, 2019.
|
| 192 |
+
|
| 193 |
+
Weninger, F., Erdogan, H., Watanabe, S., Vincent, E., Le Roux, J., Hershey, J. R., and Schuller, B. Speech enhancement with lstm recurrent neural networks and its application to noise-robust asr. In International Conference on Latent Variable Analysis and Signal Separation, pp. 91-99. Springer, 2015.
|
| 194 |
+
|
| 195 |
+
Xu, Y., Du, J., Dai, L.-R., and Lee, C.-H. An experimental study on speech enhancement based on deep neural networks. IEEE Signal Processing Letters, 21(1):65-68, 2013.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/DxtEfUpf2q7/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,206 @@
| 1 |
+
§ SELF-SUPERVISED LEARNING FOR SPEECH ENHANCEMENT
|
| 2 |
+
|
| 3 |
+
Yu-Che Wang ${}^{*1}$ Shrikant Venkataramani ${}^{*1}$ Paris Smaragdis ${}^{1,2,3}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
Supervised learning for single-channel speech enhancement requires carefully labeled training examples where the noisy mixture is input into the network and the network is trained to produce an output close to the ideal target. To relax the conditions on the training data, we consider the task of training speech enhancement networks in a self-supervised manner. We first use a limited training set of clean speech sounds and learn a latent representation by autoencoding on their magnitude spectrograms. We then autoencode on speech mixtures recorded in noisy environments and train the resulting autoencoder to share a latent representation with the clean examples. We show that using this training schema, we can now map noisy speech to its clean version using a network that is autonomously trainable without requiring labeled training examples or human intervention.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Given a mixture of a speech signal co-occurring in a background of ambient noise, the goal of single-channel speech enhancement is to extract the speech signal in the given mixture. With recent advancements in Neural Networks (NNs) and deep learning, several neural network based approaches have been proposed for single-channel speech enhancement (Xu et al., 2013; Weninger et al., 2015; Pascual et al., 2017). These networks and approaches are predominantly trained in a supervised manner. The noisy mixture signal is fed as an input to the NN. The NN is then trained to estimate the corresponding clean speech signal in the mixture at its output. Thus, to train NNs for supervised speech enhancement, we require access to a vast training set of paired examples of noisy mixtures and their corresponding clean speech versions. As a result, supervised learning approaches to speech enhancement and source separation suffer from the following drawbacks.
|
| 12 |
+
|
| 13 |
+
1. Clean targets can often be difficult or expensive to obtain. For example, bird calls recorded in a forest are often found to be in the presence of interfering sounds like the ones from animals, trees and thunderstorms. Alternatively, machine fault recordings are often taken when the machine is in operation to identify potential damages and it is infeasible to record these sounds in an isolated manner to obtain clean recorded versions.
|
| 14 |
+
|
| 15 |
+
2. These networks cannot be used as stand-alone learning machines that autonomously learn to denoise speech mixtures in ambient recording environments.
|
| 16 |
+
|
| 17 |
+
3. The trained speech enhancement systems can potentially be deployed in previously unseen conditions. Thus, there is a strong possibility of a mismatch between the training and test conditions. In such cases, we do not have the ability to use the recorded test mixtures to improve the performance of our model in the unseen test setting.
|
| 18 |
+
|
| 19 |
+
To relax the constraints of paired training data, a few recent approaches interpret the problem of denoising and source separation as a style-transfer problem wherein the goal is to map from the domain of noisy mixtures to the domain of clean sounds (Stoller et al., 2018; Michelashvili et al., 2019; Venkataramani et al., 2019). These approaches only require a training set of mixtures and a training set of clean sounds, and the clean sounds can be unpaired and unrelated to the mixtures. However, these methods rely on jointly training a pair of autoencoders, one for each domain, in order to learn the mapping, and can be tedious to train. Other approaches have tried to relax the constraints by learning to enhance noisy mixtures in a "weakly supervised" setting. Instead of using representative clean training examples to identify a source, these approaches use alternate techniques for identification. For example, (Kong et al., 2020; Pishdadian et al., 2020) assume that, in addition to the mixtures, we have access to information about when the source we wish to isolate is active in the mixture. Generating the timing information about the activity of the source requires training an event detection network, which relies either on human listening or on clean training examples. Furthermore, these methods cannot be reused if the test conditions do not match the conditions under which the network was trained.
|
| 20 |
+
|
| 21 |
+
*Equal contribution ${}^{1}$ University of Illinois at Urbana-Champaign ${}^{2}$ Adobe Research ${}^{3}$ Supported by NSF grant #1453104. Correspondence to: Yu-Che Wang <yuchecw2@illinois.edu>.
|
| 22 |
+
|
| 23 |
+
Proceedings of the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).
|
| 24 |
+
|
| 25 |
+
To relax constraints on training data, a recent learning paradigm gaining popularity in the fields of computer vision and natural language processing is the idea of self-supervised learning (Kolesnikov et al., 2019; Doersch & Zisserman, 2017; Lan et al., 2019). Instead of constructing large labeled datasets and using them for supervised learning, we use the relationships, correlations and similarities between the training examples to construct the corresponding paired labels for the training set. Thus, we can learn suitable representations and mappings from autonomously labeled training examples. In the case of audio, this strategy has been recently explored to learn unsupervised representations and perform speech recognition, speaker identification and other allied tasks (Pascual et al., 2019; Ravanelli & Bengio, 2019). However, self-supervision and unsupervised representation learning have not been explored for other audio applications including speech enhancement.
|
| 26 |
+
|
| 27 |
+
The goal of this paper is to develop and investigate the use of a self-supervised learning approach for speech denoising. To do so, we assume that we have access to a training set of clean speech examples. We first use these examples to learn a suitable representation for the clean sounds in an unsupervised manner. Thereafter, we use the learned representation along with noisy speech recordings to learn a mapping from the domain of mixtures to the domain of clean sounds. These developments allow us to devise speech enhancement systems that can learn autonomously in noisy ambient environments without human intervention, thereby alleviating the various drawbacks and constraints of supervised speech enhancement networks.
|
| 28 |
+
|
| 29 |
+
§ 2. SELF-SUPERVISION FOR SPEECH ENHANCEMENT
|
| 30 |
+
|
| 31 |
+
As briefly discussed in Section 1, NN based supervised speech enhancement relies on the availability of paired training examples. This imposes several limitations on the trained networks and using self-supervision can relax these constraints. But first, we begin with a description of how we can train our NNs to perform speech enhancement in a self-supervised manner. To identify the source we wish to isolate from the mixtures, we assume that we have access to a dataset of clean sounds that represent the source. For example, if the goal is to isolate human speech from ambient noisy recordings, we assume that we have access to a dataset of a few clean speech examples. These examples can be completely unrelated to the mixture recordings used and contain a completely different set of speakers and utterances.
|
| 32 |
+
|
| 33 |
+
Over the last decade, a popular method for the Self-supervised Speech Enhancement (SSE) problem has been Non-negative Matrix Factorization (NMF). In NMF based methods, the problem of SSE (also known as semi-supervised speech enhancement) is solved as a two-step procedure. In the first step, we perform an NMF decomposition of the clean sounds to learn representative spectral models for the speech signal. In the second step, we iteratively fit these models on unseen noisy speech recordings to isolate the underlying speech component from the ambient noise. However, NMF based SSE requires an iterative fitting procedure for each test example during inference. We improve upon this by training NNs for SSE. To train NNs for SSE, we use a similar two-step approach.
|
| 34 |
+
|
| 35 |
+
|
| 36 |
+
|
| 37 |
+
Figure 1. Block diagram of our self-supervised speech enhancement system. We first train the CAE to learn a latent representation for the clean sounds. We then autoencode on the mixtures and enforce that the MAE shares the latent space with the CAE using our cycle-consistency loss terms. Once both the autoencoders are trained, the diagonal path through ${\mathcal{E}}_{\mathbf{m}}$ and ${\mathcal{D}}_{\mathbf{c}}$ gives the denoised outputs at inference time.
|
| 38 |
+
|
| 39 |
+
1. In step I, we use the clean training examples to learn an unsupervised representation for the clean speech sounds. Essentially, we train an autoencoder NN on the magnitude spectrograms of the clean sounds and learn a suitable representation. We refer to this autoencoder as the Clean AutoEncoder (CAE).
|
| 40 |
+
|
| 41 |
+
2. In step II, we use ambient mixture recordings to train an autoencoder NN on the mixture spectrograms. We refer to this autoencoder as the Mixture AutoEncoder (MAE). The representations learned by the CAE are then used to modify the cost-functions used to train the MAE network so as to learn a shared space between the CAE and MAE representations. This allows us to learn a mapping from the domain of mixtures to the domain of clean sounds without paired training examples.
|
| 42 |
+
|
| 43 |
+
§ 2.1. NETWORK ARCHITECTURE
|
| 44 |
+
|
| 45 |
+
Having described the overall outline of our SSE approach, we now turn to the finer details. Figure 1 shows the block diagram of the proposed SSE approach. The network consists of a pair of Variational AutoEncoders (VAEs) and is motivated by the architecture for unsupervised domain translation (Liu et al., 2017). Here, ${\mathcal{E}}_{\mathbf{c}}$ and ${\mathcal{D}}_{\mathbf{c}}$ denote the encoder and decoder of the CAE respectively. The magnitude spectrogram of the clean speech signal is given as the input to the CAE, and the CAE is trained to reconstruct the input magnitude spectrogram. Once we learn the unsupervised representation, we use ambient noisy mixture recordings and the CAE to train the MAE. ${\mathcal{E}}_{\mathbf{m}}$ and ${\mathcal{D}}_{\mathbf{m}}$ represent the encoder and decoder of the mixture autoencoder. The cost-functions described in Section 2.2 enforce that the MAE learns a latent representation that is shared with the latent representation of the CAE. Once the MAE is also trained, the path ${\mathcal{E}}_{\mathbf{m}} \rightarrow {\mathcal{D}}_{\mathbf{c}}$ gives the enhanced speech component corresponding to the mixture spectrogram $\mathbf{M}$.
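To make the data flow concrete, the following is a minimal sketch (our own illustration, not the authors' code) of the four components and the denoising path used at inference time; `enc_c`, `dec_c`, `enc_m` and `dec_m` are placeholder modules standing in for ${\mathcal{E}}_{\mathbf{c}}$, ${\mathcal{D}}_{\mathbf{c}}$, ${\mathcal{E}}_{\mathbf{m}}$ and ${\mathcal{D}}_{\mathbf{m}}$.

```python
import torch
import torch.nn as nn

# Placeholder encoder/decoder stubs; the actual layer stacks are described in Section 3.1.
enc_c, dec_c = nn.Identity(), nn.Identity()   # CAE: clean-speech encoder / decoder
enc_m, dec_m = nn.Identity(), nn.Identity()   # MAE: mixture encoder / decoder

def denoise(mix_spec: torch.Tensor) -> torch.Tensor:
    """Inference path E_m -> D_c: map a mixture magnitude spectrogram
    to an estimate of the clean-speech magnitude spectrogram."""
    with torch.no_grad():
        h_m = enc_m(mix_spec)      # shared latent representation
        clean_est = dec_c(h_m)     # decode with the *clean* decoder
    return clean_est
```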
|
| 46 |
+
|
| 47 |
+
§ 2.2. COST-FUNCTION
|
| 48 |
+
|
| 49 |
+
We now describe the cost-functions used to train our network.
|
| 50 |
+
|
| 51 |
+
§ 2.2.1. TRAINING THE CAE
|
| 52 |
+
|
| 53 |
+
As seen earlier, the first step of SSE is to train the CAE and learn a suitable representation for the clean sounds. To achieve this, we train the CAE by minimizing an appropriate measure of discrepancy between the input spectrogram $\mathbf{C}$ and its reconstruction $\widehat{\mathbf{C}}$. Here, we use the $L2$ norm of the error, giving ${\mathcal{L}}_{\mathrm{CAE}} = \parallel \mathbf{C} - \widehat{\mathbf{C}}{\parallel }_{2}^{2} + {\lambda }_{1} \cdot {\mathcal{L}}_{\mathrm{KL\text{-}CAE}}$. Since the CAE is a VAE, the term ${\mathcal{L}}_{\mathrm{KL\text{-}CAE}}$ encourages the latent representation to be close to a zero-mean normal distribution.
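As a rough illustration, one CAE update could look like the sketch below. The diagonal-Gaussian posterior, the `(mu, logvar)` encoder interface and the value of ${\lambda }_{1}$ are assumptions made for the example; the paper only specifies the L2 reconstruction term plus a KL term.

```python
import torch
import torch.nn.functional as F

def cae_step(enc_c, dec_c, optimizer, C, lambda1=0.01):
    """One CAE update: L2 reconstruction of the clean spectrogram C plus a KL
    term pulling the latent toward a zero-mean unit Gaussian.
    (lambda1 and the diagonal-Gaussian parameterization are assumptions.)"""
    mu, logvar = enc_c(C)                                  # assumed VAE-style encoder outputs
    z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    C_hat = dec_c(z)
    recon = F.mse_loss(C_hat, C, reduction="sum")          # ||C - C_hat||_2^2
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon + lambda1 * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```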
|
| 54 |
+
|
| 55 |
+
§ 2.2.2. TRAINING THE MAE
|
| 56 |
+
|
| 57 |
+
Once we train the CAE, we now use the ambient mixture recordings, the CAE and ambient noise recordings to train the MAE. Since the MAE encounters different types of input signals, the cost-functions used to train the MAE can be divided into the following terms.
|
| 58 |
+
|
| 59 |
+
Reconstruction Loss: Given the mixture spectrogram M of a speech signal in the background of ambient noise, we feed $\mathbf{M}$ as an input to the MAE. We train the MAE to reconstruct the mixture spectrogram at its output and produce a reconstruction $\widehat{\mathbf{M}}$ . As before, we use the ${L2}$ norm of the error given by ${\mathcal{L}}_{\mathrm{M}} = \parallel \mathbf{M} - \widehat{\mathbf{M}}{\parallel }_{2}^{2}$ as our cost-function.
|
| 60 |
+
|
| 61 |
+
Cycle Loss: We now describe the cost-function terms used to enforce a shared latent representation between the MAE and the CAE. To achieve this, we use the CAE and incorporate the following cycle-consistency terms into our cost-function. Given a mixture spectrogram $\mathbf{M}$ , let ${\mathbf{h}}_{M}$ denote the corresponding latent representation at the output of the MAE encoder ${\mathcal{E}}_{\mathbf{m}}$ . We can pass the latent representation ${\mathbf{h}}_{M}$ through the CAE decoder ${\mathcal{D}}_{\mathbf{c}}$ to get the clean version of the mixture spectrogram ${\mathbf{C}}_{M}$ . This resulting spectrogram can be mapped back into the latent space through the CAE encoder ${\mathcal{E}}_{\mathbf{c}}$ to get the latent representation ${\widehat{\mathbf{h}}}_{M}$ . This can again be passed through the MAE decoder ${\mathcal{D}}_{\mathbf{m}}$ to get the reconstruction $\widehat{\mathbf{M}}$ . Summarizing these in the form of equations, we now have,
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
{\mathbf{h}}_{M} = {\mathcal{E}}_{\mathbf{m}}\left( \mathbf{M}\right) \;{\mathbf{C}}_{M} = {\mathcal{D}}_{\mathbf{c}}\left( {\mathbf{h}}_{M}\right)
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
{\widehat{\mathbf{h}}}_{M} = {\mathcal{E}}_{\mathbf{c}}\left( {\mathbf{C}}_{M}\right) \;\widehat{\mathbf{M}} = {\mathcal{D}}_{\mathbf{m}}\left( {\widehat{\mathbf{h}}}_{M}\right)
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
With these relationships, we now enforce that the cycle reconstruction of the mixture spectrogram $\widehat{\mathbf{M}}$ resembles the input mixture spectrogram M. Likewise, we also enforce that the two latent representations before and after the cycle loop through the CAE are close. Thus, the overall cycle loss term can be given as,
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
{\mathcal{L}}_{\text{ cyc }} = \parallel \mathbf{M} - \widehat{\mathbf{M}}{\parallel }_{2}^{2} + {\lambda }_{2} \cdot {\begin{Vmatrix}{\mathbf{h}}_{M} - {\widehat{\mathbf{h}}}_{M}\end{Vmatrix}}_{2}^{2} \tag{1}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
Noise Example Loss: As we discuss in Section 2.3, one of the advantages of SSE is its ability to autonomously train in an ambient environment and learn to separate speech signals from their noisy backgrounds. To do so, we assume that the model also sees glimpses of the background without any speech signal. Such clips can be easily separated from clips that contain a mixture of speech and background noise using a simple thresholding operation on the energy of the signals. Given a noise input spectrogram ${\mathbf{M}}_{N},{\mathbf{h}}_{N}$ denotes the corresponding latent representation and ${\mathbf{C}}_{N}$ denotes the clean version of the noise spectrogram. The latent representation can be reconstructed through the MAE decoder to get ${\widehat{\mathbf{M}}}_{N}$ . As before, we now have the following relationships,
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
{\mathbf{h}}_{N} = {\mathcal{E}}_{\mathbf{m}}\left( {\mathbf{M}}_{N}\right) \;{\mathbf{C}}_{N} = {\mathcal{D}}_{\mathbf{c}}\left( {\mathbf{h}}_{N}\right)
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
{\widehat{\mathbf{M}}}_{N} = {\mathcal{D}}_{\mathbf{m}}\left( {\mathbf{h}}_{N}\right)
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
We now enforce that ${\mathbf{M}}_{N}$ and ${\widehat{\mathbf{M}}}_{N}$ are identical and that ${\mathbf{C}}_{N}$ reduces to silence. The overall noise example loss term becomes,
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
{\mathcal{L}}_{\mathrm{N}} = {\begin{Vmatrix}{\mathbf{M}}_{N} - {\widehat{\mathbf{M}}}_{N}\end{Vmatrix}}_{2}^{2} + {\lambda }_{3} \cdot {\begin{Vmatrix}{\mathbf{C}}_{N} - \mathbf{0}\end{Vmatrix}}_{2}^{2} \tag{2}
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
Overall MAE Cost-function: The overall cost-function used to train the MAE is a combination of the above loss terms. It also includes a term ${\mathcal{L}}_{\text{KL-MAE}}$ to enforce that the latent representations are close to a zero-mean normal distribution.
|
| 94 |
+
|
| 95 |
+
$$
|
| 96 |
+
{\mathcal{L}}_{\mathrm{{MAE}}} = {\mathcal{L}}_{\mathrm{M}} + {\mathcal{L}}_{\mathrm{{cyc}}} + {\mathcal{L}}_{\mathrm{N}} + {\lambda }_{4} \cdot {\mathcal{L}}_{\text{ KL-MAE }}
|
| 97 |
+
$$
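A condensed sketch of one MAE update under this combined cost-function follows. For readability the encoders are treated here as deterministic maps (e.g., the posterior mean), the CAE is frozen, and the $\lambda$ values and the `kl_term` hook are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def mae_step(enc_m, dec_m, enc_c, dec_c, optimizer, M, M_noise,
             lam2=1.0, lam3=1.0, lam4=0.01, kl_term=None):
    """One MAE update combining L_M, L_cyc and L_N; the pre-trained CAE
    (enc_c, dec_c) is kept fixed."""
    for p in list(enc_c.parameters()) + list(dec_c.parameters()):
        p.requires_grad_(False)                              # CAE stays frozen

    # L_M: plain autoencoding of the mixture spectrogram
    h_M = enc_m(M)
    L_M = F.mse_loss(dec_m(h_M), M, reduction="sum")

    # L_cyc: mixture latent -> clean decoder -> clean encoder -> mixture decoder
    C_M = dec_c(h_M)
    h_M_hat = enc_c(C_M)
    M_cyc = dec_m(h_M_hat)
    L_cyc = (F.mse_loss(M_cyc, M, reduction="sum")
             + lam2 * F.mse_loss(h_M_hat, h_M, reduction="sum"))

    # L_N: noise-only clips should reconstruct as noise and decode to silence
    h_N = enc_m(M_noise)
    L_N = (F.mse_loss(dec_m(h_N), M_noise, reduction="sum")
           + lam3 * dec_c(h_N).pow(2).sum())

    loss = L_M + L_cyc + L_N
    if kl_term is not None:                                  # optional KL regularizer on the MAE latent
        loss = loss + lam4 * kl_term(h_M)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```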
|
| 98 |
+
|
| 99 |
+
§ 2.3. ADVANTAGES OF SELF-SUPERVISION
|
| 100 |
+
|
| 101 |
+
Having seen the network architecture and the cost-functions used, we can now begin to understand the advantages of our proposed SSE approach. We enumerate these advantages below:
|
| 102 |
+
|
| 103 |
+
1. To train our SSE network, we only need access to a small dataset of clean speech examples to train our CAE and ambient mixtures and noise recordings to train our MAE. Thus, we do not require any paired training data unlike supervised speech enhancement methods.
|
| 104 |
+
|
| 105 |
+
2. Once the CAE is trained, the model only relies on mixtures and noise recordings for further training. These recordings can be directly obtained from the place of deployment. Thus, we now have a way of using unseen test mixtures to improve separation performance. This is beneficial particularly when there is a mismatch between the training and deployment environments.
|
| 106 |
+
|
| 107 |
+
3. With this training strategy, we can train our SSE network without any human intervention autonomously to enhance speech signals.
|
| 108 |
+
|
| 109 |
+
4. Once we train the CAE, we do not need access to clean speech examples further. All future training is completely dependent on the pre-trained CAE. When deploying the model in a test location, we need not transport data to different deployment locations. This is particularly advantageous from a security standpoint.
|
| 110 |
+
|
| 111 |
+
5. An added advantage we gain is the reusability of the CAE. The pre-trained CAE can be reused to perform SSE in different speech environments irrespective of the nature of the interfering sounds as seen in our experiments described in Section 3.
|
| 112 |
+
|
| 113 |
+
§ 3. EXPERIMENTS
|
| 114 |
+
|
| 115 |
+
We now present the details of our two experiments to evaluate the performance of our trained SSE model.
|
| 116 |
+
|
| 117 |
+
§ 3.1. EXPERIMENTAL SETUP
|
| 118 |
+
|
| 119 |
+
To perform SSE using our network, we operate on the magnitude spectrograms of the mixtures and clean sounds. To compute these magnitude spectrograms, we use a window and DFT size of 1024 samples at a hop of 256 samples with a Hann window. The resulting magnitude spectrograms have 513 frequency bins for each frame.
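For reference, the feature extraction described above can be reproduced in a few lines; librosa is used here as one possible implementation choice, not a requirement of the paper.

```python
import numpy as np
import librosa

def magnitude_spectrogram(wav: np.ndarray) -> np.ndarray:
    """STFT with a 1024-sample Hann window and DFT and a 256-sample hop,
    as described above; returns a (513, frames) magnitude spectrogram."""
    S = librosa.stft(wav, n_fft=1024, hop_length=256, win_length=1024, window="hann")
    return np.abs(S)
```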
|
| 120 |
+
|
| 121 |
+
The CAE used in our experiments consists of cascades of 1D convolutional layers. The CAE encoder ${\mathcal{E}}_{c}$ consists of a sequence of 4 1D convolutional layers where the size of the hidden dimension sequentially decreases from ${513} \rightarrow {512} \rightarrow {256} \rightarrow {128} \rightarrow {64}$. The CAE decoder ${\mathcal{D}}_{c}$ consists of a cascade of 4 transposed convolutional layers where the size of the latent dimensions increases in the reverse order. Thus, the latent space is chosen to have a dimensionality of 64. We use a stride of 1 sample and a kernel size of 7 for the convolutions. Each convolutional layer is followed by a batch-norm layer and a softplus nonlinearity. In the case of the encoder ${\mathcal{E}}_{c}$, we also add an EQ-norm layer after the softplus non-linearity. The task of the EQ-norm layer is to compute the mean over all the frames of its input, separately for each input spectrogram in the batch, and subtract it.
|
| 122 |
+
|
| 123 |
+
The architecture of our MAE network follows a similar strategy. The MAE encoder ${\mathcal{E}}_{m}$ comprises 6 1D convolutional layers where the hidden layer sizes decrease from ${513} \rightarrow {512} \rightarrow {400} \rightarrow {300} \rightarrow {200} \rightarrow {100} \rightarrow {64}$. The MAE decoder inverts this operation and consists of 1D transposed convolutions whose hidden layer sizes increase in the reverse order. As before, we use a stride of 1 and a kernel size of 7, and each convolutional layer is followed by a batch-norm layer and a softplus activation function. Similar to ${\mathcal{E}}_{c}$, the MAE encoder ${\mathcal{E}}_{m}$ also includes an EQ-norm layer after the softplus non-linearity.
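The sketch below illustrates the encoder pattern described above, with the EQ-norm layer implemented as per-utterance mean subtraction over frames; the padding choice and the exact placement of the EQ-norm layer are our assumptions.

```python
import torch
import torch.nn as nn

class EQNorm(nn.Module):
    """Subtract the mean frame (per-channel mean over time), separately for each
    spectrogram in the batch, as described for the EQ-norm layer."""
    def forward(self, x):                      # x: (batch, channels, frames)
        return x - x.mean(dim=-1, keepdim=True)

def conv_block(c_in, c_out):
    # kernel size 7, stride 1; 'same' padding is our assumption
    return nn.Sequential(nn.Conv1d(c_in, c_out, kernel_size=7, stride=1, padding=3),
                         nn.BatchNorm1d(c_out), nn.Softplus())

# CAE encoder: 513 -> 512 -> 256 -> 128 -> 64, followed by the EQ-norm layer
cae_encoder = nn.Sequential(conv_block(513, 512), conv_block(512, 256),
                            conv_block(256, 128), conv_block(128, 64), EQNorm())
```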
|
| 124 |
+
|
| 125 |
+
To evaluate the SSE model, we use Perceptual Evaluation of Speech Quality (PESQ) (Rix et al., 2001) and composite metrics that approximate the Mean Opinion Score (MOS) including CSIG: predictor of signal distortion, CBAK: predictor of background intrusiveness, and COVL: predictor of overall speech quality (Hu & Loizou, 2008).
|
| 126 |
+
|
| 127 |
+
§ 3.2. DATASETS
|
| 128 |
+
|
| 129 |
+
§ 3.2.1. EXPERIMENT 1: DAPS
|
| 130 |
+
|
| 131 |
+
The first experiment evaluates the performance of our SSE model on real recordings taken in indoor ambient environments. For this experiment, we use the Device And Produced Speech (DAPS) dataset (Mysore, 2014). The dataset consists of real-world recordings of speech taken in environments like bedrooms, offices, conference rooms and living rooms, which contribute the overlapping ambient noise in the recordings. The dataset consists of 10 male and 10 female speakers, each reading out 5 scripts. Each of these 100 recordings is available in a clean format and also in noisy environments. We divide the scripts into 3 disjoint segments: clean, mix and test. Similarly, the speakers are also divided into 3 disjoint segments: clean, mix and test. The scripts and speakers from the clean segments are used to train the CAE. The mix and test segments are used to train the MAE and evaluate the model, respectively. Such a bifurcation leads to a completely different set of speech examples and speakers across the 3 segments. We choose these speakers and scripts randomly and ensure that the male and female speakers are distributed evenly across the segments.
|
| 132 |
+
|
| 133 |
+
§ 3.2.2. EXPERIMENT 2: BBC SOUND EFFECTS
|
| 134 |
+
|
| 135 |
+
The second experiment evaluates our SSE model on ambient street noise from the BBC Sound Effects dataset (BBC, 2015). For the speech signals, we use the signals from the DAPS dataset. We use the speech clips from the clean segment to train the CAE and the speech clips from the mixture segment for the MAE. Mixture audios are composed by mixing the clean speech sounds with ambient noises from two cities (London and Paris) at 2 SNR settings (5 and 10 dB) each. For each city, we choose 10 ambient noise files which add up to approximately 45 minutes of noise. The same noise files are used to produce the mix and test segments. We emphasize that the network has never encountered mixtures of the test speakers or their utterances with the noise files used during training.
|
| 136 |
+
|
| 137 |
+
| Environment | PESQ SS | PESQ 0% | PESQ 30% | PESQ 50% | CSIG SS | CSIG 0% | CSIG 30% | CSIG 50% | CBAK SS | CBAK 0% | CBAK 30% | CBAK 50% | COVL SS | COVL 0% | COVL 30% | COVL 50% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ipad_livingroom1 | 1.30 | 1.43 | 1.43 | 1.47 | 1.65 | 2.50 | 2.46 | 2.25 | 1.56 | 1.82 | 1.88 | 1.98 | 1.32 | 1.91 | 1.89 | 1.80 |
| ipad_bedroom1 | 1.37 | 1.49 | 1.51 | 1.52 | 1.56 | 2.53 | 2.31 | 2.34 | 1.56 | 1.89 | 1.98 | 1.96 | 1.30 | 1.96 | 1.86 | 1.88 |
| ipad_confroom1 | 1.37 | 1.52 | 1.59 | 1.59 | 1.62 | 2.67 | 2.32 | 2.28 | 1.66 | 1.93 | 2.01 | 2.06 | 1.35 | 2.04 | 1.91 | 1.89 |
| ipad_office1 | 1.22 | 1.37 | 1.39 | 1.37 | 1.46 | 2.32 | 2.15 | 1.84 | 1.40 | 1.83 | 1.86 | 1.85 | 1.17 | 1.78 | 1.71 | 1.53 |
| ipad_office2 | 1.33 | 1.37 | 1.33 | 1.42 | 1.52 | 2.46 | 2.23 | 2.39 | 1.44 | 1.76 | 1.71 | 1.91 | 1.25 | 1.84 | 1.71 | 1.85 |
| ipadflat_confroom1 | 1.45 | 1.38 | 1.47 | 1.54 | 1.36 | 2.20 | 2.33 | 2.25 | 1.64 | 1.74 | 1.93 | 2.00 | 1.22 | 1.71 | 1.84 | 1.85 |
| ipadflat_office1 | 1.26 | 1.35 | 1.36 | 1.40 | 1.15 | 2.46 | 2.10 | 1.82 | 1.42 | 1.85 | 1.84 | 1.91 | 1.06 | 1.85 | 1.67 | 1.54 |
| iphone_livingroom1 | 1.38 | 1.30 | 1.42 | 1.40 | 1.24 | 2.11 | 2.27 | 2.09 | 1.57 | 1.78 | 1.85 | 1.90 | 1.14 | 1.64 | 1.79 | 1.69 |
| iphone_bedroom1 | 1.43 | 1.33 | 1.43 | 1.47 | 1.13 | 2.14 | 2.13 | 1.88 | 1.58 | 1.79 | 1.91 | 1.93 | 1.08 | 1.68 | 1.73 | 1.62 |

Table 1. DAPS experiment results. We compare the results of our SSE model with those of spectral subtraction (SS). We consider three versions of our SSE model based on the amount of pure noise examples seen by the model during training, viz. 0%, 30% and 50% as a percentage of the training data. Higher scores are better for all metrics. We see that our SSE models consistently outperform SS on all the metrics. In addition, increasing the noise percentage also improves the quality of the extracted speech signal and the suppression of the interfering noises.
|
| 174 |
+
|
| 175 |
+
| City | SNR (dB) | PESQ Mix | PESQ 0% | PESQ 30% | PESQ 50% | CSIG Mix | CSIG 0% | CSIG 30% | CSIG 50% | CBAK Mix | CBAK 0% | CBAK 30% | CBAK 50% | COVL Mix | COVL 0% | COVL 30% | COVL 50% |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| London | 5 | 1.09 | 1.32 | 1.31 | 1.36 | 1.96 | 2.02 | 2.03 | 1.97 | 1.69 | 1.99 | 2.06 | 2.13 | 1.43 | 1.58 | 1.61 | 1.58 |
| London | 10 | 1.18 | 1.52 | 1.59 | 1.60 | 2.41 | 2.44 | 2.49 | 2.48 | 1.99 | 2.26 | 2.41 | 2.48 | 1.73 | 1.92 | 1.98 | 1.94 |
| Paris | 5 | 1.09 | 1.21 | 1.23 | 1.22 | 1.77 | 1.83 | 1.92 | 1.87 | 1.69 | 1.90 | 1.97 | 1.98 | 1.29 | 1.42 | 1.46 | 1.43 |
| Paris | 10 | 1.18 | 1.48 | 1.48 | 1.50 | 2.03 | 2.23 | 2.22 | 2.28 | 1.98 | 2.19 | 2.28 | 2.34 | 1.53 | 1.79 | 1.81 | 1.79 |

Table 2. BBC experiment results. Similar to the DAPS experiment, we compare the results of our SSE model at three different noise percentages, 0%, 30% and 50%. Considering the significant presence of non-stationary sounds in street noise recordings, we do not use spectral subtraction as our baseline method. Instead we report the metric values for the mixtures ("Mix") for comparison. As before, increasing the percentage of pure noise examples enhances the noise suppression (as seen by the CBAK scores) and the quality of the extracted speech (PESQ).
|
| 197 |
+
|
| 198 |
+
§ 3.3. RESULTS AND DISCUSSION
|
| 199 |
+
|
| 200 |
+
Table 1 presents the results of our experiments on the DAPS dataset. We use spectral subtraction (SS) as our baseline method and compare it with three versions of our SSE model (based on the percentage of pure noise recordings encountered during training). We observe a consistent improvement in performance over SS in all the metrics, and the model also improves as it comes across a higher percentage of pure noise sounds. The environments livingroom1 and office1 are relatively more reverberant compared to the other environments. Via informal listening tests, we observed that the final results are dereverberated as well. Thus, we can potentially use this training strategy for other allied tasks like dereverberation or bandwidth extension.
|
| 201 |
+
|
| 202 |
+
Table 2 presents the results of our SSE experiments on the BBC dataset. Since the BBC noise recordings include non-stationary sounds from the streets, we compare the SSE models with the mixture metrics. We also observe that the performance improvement is greater for mixtures with a higher signal-to-noise ratio. As before, a higher noise percentage further improves SSE performance.
|
| 203 |
+
|
| 204 |
+
§ 4. CONCLUSION
|
| 205 |
+
|
| 206 |
+
In this paper we developed and investigated the idea of self-supervision in a single-channel speech enhancement setup. To accomplish this, we first trained an autoencoder on clean speech signals and learned an appropriate latent representation. This latent representation was then used in a downstream speech enhancement task to train an autoencoder for noisy speech mixtures so that the two autoencoders shared their latent spaces. This allowed us to map the domain of noisy speech mixtures to the domain of clean sounds autonomously and without clean targets. Our experiments demonstrate the efficacy of our training approach in ambient indoor environments and in the presence of street noise.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/Jc65VYwYVB/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,203 @@
| 1 |
+
# Understanding Self-Attention of Self-Supervised Audio Transformers
|
| 2 |
+
|
| 3 |
+
Shu-wen Yang ${}^{1}$ Andy T. Liu ${}^{12}$ Hung-yi Lee ${}^{12}$
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
Self-supervised Audio Transformers (SAT) enable great success in many downstream speech applications like ASR, but how they work has not been widely explored yet. In this work, we present multiple strategies for the analysis of attention mechanisms in SAT. We categorize attentions into explainable categories, where we discover each category possesses its own unique functionality. We provide a visualization tool for understanding multi-head self-attention, importance ranking strategies for identifying critical attention, and attention refinement techniques to improve model performance.
|
| 8 |
+
|
| 9 |
+
## 1. Introduction
|
| 10 |
+
|
| 11 |
+
Adapting the idea of self-supervised learning (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019) to continuous speech has received much attention in recent work (Liu et al., 2020; Jiang et al., 2019; Song et al., 2019; Wang et al., 2020; Baevski et al., 2019b;a), where Transformer Encoders with multi-head self-attention (Vaswani et al., 2017) are pre-trained on a large amount of audio data in a self-supervised scheme. Once pre-trained, they are used to improve various downstream supervised tasks, including phone classification, speaker recognition, SLU, and ASR. Despite the great success of these Self-supervised Audio Transformers ${\left( \mathrm{{SAT}}\right) }^{1}$ , their internal attention are often neglected and not explored, as we have little understanding of how they work, or the knowledge they acquire from a large amount of unlabeled data. Understanding how SAT models draw conclusions is crucial for both their improvement and application. In the area of natural language processing (NLP), explaining and interpreting pre-trained black-box models like BERT have been a well-explored topic (Aken et al., 2020; Hao et al., 2019; Kovaleva et al., 2019; Clark et al., 2019; Tenney et al., 2019a;b). However, the analysis of models that are pre-trained on speech has not seen such widespread exploration, and remains an important and challenging endeavor for the speech community.
|
| 12 |
+
|
| 13 |
+
In this work, we propose to analyze the multi-head self-attention mechanism of SAT through the following methods: visualization, categorization, functionality study, and importance ranking. We found that the self-attentions of SAT models tend to converge into three categories: global attentions, vertical attentions, and diagonal attentions. Diagonal attentions either highly attend to $\pm t$ neighbor or are highly correlated with phoneme boundaries; vertical attentions often concentrate on specific phonemes. As for noisy global attentions, we provide a visualization tool to draw insights about their implicit operations. Through our quantized ranking analysis, we conclude that diagonal attentions outrank the most in terms of importance, followed by vertical attentions. Last but not least, we introduce attention refinement methods which allow us to improve learned representations by partially removing global attentions or constraining attention span, resulting in a faster inference time and higher performance.
|
| 14 |
+
|
| 15 |
+
## 2. Self-Supervised Audio Transformers
|
| 16 |
+
|
| 17 |
+
The main ideology of NLP BERT pre-training (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019) is to corrupt the input word tokens by randomly masking or permuting them with a probability policy, layers of Transformer Encoder (Vaswani et al., 2017) are trained together with a classifier that estimates the masked words at the output. Primarily inspired by this idea, previous works (Liu et al., 2020; Song et al., 2019; Jiang et al., 2019; Wang et al., 2020; Baevski et al., 2019b) proposed self-supervised learning for audio with Transformer Encoders. In this work, we refer to these types of models as Self-Supervised Audio Transformers, SAT. Unlike BERT where the inputs are discrete text tokens, the inputs of SATs are acoustic features (e.g., MFCC, FBANK, Mel-Spectrogram), which form much longer sequences and could be extremely similar to their neighbor features since speech signal is continuously varying. Some SATs take continuous acoustic features as input directly (Liu et al., 2020; Song et al., 2019), while some conduct vector quantization in advance (Baevski et al., 2019b;a). Also, different from BERT where the model is trained by estimating discrete tokens, SATs change to minimize reconstruction error between the real frame and the predicted frame (Liu et al., 2020; Jiang et al., 2019) or classification error for the real frame among sampled distracting frames (Baevski et al., 2019b;a).
|
| 18 |
+
|
| 19 |
+
---
|
| 20 |
+
|
| 21 |
+
${}^{1}$ College of Electrical Engineering and Computer Science, National Taiwan University ${}^{2}$ Graduate Institute of Communication Engineering, National Taiwan University. Correspondence to: Shu-wen Yang <r08944041@ntu.edu.tw>, Andy T. Liu <f07942089@ntu.edu.tw>, Hung-yi Lee <hungy-ilee@ntu.edu.tw>.
|
| 22 |
+
|
| 23 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 24 |
+
|
| 25 |
+
${}^{1}$ These pre-trained transformer encoders have several different names in their original papers. In this paper we refer to them as SAT for simplicity.
|
| 26 |
+
|
| 27 |
+
---
|
| 28 |
+
|
| 29 |
+
Among all the variants of SATs, we focus on SATs that take continuous acoustic features as input with a reconstruction loss. In our analysis, we particularly follow the framework described in Mockingjay (Liu et al., 2020). In Mockingjay, two techniques, downsampling and consecutive masking, are introduced to resolve these issues. Downsampling is applied to the input features to adapt SATs to long sequences: to reduce the number of frames by a factor of ${R}_{\text{factor}}$, every ${R}_{\text{factor}}$ consecutive frames are reshaped and stacked into one frame (Sperber et al., 2018; Pham et al., 2019). On the other hand, consecutive masking is applied during pre-training to prevent the model from exploiting the local smoothness of acoustic frames: instead of masking a single frame, ${C}_{num}$ consecutive frames are masked to zero. To study the attentions of SAT models, we use the prevailing framework of the LARGE model described in Mockingjay, which consists of 12 layers of Transformer Encoders. We train three models on the LibriSpeech (Panayotov et al., 2015) train-clean-360 subset with settings identical to Mockingjay, except for ${C}_{\text{num}} \in \{3, 6, 9\}$, and we name them M3, M6, M9.
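As an illustration of these two techniques on a `(frames, dim)` feature matrix, a minimal sketch follows; the function names, the masking probability `p` and the policy for sampling mask positions are our assumptions, not Mockingjay's exact recipe.

```python
import numpy as np

def downsample(x: np.ndarray, r_factor: int) -> np.ndarray:
    """Stack every r_factor consecutive frames into one frame:
    (T, d) -> (T // r_factor, d * r_factor). Trailing frames are dropped."""
    T = (x.shape[0] // r_factor) * r_factor
    return x[:T].reshape(T // r_factor, -1)

def consecutive_mask(x: np.ndarray, c_num: int, p: float = 0.15, rng=None) -> np.ndarray:
    """Zero out blocks of c_num consecutive frames; mask start positions are
    sampled so that roughly a fraction p of frames is masked (p is our choice)."""
    rng = rng or np.random.default_rng()
    x = x.copy()
    n_starts = max(1, min(x.shape[0], int(p * x.shape[0] / c_num)))
    for s in rng.choice(x.shape[0], size=n_starts, replace=False):
        x[s:s + c_num] = 0.0
    return x
```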
|
| 30 |
+
|
| 31 |
+
## 3. Notations
|
| 32 |
+
|
| 33 |
+
We first define notation for self-attention mechanism and SAT representations. Given a length $T$ sequence of vectors $\mathbf{x} = {x}_{1},\ldots ,{x}_{T} \in {\mathbb{R}}^{d}$ , we denote ${A}_{u}^{h} \in {\mathbb{R}}^{T \times T}$ as attention weights for all query-key pairs of a head $h$ when propagating an utterance $u$ . Hence, ${A}_{u}^{h}\left\lbrack {q, k}\right\rbrack \in \mathbb{R}$ is the attention weight of ${x}_{q}$ attending to ${x}_{k}$ . We use $q$ for timestamp of query; $k$ for timestamp of key, where $1 \leq q, k \leq T$ . As a result, ${A}_{u}^{h}\left\lbrack q\right\rbrack \in$ ${\mathbb{R}}^{T}$ is the attention distribution formed by ${x}_{q}$ , which is a row if we view ${A}_{u}^{h}$ as a map. When analyzing the representations of a $L$ -layer SAT, we denote ${\mathbf{x}}^{l} = {x}_{1}^{l},\ldots ,{x}_{T}^{l} \in {\mathbb{R}}^{d}$ as the representations of a given layer, where $0 \leq l \leq L$ and ${\mathbf{x}}^{\mathbf{0}}$ represents input features.
|
| 34 |
+
|
| 35 |
+
## 4. Visualization and Categorization
|
| 36 |
+
|
| 37 |
+
We plot ${A}_{u}^{h} \in {\mathbb{R}}^{T \times T}$ as an attention map, where ${A}_{u}^{h}\left\lbrack {0,0}\right\rbrack$ starts from the upper-left corner, as in Fig 1${}^{2}$. SAT attentions tend to converge into three categories: (1) global: flat attention distributions; (2) vertical: attention maps with vertical lines; and (3) diagonal: attention maps with a clear diagonal. Because the attention maps of a head are similar across utterances with respect to the three categories, we study self-attention on the basis of heads instead of single attention maps. To classify heads into the three categories, we define three metrics to quantify a head $h$: globalness $G$, verticality $V$ and diagonality $D$, in Equations 1, 2 and 3, respectively.
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
G\left( h\right) = \underset{u \sim U}{\mathbb{E}}\left\lbrack {\frac{1}{T}\mathop{\sum }\limits_{{q = 1}}^{T}\mathbb{H}\left( {{A}_{u}^{h}\left\lbrack q\right\rbrack }\right) }\right\rbrack \tag{1}
|
| 41 |
+
$$
|
| 42 |
+
|
| 43 |
+
$$
|
| 44 |
+
V\left( h\right) = \underset{u \sim U}{\mathbb{E}}\left\lbrack {-\mathbb{H}\left( {\frac{1}{T}\mathop{\sum }\limits_{{q = 1}}^{T}{A}_{u}^{h}\left\lbrack q\right\rbrack }\right) }\right\rbrack \tag{2}
|
| 45 |
+
$$
|
| 46 |
+
|
| 47 |
+
$$
|
| 48 |
+
D\left( h\right) = \underset{u \sim U}{\mathbb{E}}\left\lbrack {-\frac{1}{{T}^{2}}\mathop{\sum }\limits_{{q = 1}}^{T}\mathop{\sum }\limits_{{k = 1}}^{T}\left| {q - k}\right| \cdot {A}_{u}^{h}\left\lbrack {q, k}\right\rbrack }\right\rbrack \tag{3}
|
| 49 |
+
$$
|
| 50 |
+
|
| 51 |
+

|
| 52 |
+
|
| 53 |
+
Figure 1. Attention maps of heads favored by G, V, D, visualized with the same utterance. (a)(c)(e) are average cases; (b)(d)(f) are extreme cases found by maximizing the metrics.
|
| 54 |
+
|
| 55 |
+
where $\mathbb{H}$ is the standard definition of entropy, and $U$ is a speech corpus. Based on G, V, and D, we obtain three ranking lists over all heads. If, among the three lists, a head ranks highest in the G list, it is categorized as global, and so on. We use rankings instead of raw values because the metrics may not share the same numerical scale. Fig 1 shows two attention maps for each category.
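For a single utterance with attention map `A` (shape $T \times T$, rows summing to one), Equations 1-3 translate directly into the sketch below; averaging over the corpus $U$ is omitted.

```python
import numpy as np

def entropy(p, eps=1e-12):
    return -np.sum(p * np.log(p + eps), axis=-1)

def globalness(A):   # Eq. 1: mean entropy of the per-query attention distributions
    return entropy(A).mean()

def verticality(A):  # Eq. 2: negative entropy of the query-averaged distribution
    return -entropy(A.mean(axis=0))

def diagonality(A):  # Eq. 3: negative mean |q - k| weighted by attention
    T = A.shape[0]
    q, k = np.meshgrid(np.arange(T), np.arange(T), indexing="ij")
    return -(np.abs(q - k) * A).sum() / T**2
```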
|
| 56 |
+
|
| 57 |
+
Diagonal attentions attend to local neighbors for every query. Some exhibit a highly focused behavior like Fig 1(f) and some are block diagonal like Fig 1(e). Interestingly, no SAT contains highly focused diagonal attention on the main diagonal. They shift either to the left or right, and a larger masking span ${C}_{num}$ is accompanied by a larger shift, possibly because SAT models try to get useful information from frames further away. The functionality of block diagonal attentions is discussed in Section 5. Vertical attentions like Fig 1(c)(d) always attend to similar locations for all queries given an utterance; global attentions like Fig 1(a)(b) behave randomly. These two categories are discussed in Section 6. Finally, we visualize the head distribution${}^{2}$ according to the metrics and find that the model trained with a larger masking span ${C}_{\text{num}}$ has more global heads. On the contrary, M3 contains the most diagonal heads, suggesting that a smaller ${C}_{num}$ makes SAT focus more on local structure.
|
| 58 |
+
|
| 59 |
+
---
|
| 60 |
+
|
| 61 |
+
${}^{2}$ Supplementary materials: https://github.com/ leo19941227/Self-Attention-on-SATs
|
| 62 |
+
|
| 63 |
+
---
|
| 64 |
+
|
| 65 |
+

|
| 66 |
+
|
| 67 |
+
Figure 2. Four images on the left side are plotted with the same utterance. (a) a block diagonal attention map. (b) a block diagonal map plotted with true phoneme boundaries. Two orange dotted lines show two examples of boundaries. (c) similarity matrix for (a). (d) similarity matrix for MFCCs. (e) precision-recall curve for M3, M6, M9 attentions and MFCCs.
|
| 68 |
+
|
| 69 |
+
## 5. Phoneme Segmentation
|
| 70 |
+
|
| 71 |
+
Some attention maps are clearly block diagonal, like Fig 2(a). The borders of the blocks might be phoneme boundaries, as illustrated in Fig 2(b); it seems that diagonal attention is aware of phoneme intervals. We conduct phoneme segmentation to examine this correlation.
|
| 72 |
+
|
| 73 |
+
We mainly follow the algorithm proposed in (Bhati et al., 2017), which first calculates a similarity matrix from a sequence of features, containing all pairwise distances between features, and then extracts boundary points from the similarity matrix. For segmentation with an attention map, its rows are considered as the feature sequence for computing the similarity matrix${}^{2}$. Examples of similarity matrices are shown in Fig 2(c)(d), where the segmentation features are the attention map and MFCCs, respectively. We slightly modify the boundary-point-extraction algorithm${}^{2}$ in (Bhati et al., 2017); the modification makes the algorithm a little more stable, but only a small performance difference is found.
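A sketch of the first stage only is given below: the rows of an attention map (or the MFCC frames for the baseline) are treated as the feature sequence and turned into a pairwise similarity matrix. Cosine similarity is used here as a stand-in for the kernel-gram construction of Bhati et al.; the (modified) boundary-point extraction is not reproduced.

```python
import numpy as np

def similarity_matrix(features: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarities between frames. For attention-based
    segmentation, `features` is the T x T attention map (rows as features);
    for the MFCC baseline it is the T x d MFCC sequence."""
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    return f @ f.T
```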
|
| 74 |
+
|
| 75 |
+
TIMIT (Garofolo et al., 1993) is used for evaluating the phoneme segmentation since it provides ground-truth phoneme boundaries. We follow the setup in (Stan et al., 2016) that uses a small subset of training set as validation to adjust a few algorithm parameters and evaluate on test set. We use a ${20}\mathrm{\;{ms}}$ tolerance window and evaluate with R-value (Räsänen et al., 2009) and precision-recall curve. We hand-pick a visually block diagonal head for each of M3, M6, and M9. We choose MFCC as baseline feature since it is the most prevailing feature (Bhati et al., 2017; 2019; Scharenborg et al., 2010; Mporas et al., 2008) for segmentation. Little performance difference is found between MFCC and ${\mathbf{x}}^{\mathbf{0}}$ (Mel-scale spectrogram).
|
| 76 |
+
|
| 77 |
+
Fig 2(e) verifies the correlation between block diagonal attentions and phoneme boundaries: the attentions clearly surpass MFCC under the same setting. As for R-value, under the strict hit counting scenario (Räsänen et al., 2009), MFCC achieves 76.68, while M3, M6, M9 achieve 79.99, 78.43, 78.19, respectively. Interestingly, a larger masking span ${C}_{num}$ leads to poorer performance. The reason is that when ${C}_{num} = 3$, masked portions are typically within a phoneme interval, so the model learns to utilize features in the same interval for reconstruction. On the other hand, ${C}_{\text{num}} = 9$ can sometimes mask an entire phoneme interval, and the model then tries to retrieve information from beyond the interval.
|
| 78 |
+
|
| 79 |
+
It is worth mentioning that similarity matrices computed on MFCCs and on learned block diagonal attentions have a fundamental difference: the former show high activation on similar but distant frames, as in Fig 2(d), while the latter are more aware of the phoneme neighborhood structure. Figures similar to Fig 2(d) are obtained${}^{2}$ when we compute the similarity matrix on the Mel-scale spectrogram or on SAT representations, suggesting that even though similar frames can be located far apart, block diagonal heads learn to ignore distant information and focus on the neighborhood structure.
|
| 80 |
+
|
| 81 |
+
## 6. Phoneme Relation Map
|
| 82 |
+
|
| 83 |
+
To study the functionality of global and vertical heads, we propose to align attentions to phoneme relations to see whether some heads focus on looking for specific phoneme relations in the utterances. For a sequence of input features ${\mathbf{x}}^{\mathbf{0}}$ , there exists frame-wise phoneme labels $\mathbf{y} \in {\mathbf{Y}}^{T}$ , where $\mathbf{Y}$ is a predefined phone set. We consider ${x}_{q}^{l}$ attending to ${x}_{k}^{l}$ as when observing phoneme ${y}_{q}$ the head would look for phoneme ${y}_{k}$ . We quantify a phoneme relation ${Y}_{m} \rightarrow {Y}_{n}$ inside a head $h$ by summing up all attention weights ${A}_{u}^{h}\left\lbrack {q, k}\right\rbrack$ whose phoneme relation ${y}_{q} \rightarrow {y}_{k}$ equals ${Y}_{m} \rightarrow {Y}_{n}$ , over the entire speech corpus. More specifically, we plot a phoneme relation map (PRM) ${P}_{h} \in {\mathbb{R}}^{\left| \mathbf{Y}\right| \times \left| \mathbf{Y}\right| }$ by the following equations:
|
| 84 |
+
|
| 85 |
+
$$
{P}_{h}^{\prime }\left\lbrack {m, n}\right\rbrack = \underset{u \sim U}{\mathbb{E}}\left\lbrack {\frac{1}{T}\mathop{\sum }\limits_{{q = 1}}^{T}\mathop{\sum }\limits_{{k = 1}}^{T}{\mathbb{I}}_{{y}_{q} = {Y}_{m}} \cdot {\mathbb{I}}_{{y}_{k} = {Y}_{n}} \cdot {A}_{u}^{h}\left\lbrack {q, k}\right\rbrack }\right\rbrack \tag{4}
$$
|
| 90 |
+
|
| 91 |
+
$$
|
| 92 |
+
{P}_{h}\left\lbrack {m, n}\right\rbrack = \frac{{P}_{h}^{\prime }\left\lbrack {m, n}\right\rbrack - {P}_{U}\left\lbrack {m, n}\right\rbrack }{{P}_{U}\left\lbrack {m, n}\right\rbrack } \tag{5}
|
| 93 |
+
$$
|
| 94 |
+
|
| 95 |
+
where $1 \leq m, n \leq \left| \mathbf{Y}\right|$, $\mathbb{I}$ is the indicator function, ${P}_{h}^{\prime },{P}_{U} \in {\mathbb{R}}^{\left| \mathbf{Y}\right| \times \left| \mathbf{Y}\right| }$, and ${P}_{U}$ is the distribution of all possible phoneme relations${}^{2}$ in the speech corpus $U$, normalizing the effect of dominating relations like ${sil} \rightarrow {sil}$ which appear in all utterances. As a result, positive values in ${P}_{h}$ represent a preference for specific phoneme relations; negative values represent the opposite.
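For one utterance with frame-level phoneme labels `y` (integer indices into $\mathbf{Y}$), Equations 4-5 can be computed as in the sketch below; corpus-level averaging and the background distribution ${P}_{U}$ are assumed to be handled outside these helpers.

```python
import numpy as np

def prm_unnormalized(A: np.ndarray, y: np.ndarray, n_phones: int) -> np.ndarray:
    """Eq. 4 for a single utterance: accumulate attention mass per
    phoneme relation (y_q -> y_k), normalized by the utterance length T."""
    T = A.shape[0]
    P = np.zeros((n_phones, n_phones))
    rows = y[:, None].repeat(T, 1)   # rows[q, k] = y_q
    cols = y[None, :].repeat(T, 0)   # cols[q, k] = y_k
    np.add.at(P, (rows, cols), A)    # accumulate A[q, k] into P[y_q, y_k]
    return P / T

def prm(P_h_prime: np.ndarray, P_U: np.ndarray) -> np.ndarray:
    """Eq. 5: relative deviation from the corpus phoneme-relation distribution."""
    return (P_h_prime - P_U) / P_U
```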
|
| 96 |
+
|
| 97 |
+
PRMs are plotted using TIMIT (Garofolo et al., 1993) with 39 phonemes, and the results for several heads are shown${}^{2}$ in Fig 3. Since diagonal heads are interpretable by themselves, we focus on vertical and global heads.
|
| 98 |
+
|
| 99 |
+

|
| 100 |
+
|
| 101 |
+
Figure 3. Some observed operations from PRMs: (a) sil attends to sil; non-sil attends to non-sil (b) sil attends to non-sil; non-sil attends to sil, ch, sh (c) attends to identity, the same phoneme as query (d) not attends to identity (e) attends to sil (f) not attends to sil (g) attends to ch, jh, s, sh (h) not attends to s, sh.
|
| 102 |
+
|
| 103 |
+

|
| 104 |
+
|
| 105 |
+
Figure 4. (a) The relation between verticality $\mathrm{V}$ of a head and its extreme concentration value. Each dot represents a head h. (b) zooms in the bottom-left of (a) since outliers dominate too much. PRMs of representative heads are marked by red squares: Fig 3(e)(g)(f)(h) are for 1,2,3,4 respectively; Fig 4 (c) is for 5 .
|
| 106 |
+
|
| 107 |
+
There are several observed operations: attending to silence, identity, specific phonemes, and not attending to these (not operations). We observe a tendency of vertical heads to either focus on or neglect specific phonemes for all queries, and we bridge their connections. For later discussion, we use focus and neglect to refer to these types of behaviours. While a PRM characterizes all phoneme relations of a head, we further define the concentration ${C}_{h} \in {\mathbb{R}}^{\left| Y\right| }$ of a head $h$, where each ${C}_{h}\left\lbrack n\right\rbrack \in \mathbb{R}$ quantifies the amount of focus (when positive) or neglect (when negative) of the head on a specific phoneme ${Y}_{n}$, over all queries:
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
{C}_{h}\left\lbrack n\right\rbrack = \frac{1}{\left| Y\right| }\mathop{\sum }\limits_{{m = 1}}^{\left| Y\right| }{P}_{h}\left\lbrack {m, n}\right\rbrack \tag{6}
|
| 111 |
+
$$
|
| 112 |
+
|
| 113 |
+
Fig 4 verifies the connection between verticality and concentration. We report the maximum focus or neglect for each head. Fig 4(a) points out that heads with high verticality do focus on specific phonemes; Fig 4(b) points out even a slight increase of the verticality $\mathrm{V}$ of a head has correlation to concentration, for both focus and neglect. Some low-verticality heads with extreme neglect at the bottom-left of Fig 4(b) are diagonal heads, which always attend to their neighbors dynamically and show extreme neglect for all phonemes.
|
| 114 |
+
|
| 115 |
+
## 7. Importance Ranking
|
| 116 |
+
|
| 117 |
+
To evaluate the importance of different attention patterns, we conducted two pruning-based probing experiments. We ablate partial functionality of self-attention directly at inference time in two aspects: (1) ablates an entire head; (2) ablates the visible span for all heads. If an attention pattern is essential, ablating it should exhibit immediate loss in terms of the quality of final representations. We examine representation quality by three probing tasks: spectrogram reconstruction, phoneme classification, and speaker recognition. For the first task, we examine the richness in terms of spectrogram details of refined representations. We reuse the reconstruction head during pre-training and measure L1 loss compared to the original. For the latter two tasks, we examine the usefulness of refined representations on downstream tasks. For phoneme and speaker classifications, we train the downstream models using LibriSpeech (Panayotov et al., 2015) train-clean-100 subset and fixed ${50}\mathrm{k}$ steps. In frame-level setting, we use single-hidden MLP; in utterance-level setting, we use mean-pooling followed by a linear transform. Phoneme classification is conducted under frame-level setting; speaker recognition is under frame-level and utterance-level. Following (Liu et al., 2020), phoneme labels are obtained by the Montreal Force Aligner (McAuliffe et al., 2017), and all evaluations are done on the LibriSpeech test-clean subset.
|
| 118 |
+
|
| 119 |
+
### 7.1. Head-based Pruning
|
| 120 |
+
|
| 121 |
+
For each head $h$, we first compute the values of $G\left( h\right)$, $V\left( h\right)$, $D\left( h\right)$, and cumulatively prune heads from high to low for each metric by setting ${A}_{u}^{h} = 0$, resulting in three curves as shown in Fig 5(a)(b)(c). We rank the importance of the three categories by observing which pruning results in a larger performance drop. We find the ranking results are consistent for different ${C}_{num}$, so we only show the result of M3. There are several findings: (1) Diagonal heads are the most important. Performance on all three tasks drops significantly with only 24 heads pruned. (2) Vertical heads rank second. While pruning them does not hurt reconstruction or phoneme classification much, performance drops faster than for global heads in speaker recognition. This suggests that vertical attentions are more related to speaker identity. (3) Global heads are the least important: pruning them has the least effect on all tasks. (4) Both global and vertical heads are harmful to the phonetic structure. Fig 5(b) shows that pruning them even improves the phoneme classification accuracy. For vertical heads, we speculate that they might focus on distant phonemes when forming a new representation, independently of the query phoneme, which might corrupt the local phonetic structure. (5) In Fig 5(b), when we prune according to diagonality, phoneme accuracy drops dramatically for the first 24 heads pruned, while it surprisingly increases as we prune more heads. This is because when pruning more than 24 diagonal heads, we start to prune heads that are more vertical or global than diagonal, supporting the previous finding that vertical and global attentions are harmful for phonemic information.
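A sketch of the pruning protocol is given below: every head is scored with one of the metrics (e.g., `globalness` from Section 4), and the attention maps of the top-ranked heads are zeroed at inference time. How attention maps are collected from the model and re-injected is left abstract here.

```python
import numpy as np

def prune_heads(attn_maps: dict, metric, n_prune: int) -> dict:
    """attn_maps: {(layer, head): A} with A a T x T attention map for a probe
    utterance. Returns a copy with the n_prune highest-scoring heads zeroed
    out, i.e., those heads are ablated."""
    scores = {hd: metric(A) for hd, A in attn_maps.items()}
    to_prune = sorted(scores, key=scores.get, reverse=True)[:n_prune]
    return {hd: (np.zeros_like(A) if hd in to_prune else A)
            for hd, A in attn_maps.items()}
```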
|
| 122 |
+
|
| 123 |
+

|
| 124 |
+
|
| 125 |
+
Figure 5. Performance curves of attention pruning. Curves marked by dots are frame-level setting; otherwise utterance-level setting.
|
| 126 |
+
|
| 127 |
+
We further show the result of ranking the importance of heads based on their maximum attention weights, denoted as weight in Fig 5(a)(b)(c), which has been shown to be a strong baseline in previous work (Hao et al., 2020). Fig 5(c) shows that pruning based on globalness has less influence than weight. Fig 6(a) visualizes the difference between the two ranking strategies. Although they agree on which heads are essential, they slightly diverge on which are not. Their decision boundaries in terms of the first 24 heads to be pruned are shown by red and blue lines in Fig 6(a), which is the direct cause of their performance differences. Globalness prunes blue dots while leaving red dots unpruned, and vice versa for weight (while they both prune green dots). Since globalness-based pruning results in better performance than weight-based pruning, this suggests that the heads marked by red dots are more important than those marked by blue dots. We select the head with the highest ranking difference from both the red and blue dots, and plot their PRMs in Fig 6(b) and (c), respectively. We find that while Fig 6(b) shows strong neglect, (c) does not possess any observable operation. In fact, heads of red dots mostly show clear neglect${}^{2}$. We argue that this is the main reason why globalness performs better after pruning:
|
| 128 |
+
|
| 129 |
+

|
| 130 |
+
|
| 131 |
+
Figure 6. (a) Different ranking of a head according to globalness and attention weight. Each dot is a head, and the one with higher ranking number is more important. The first 24 heads to be pruned are green and blue for globalness, green and red for weight. (b) and (c) are PRMs of heads in red and blue dots, respectively.
|
| 132 |
+
|
| 133 |
+
heads with neglect are essential to speaker identity, and globalness defined by entropy is able to recognize neglect and score such heads higher. On the other hand, weight is confused by attentions with large weights but without meaningful operations, suggesting that weight does not always reflect the importance of heads. We speculate that these heads might learn to neglect less useful frames, like sil in Fig 6(b), and focus more on other frames with more speaker information (Wang et al., 2018). Based on the above observations, we choose globalness as our refinement metric. Fig 5(d)(e)(f) show the pruning results for M3, M6, M9. The importance of global heads becomes smaller for larger ${C}_{\text{num}}$, and we keep observing a performance boost for phoneme classification. Although all three models drop in speaker recognition, the drop is mitigated dramatically in the utterance-level setting (a more common scenario), suggesting that global heads are not necessary when speaker classification is performed at the utterance level. In conclusion, we can prune more than ${50}\%$ of SAT heads without sacrificing performance.

### 7.2. Span-based Pruning

Since most heads have an attention span over a long range (regardless of which category they belong to), we further conduct attention-span pruning to examine whether global information is genuinely unhelpful for extracting phonetic information. We limit the visible span of all heads to a length $r$ in each direction; that is, we set ${A}_{u}^{h}\left\lbrack {q, k}\right\rbrack = 0$ for any $\left| {q - k}\right| > r$. Results are presented in Fig 5(g)(h)(i); a sketch of the masking operation is shown below.
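A minimal sketch of the span constraint, operating directly on a single head's attention matrix. Whether the remaining weights are renormalized is our assumption; the paper only states that the masked entries are set to zero.

```python
import numpy as np

def limit_attention_span(attn, r, renormalize=True):
    """Zero out attention weights farther than r frames from the query.

    attn: [T, T] attention matrix A_u^h for one head and one utterance
    r:    maximum visible distance |q - k|
    renormalize: re-normalize each row afterwards (our assumption; the paper
                 only specifies A_u^h[q, k] = 0 for |q - k| > r)
    """
    T = attn.shape[0]
    q_idx, k_idx = np.meshgrid(np.arange(T), np.arange(T), indexing="ij")
    pruned = np.where(np.abs(q_idx - k_idx) <= r, attn, 0.0)
    if renormalize:
        pruned = pruned / np.maximum(pruned.sum(axis=-1, keepdims=True), 1e-8)
    return pruned
```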

## 8. Conclusion

In this paper, we present multiple strategies for analyzing the self-attention mechanism in SATs, including phoneme segmentation, phoneme relation maps, and two kinds of pruning. We identify several attention functionalities and operations, locate the critical attentions, and show that our visualization tool is useful for understanding pruning behavior. Finally, we conclude that we can refine representations and speed up inference for a given SAT in two ways: removing global heads or constraining the attention span.

## References

Aken, B. v., Winter, B., Löser, A., and Gers, F. A. VisBERT: Hidden-state visualizations for transformers. In Companion Proceedings of the Web Conference 2020, pp. 207-211, 2020.

Baevski, A., Auli, M., and Mohamed, A. Effectiveness of self-supervised pre-training for speech recognition. arXiv preprint arXiv:1911.03912, 2019a.

Baevski, A., Schneider, S., and Auli, M. vq-wav2vec: Self-supervised learning of discrete speech representations. arXiv preprint arXiv:1910.05453, 2019b.

Bhati, S., Nayak, S., and Murty, K. S. R. Unsupervised segmentation of speech signals using kernel-gram matrices. In National Conference on Computer Vision, Pattern Recognition, Image Processing, and Graphics, pp. 139-149. Springer, 2017.

Bhati, S., Nayak, S., Murty, K. S. R., and Dehak, N. Unsupervised acoustic segmentation and clustering using siamese network embeddings. In INTERSPEECH, pp. 2668-2672, 2019.

Clark, K., Khandelwal, U., Levy, O., and Manning, C. D. What does BERT look at? An analysis of BERT's attention. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 276-286, Florence, Italy, August 2019. Association for Computational Linguistics. doi: 10.18653/v1/W19-4828. URL https://www.aclweb.org/anthology/W19-4828.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., and Pallett, D. S. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST Speech Disc 1-1.1. STIN, 93:27403, 1993.

Hao, Y., Dong, L., Wei, F., and Xu, K. Visualizing and understanding the effectiveness of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4143-4152, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1424. URL https://www.aclweb.org/anthology/D19-1424.

Hao, Y., Dong, L., Wei, F., and Xu, K. Self-attention attribution: Interpreting information interactions inside transformer. arXiv preprint arXiv:2004.11207, 2020.

Jiang, D., Lei, X., Li, W., Luo, N., Hu, Y., Zou, W., and Li, X. Improving transformer-based speech recognition using unsupervised pre-training. arXiv preprint arXiv:1910.09932, 2019.

Kovaleva, O., Romanov, A., Rogers, A., and Rumshisky, A. Revealing the dark secrets of BERT. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pp. 4365-4374, Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/D19-1445. URL https://www.aclweb.org/anthology/D19-1445.

Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942, 2019.

Liu, A. T., Yang, S.-w., Chi, P.-H., Hsu, P.-c., and Lee, H.-y. Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6419-6423. IEEE, 2020.

Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

McAuliffe, M., Socolof, M., Mihuc, S., Wagner, M., and Sonderegger, M. Montreal Forced Aligner: Trainable text-speech alignment using Kaldi. In Interspeech, volume 2017, pp. 498-502, 2017.

Mporas, I., Ganchev, T., and Fakotakis, N. Phonetic segmentation using multiple speech features. International Journal of Speech Technology, 11(2):73-85, 2008.

Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. Librispeech: An ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206-5210, 2015.

Pham, N.-Q., Nguyen, T.-S., Niehues, J., Müller, M., Stüker, S., and Waibel, A. Very deep self-attention networks for end-to-end speech recognition. arXiv preprint arXiv:1904.13377, 2019.

Räsänen, O. J., Laine, U. K., and Altosaar, T. An improved speech segmentation quality measure: the R-value. In Tenth Annual Conference of the International Speech Communication Association, 2009.

Scharenborg, O., Wan, V., and Ernestus, M. Unsupervised speech segmentation: An analysis of the hypothesized phone boundaries. The Journal of the Acoustical Society of America, 127(2):1084-1095, 2010.

Song, X., Wang, G., Wu, Z., Huang, Y., Su, D., Yu, D., and Meng, H. Speech-XLNet: Unsupervised acoustic model pretraining for self-attention networks. arXiv preprint arXiv:1910.10387, 2019.

Sperber, M., Niehues, J., Neubig, G., Stüker, S., and Waibel, A. Self-attentional acoustic models. In Interspeech 2018, September 2018. doi: 10.21437/Interspeech.2018-1910. URL http://dx.doi.org/10.21437/Interspeech.2018-1910.

Stan, A., Valentini-Botinhao, C., Orza, B., and Giurgiu, M. Blind speech segmentation using spectrogram image-based features and mel cepstral coefficients. In 2016 IEEE Spoken Language Technology Workshop (SLT), pp. 597-602, 2016.

Tenney, I., Das, D., and Pavlick, E. BERT rediscovers the classical NLP pipeline. arXiv preprint arXiv:1905.05950, 2019a.

Tenney, I., Xia, P., Chen, B., Wang, A., Poliak, A., McCoy, R. T., Kim, N., Durme, B. V., Bowman, S. R., Das, D., and Pavlick, E. What do you learn from context? Probing for sentence structure in contextualized word representations. In International Conference on Learning Representations, 2019b. URL https://openreview.net/forum?id=SJzSgnRcKX.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., and Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.

Wang, P., Wei, L., Cao, Y., Xie, J., and Nie, Z. Large-scale unsupervised pre-training for end-to-end spoken language understanding. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7999-8003, 2020.

Wang, Q., Okabe, K., Lee, K. A., Yamamoto, H., and Koshinaka, T. Attention mechanism in speaker recognition: What does it learn in deep speaker embedding? In 2018 IEEE Spoken Language Technology Workshop (SLT), pp. 1052-1059, 2018.

Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., and Le, Q. V. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pp. 5753-5763, 2019.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/Jc65VYwYVB/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,133 @@
| 1 |
+
§ UNDERSTANDING SELF-ATTENTION OF SELF-SUPERVISED AUDIO TRANSFORMERS
|
| 2 |
+
|
| 3 |
+
Shu-wen Yang ${}^{1}$ Andy T. Liu ${}^{12}$ Hung-yi Lee ${}^{12}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
Self-supervised Audio Transformers (SAT) enable great success in many downstream speech applications like ASR, but how they work has not been widely explored yet. In this work, we present multiple strategies for the analysis of attention mechanisms in SAT. We categorize attentions into explainable categories, where we discover each category possesses its own unique functionality. We provide a visualization tool for understanding multi-head self-attention, importance ranking strategies for identifying critical attention, and attention refinement techniques to improve model performance.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Adapting the idea of self-supervised learning (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019) to continuous speech has received much attention in recent work (Liu et al., 2020; Jiang et al., 2019; Song et al., 2019; Wang et al., 2020; Baevski et al., 2019b;a), where Transformer Encoders with multi-head self-attention (Vaswani et al., 2017) are pre-trained on a large amount of audio data in a self-supervised scheme. Once pre-trained, they are used to improve various downstream supervised tasks, including phone classification, speaker recognition, SLU, and ASR. Despite the great success of these Self-supervised Audio Transformers ${\left( \mathrm{{SAT}}\right) }^{1}$ , their internal attention are often neglected and not explored, as we have little understanding of how they work, or the knowledge they acquire from a large amount of unlabeled data. Understanding how SAT models draw conclusions is crucial for both their improvement and application. In the area of natural language processing (NLP), explaining and interpreting pre-trained black-box models like BERT have been a well-explored topic (Aken et al., 2020; Hao et al., 2019; Kovaleva et al., 2019; Clark et al., 2019; Tenney et al., 2019a;b). However, the analysis of models that are pre-trained on speech has not seen such widespread exploration, and remains an important and challenging endeavor for the speech community.
|
| 12 |
+
|
| 13 |
+
In this work, we propose to analyze the multi-head self-attention mechanism of SAT through the following methods: visualization, categorization, functionality study, and importance ranking. We found that the self-attentions of SAT models tend to converge into three categories: global attentions, vertical attentions, and diagonal attentions. Diagonal attentions either highly attend to $\pm t$ neighbor or are highly correlated with phoneme boundaries; vertical attentions often concentrate on specific phonemes. As for noisy global attentions, we provide a visualization tool to draw insights about their implicit operations. Through our quantized ranking analysis, we conclude that diagonal attentions outrank the most in terms of importance, followed by vertical attentions. Last but not least, we introduce attention refinement methods which allow us to improve learned representations by partially removing global attentions or constraining attention span, resulting in a faster inference time and higher performance.
|
| 14 |
+
|
| 15 |
+
§ 2. SELF-SUPERVISED AUDIO TRANSFORMERS
|
| 16 |
+
|
| 17 |
+
The main ideology of NLP BERT pre-training (Devlin et al., 2018; Yang et al., 2019; Liu et al., 2019; Lan et al., 2019) is to corrupt the input word tokens by randomly masking or permuting them with a probability policy, layers of Transformer Encoder (Vaswani et al., 2017) are trained together with a classifier that estimates the masked words at the output. Primarily inspired by this idea, previous works (Liu et al., 2020; Song et al., 2019; Jiang et al., 2019; Wang et al., 2020; Baevski et al., 2019b) proposed self-supervised learning for audio with Transformer Encoders. In this work, we refer to these types of models as Self-Supervised Audio Transformers, SAT. Unlike BERT where the inputs are discrete text tokens, the inputs of SATs are acoustic features (e.g., MFCC, FBANK, Mel-Spectrogram), which form much longer sequences and could be extremely similar to their neighbor features since speech signal is continuously varying. Some SATs take continuous acoustic features as input directly (Liu et al., 2020; Song et al., 2019), while some conduct vector quantization in advance (Baevski et al., 2019b;a). Also, different from BERT where the model is trained by estimating discrete tokens, SATs change to minimize reconstruction error between the real frame and the predicted frame (Liu et al., 2020; Jiang et al., 2019) or classification error for the real frame among sampled distracting frames (Baevski et al., 2019b;a).
|
| 18 |
+
|
| 19 |
+
${}^{1}$ College of Electrical Engineering and Computer Science, National Taiwan University ${}^{2}$ Graduate Institute of Communication Engineering, National Taiwan University. Correspondence to: Shu-wen Yang <r08944041@ntu.edu.tw>, Andy T. Liu <f07942089@ntu.edu.tw>, Hung-yi Lee <hungy-ilee@ntu.edu.tw>.
|
| 20 |
+
|
| 21 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 22 |
+
|
| 23 |
+
${}^{1}$ These pre-trained transformer encoders have several different names in their original papers. In this paper we refer to them as SAT for simplicity.
|
| 24 |
+
|
| 25 |
+
Among all the variants of SATs, we focus on SATs that take continuous acoustic features as input and are trained with a reconstruction loss. In our analysis, we particularly follow the framework described in Mockingjay (Liu et al., 2020). In Mockingjay, two techniques, downsampling and consecutive masking, are introduced to resolve these issues. Downsampling is applied to the input features to adapt SATs to long sequences: to reduce the number of frames by a factor of ${R}_{\text{factor}}$, every ${R}_{\text{factor}}$ consecutive frames are reshaped and stacked into one frame (Sperber et al., 2018; Pham et al., 2019). Consecutive masking, on the other hand, is applied during pre-training to prevent the model from exploiting the local smoothness of acoustic frames: instead of masking a single frame, ${C}_{num}$ consecutive frames are masked to zero. To study the attentions of SAT models, we use the prevailing framework of the LARGE model described in Mockingjay, which consists of 12 layers of Transformer Encoders. We train three models on the LibriSpeech (Panayotov et al., 2015) train-clean-360 subset with settings identical to Mockingjay, except for ${C}_{\text{num}} \in \{ 3,6,9\}$, and name them M3, M6, M9. A sketch of the two techniques is given below.
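The following is a simplified sketch of frame stacking and consecutive masking, assuming NumPy arrays of shape (T, d). Mockingjay's full masking policy (selection probability, random replacement, leaving some frames unchanged) is richer than what is shown here.

```python
import numpy as np

def stack_frames(features, r_factor=3):
    """Downsampling by reshaping: every r_factor consecutive frames are stacked
    into a single frame, (T, d) -> (T // r_factor, d * r_factor)."""
    T, d = features.shape
    T = T - (T % r_factor)                       # drop leftover frames at the end
    return features[:T].reshape(T // r_factor, d * r_factor)

def mask_consecutive(features, c_num=3, mask_prob=0.15, rng=np.random):
    """Consecutive masking: zero out blocks of c_num consecutive frames.
    mask_prob is a placeholder; the exact policy follows Mockingjay."""
    masked = features.copy()
    for start in range(0, features.shape[0] - c_num + 1, c_num):
        if rng.rand() < mask_prob:
            masked[start:start + c_num] = 0.0
    return masked
```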
|
| 26 |
+
|
| 27 |
+
§ 3. NOTATIONS
|
| 28 |
+
|
| 29 |
+
We first define notation for self-attention mechanism and SAT representations. Given a length $T$ sequence of vectors $\mathbf{x} = {x}_{1},\ldots ,{x}_{T} \in {\mathbb{R}}^{d}$ , we denote ${A}_{u}^{h} \in {\mathbb{R}}^{T \times T}$ as attention weights for all query-key pairs of a head $h$ when propagating an utterance $u$ . Hence, ${A}_{u}^{h}\left\lbrack {q,k}\right\rbrack \in \mathbb{R}$ is the attention weight of ${x}_{q}$ attending to ${x}_{k}$ . We use $q$ for timestamp of query; $k$ for timestamp of key, where $1 \leq q,k \leq T$ . As a result, ${A}_{u}^{h}\left\lbrack q\right\rbrack \in$ ${\mathbb{R}}^{T}$ is the attention distribution formed by ${x}_{q}$ , which is a row if we view ${A}_{u}^{h}$ as a map. When analyzing the representations of a $L$ -layer SAT, we denote ${\mathbf{x}}^{l} = {x}_{1}^{l},\ldots ,{x}_{T}^{l} \in {\mathbb{R}}^{d}$ as the representations of a given layer, where $0 \leq l \leq L$ and ${\mathbf{x}}^{\mathbf{0}}$ represents input features.
|
| 30 |
+
|
| 31 |
+
§ 4. VISUALIZATION AND CATEGORIZATION
|
| 32 |
+
|
| 33 |
+
We plot ${A}_{u}^{h} \in {\mathbb{R}}^{T \times T}$ as an attention map, where ${A}_{u}^{h}\left\lbrack {0,0}\right\rbrack$ starts from the upper-left corner, as in Fig ${1}^{2}$. SAT attentions tend to converge into three categories: (1) global: flat attention distributions; (2) vertical: attention maps with vertical lines; and (3) diagonal: attention maps with a clear diagonal. Because the attention maps of a head are similar across utterances with respect to these three categories, we study self-attention on the basis of heads instead of single attention maps. To classify heads into the three categories, we define three metrics to quantify a head $h$: globalness $G$, verticality $V$, and diagonality $D$, given in Equations 1, 2, and 3, respectively.
|
| 34 |
+
|
| 35 |
+
$$
G\left( h\right) = \underset{u \sim U}{\mathbb{E}}\left\lbrack {\frac{1}{T}\mathop{\sum }\limits_{{q = 1}}^{T}\mathbb{H}\left( {{A}_{u}^{h}\left\lbrack q\right\rbrack }\right) }\right\rbrack \tag{1}
$$

$$
V\left( h\right) = \underset{u \sim U}{\mathbb{E}}\left\lbrack {-\mathbb{H}\left( {\frac{1}{T}\mathop{\sum }\limits_{{q = 1}}^{T}{A}_{u}^{h}\left\lbrack q\right\rbrack }\right) }\right\rbrack \tag{2}
$$

$$
D\left( h\right) = \underset{u \sim U}{\mathbb{E}}\left\lbrack {-\frac{1}{{T}^{2}}\mathop{\sum }\limits_{{q = 1}}^{T}\mathop{\sum }\limits_{{k = 1}}^{T}\left| {q - k}\right| \cdot {A}_{u}^{h}\left\lbrack {q,k}\right\rbrack }\right\rbrack \tag{3}
$$
|
| 46 |
+
|
| 47 |
+
< g r a p h i c s >
|
| 48 |
+
|
| 49 |
+
Figure 1. Attention maps of heads favored by G, V, D, visualized with the same utterance. (a)(c)(e) are average cases; (b)(d)(f) are extreme cases found by maximizing the metrics.
|
| 50 |
+
|
| 51 |
+
where $\mathbb{H}$ is the standard definition of entropy, and $U$ is a speech corpus. Based on G, V, D, we would have three ranking lists for all heads. If among the three ranking lists, a head has the highest rank based on the list of $\mathrm{G}$ , it would be categorized as global, and so on. We use ranking instead of values because the metrics may not have the same numerical scale. Fig 1 shows two attention maps for each category.
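For concreteness, here is a minimal NumPy sketch of Equations 1-3 for a single utterance; the paper averages these quantities over a corpus $U$, and the function and variable names are ours.

```python
import numpy as np

def entropy(p, eps=1e-8):
    """Shannon entropy of a discrete distribution."""
    return float(-np.sum(p * np.log(p + eps)))

def head_metrics(attn):
    """Globalness, verticality, and diagonality of one attention map A_u^h
    (Equations 1-3 for a single utterance)."""
    T = attn.shape[0]
    g = np.mean([entropy(attn[q]) for q in range(T)])                  # Eq. 1
    v = -entropy(attn.mean(axis=0))                                    # Eq. 2
    q_idx, k_idx = np.meshgrid(np.arange(T), np.arange(T), indexing="ij")
    d = -np.sum(np.abs(q_idx - k_idx) * attn) / T ** 2                 # Eq. 3
    return g, v, d
```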
|
| 52 |
+
|
| 53 |
+
Diagonal attentions attend to local neighbors for every query. Some exhibit highly focused behavior like Fig 1(f), and some are block diagonal like Fig 1(e). Interestingly, no SAT contains highly focused diagonal attention on the main diagonal: the diagonals shift either to the left or right, and a larger masking span ${C}_{num}$ is accompanied by a larger shift, possibly because SAT models try to obtain useful information from farther frames. The functionality of block diagonal attentions is discussed in Section 5. Vertical attentions like Fig 1(c)(d) always attend to similar locations for all queries of a given utterance; global attentions like Fig 1(a)(b) behave randomly. These two categories are discussed in Section 6. Finally, we visualize the head distribution ${}^{2}$ according to the metrics and find that models trained with a larger masking span ${C}_{\text{num}}$ have more global heads. In contrast, M3 contains the most diagonal heads, suggesting that a smaller ${C}_{num}$ makes SAT focus more on local structure.
|
| 54 |
+
|
| 55 |
+
${}^{2}$ Supplementary materials: https://github.com/leo19941227/Self-Attention-on-SATs
|
| 56 |
+
|
| 57 |
+
< g r a p h i c s >
|
| 58 |
+
|
| 59 |
+
Figure 2. Four images on the left side are plotted with the same utterance. (a) a block diagonal attention map. (b) a block diagonal map plotted with true phoneme boundaries. Two orange dotted lines show two examples of boundaries. (c) similarity matrix for (a). (d) similarity matrix for MFCCs. (e) precision-recall curve for M3, M6, M9 attentions and MFCCs.
|
| 60 |
+
|
| 61 |
+
§ 5. PHONEME SEGMENTATION
|
| 62 |
+
|
| 63 |
+
Some attention maps show a clear block diagonal structure, like Fig 2(a). The borders of the blocks might be phoneme boundaries, as illustrated in Fig 2(b); it seems that diagonal attention is aware of phoneme intervals. We conduct phoneme segmentation to examine this correlation.
|
| 64 |
+
|
| 65 |
+
We mainly follow the algorithm proposed in (Bhati et al., 2017), which first calculates a similarity matrix from a sequence of features, containing all pairwise distances between features, and then extracts boundary points from the similarity matrix. For segmentation with an attention map, its rows are treated as the feature sequence for computing the similarity matrix ${}^{2}$. Examples of similarity matrices are shown in Fig 2(c)(d), where the segmentation features are the attention map and MFCCs, respectively. We slightly modify the boundary-point-extraction algorithm ${}^{2}$ of (Bhati et al., 2017); the modification makes the algorithm a little more stable, but only a small performance difference is found. A sketch of the similarity-matrix step is given below.
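A minimal sketch of the similarity-matrix step; cosine similarity between rows is our illustrative choice and not necessarily the exact kernel used by (Bhati et al., 2017).

```python
import numpy as np

def similarity_matrix(features):
    """Pairwise cosine-similarity matrix between feature vectors. For
    attention-based segmentation the features are the rows of an attention map
    A_u^h; for the baseline they are MFCC frames."""
    norm = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    return norm @ norm.T    # [T, T]; block structure suggests phoneme intervals
```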
|
| 66 |
+
|
| 67 |
+
TIMIT (Garofolo et al., 1993) is used for evaluating the phoneme segmentation since it provides ground-truth phoneme boundaries. We follow the setup in (Stan et al., 2016) that uses a small subset of training set as validation to adjust a few algorithm parameters and evaluate on test set. We use a ${20}\mathrm{\;{ms}}$ tolerance window and evaluate with R-value (Räsänen et al., 2009) and precision-recall curve. We hand-pick a visually block diagonal head for each of M3, M6, and M9. We choose MFCC as baseline feature since it is the most prevailing feature (Bhati et al., 2017; 2019; Scharenborg et al., 2010; Mporas et al., 2008) for segmentation. Little performance difference is found between MFCC and ${\mathbf{x}}^{\mathbf{0}}$ (Mel-scale spectrogram).
|
| 68 |
+
|
| 69 |
+
Fig 2(e) verifies the correlation between block diagonal attentions and phoneme boundaries: attentions clearly surpass MFCC under the same setting. As for R-value, under the strict hit counting scenario (Räsänen et al., 2009), MFCC achieves 76.68, while M3, M6, and M9 achieve 79.99, 78.43, and 78.19, respectively. Interestingly, a larger masking span ${C}_{num}$ leads to poorer performance. The reason is that when ${C}_{num} = 3$, masked portions typically lie within a phoneme interval, so the model learns to utilize features in the same interval for reconstruction. On the other hand, ${C}_{\text{num}} = 9$ can sometimes mask an entire phoneme interval, so the model instead tries to retrieve information beyond the interval.
|
| 70 |
+
|
| 71 |
+
It is worth mentioning that similarity matrices computed on MFCCs and on learned block diagonal attentions differ in a fundamental way: the former show high activation on similar but distant frames (Fig 2(d)), while the latter are more aware of the phoneme neighborhood structure. Figures similar to Fig 2(d) are observed ${}^{2}$ when we compute the similarity matrix on the Mel-scale spectrogram or on SAT representations, suggesting that even though similar frames can be located far apart, block diagonal heads learn to ignore distant information and focus on the neighborhood structure.
|
| 72 |
+
|
| 73 |
+
§ 6. PHONEME RELATION MAP
|
| 74 |
+
|
| 75 |
+
To study the functionality of global and vertical heads, we propose to align attentions to phoneme relations to see whether some heads focus on looking for specific phoneme relations in the utterances. For a sequence of input features ${\mathbf{x}}^{\mathbf{0}}$ , there exists frame-wise phoneme labels $\mathbf{y} \in {\mathbf{Y}}^{T}$ , where $\mathbf{Y}$ is a predefined phone set. We consider ${x}_{q}^{l}$ attending to ${x}_{k}^{l}$ as when observing phoneme ${y}_{q}$ the head would look for phoneme ${y}_{k}$ . We quantify a phoneme relation ${Y}_{m} \rightarrow {Y}_{n}$ inside a head $h$ by summing up all attention weights ${A}_{u}^{h}\left\lbrack {q,k}\right\rbrack$ whose phoneme relation ${y}_{q} \rightarrow {y}_{k}$ equals ${Y}_{m} \rightarrow {Y}_{n}$ , over the entire speech corpus. More specifically, we plot a phoneme relation map (PRM) ${P}_{h} \in {\mathbb{R}}^{\left| \mathbf{Y}\right| \times \left| \mathbf{Y}\right| }$ by the following equations:
|
| 76 |
+
|
| 77 |
+
$$
{P}_{h}^{\prime }\left\lbrack {m,n}\right\rbrack = \underset{u \sim U}{\mathbb{E}}\left\lbrack {\frac{1}{T}\mathop{\sum }\limits_{{q = 1}}^{T}\mathop{\sum }\limits_{{k = 1}}^{T}{\mathbb{I}}_{{y}_{q} = {Y}_{m}} \cdot {\mathbb{I}}_{{y}_{k} = {Y}_{n}} \cdot {A}_{u}^{h}\left\lbrack {q,k}\right\rbrack }\right\rbrack \tag{4}
$$
|
| 82 |
+
|
| 83 |
+
$$
{P}_{h}\left\lbrack {m,n}\right\rbrack = \frac{{P}_{h}^{\prime }\left\lbrack {m,n}\right\rbrack - {P}_{U}\left\lbrack {m,n}\right\rbrack }{{P}_{U}\left\lbrack {m,n}\right\rbrack } \tag{5}
$$
|
| 86 |
+
|
| 87 |
+
where $1 \leq m,n \leq \left| \mathbf{Y}\right|$, $\mathbb{I}$ is the indicator function, ${P}_{h}^{\prime },{P}_{U} \in {\mathbb{R}}^{\left| \mathbf{Y}\right| \times \left| \mathbf{Y}\right| }$, and ${P}_{U}$ is the distribution of all possible phoneme relations ${}^{2}$ in the speech corpus $U$, which normalizes away the effect of dominating relations such as ${sil} \rightarrow {sil}$ that appear in all utterances. As a result, positive values in ${P}_{h}$ indicate a preference for specific phoneme relations, while negative values indicate the opposite. A sketch of this computation is given below.
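A minimal sketch of Equations 4 and 5, assuming frame-level phoneme labels are available as integer sequences; the names are ours.

```python
import numpy as np

def phoneme_relation_map(attn_maps, labels, n_phones):
    """Un-normalized PRM P'_h of Equation 4 for one head, accumulated over a corpus.
    attn_maps: list of [T, T] attention matrices A_u^h (one per utterance)
    labels:    list of length-T integer phoneme sequences y"""
    prm = np.zeros((n_phones, n_phones))
    for attn, y in zip(attn_maps, labels):
        T = len(y)
        for q in range(T):
            for k in range(T):
                prm[y[q], y[k]] += attn[q, k] / T
    return prm / len(attn_maps)          # expectation over utterances

def normalized_prm(prm, prior):
    """Equation 5: normalize by the corpus-wide phoneme-relation distribution P_U."""
    return (prm - prior) / prior
```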
|
| 88 |
+
|
| 89 |
+
PRMs are plotted using TIMIT (Garofolo et al., 1993) with 39 phonemes, and the results for several heads are shown ${}^{2}$ in Fig 3. Since diagonal heads are interpretable by themselves, we focus on vertical and global heads.
|
| 90 |
+
|
| 91 |
+
< g r a p h i c s >
|
| 92 |
+
|
| 93 |
+
Figure 3. Some observed operations from PRMs: (a) sil attends to sil; non-sil attends to non-sil (b) sil attends to non-sil; non-sil attends to sil, ch, sh (c) attends to identity, the same phoneme as query (d) not attends to identity (e) attends to sil (f) not attends to sil (g) attends to ch, jh, s, sh (h) not attends to s, sh.
|
| 94 |
+
|
| 95 |
+
< g r a p h i c s >
|
| 96 |
+
|
| 97 |
+
Figure 4. (a) The relation between verticality $\mathrm{V}$ of a head and its extreme concentration value. Each dot represents a head h. (b) zooms in the bottom-left of (a) since outliers dominate too much. PRMs of representative heads are marked by red squares: Fig 3(e)(g)(f)(h) are for 1,2,3,4 respectively; Fig 4 (c) is for 5 .
|
| 98 |
+
|
| 99 |
+
There are several operations: attending to silence, to the identity (the same phoneme as the query), to specific phonemes, and not attending to these (not operations). We observe that vertical heads tend to either focus on or neglect specific phonemes for all queries, and we bridge these connections. In the later discussion, we use focus and neglect to refer to these types of behavior. While a PRM characterizes all phoneme relations of a head, we further define the concentration ${C}_{h} \in {\mathbb{R}}^{\left| Y\right| }$ of a head $h$, where each ${C}_{h}\left\lbrack n\right\rbrack \in \mathbb{R}$ quantifies the amount of focus (when positive) or neglect (when negative) of the head on a specific phoneme ${Y}_{n}$, over all queries:
|
| 100 |
+
|
| 101 |
+
$$
{C}_{h}\left\lbrack n\right\rbrack = \frac{1}{\left| Y\right| }\mathop{\sum }\limits_{{m = 1}}^{\left| Y\right| }{P}_{h}\left\lbrack {m,n}\right\rbrack \tag{6}
$$
|
| 104 |
+
|
| 105 |
+
Fig 4 verifies the connection between verticality and concentration. We report the maximum focus or neglect for each head. Fig 4(a) shows that heads with high verticality do focus on specific phonemes; Fig 4(b) shows that even a slight increase in the verticality $\mathrm{V}$ of a head correlates with concentration, for both focus and neglect. Some low-verticality heads with extreme neglect at the bottom-left of Fig 4(b) are diagonal heads, which always attend to their neighbors dynamically and therefore show extreme neglect for all phonemes.
|
| 106 |
+
|
| 107 |
+
§ 7. IMPORTANCE RANKING
|
| 108 |
+
|
| 109 |
+
To evaluate the importance of different attention patterns, we conduct two pruning-based probing experiments, ablating part of the self-attention functionality directly at inference time in two ways: (1) ablating an entire head; (2) limiting the visible span of all heads. If an attention pattern is essential, ablating it should cause an immediate loss in the quality of the final representations. We examine representation quality with three probing tasks: spectrogram reconstruction, phoneme classification, and speaker recognition. For the first task, we examine the richness of the refined representations in terms of spectrogram details: we reuse the reconstruction head from pre-training and measure the L1 loss against the original. For the latter two tasks, we examine the usefulness of the refined representations on downstream tasks. For phoneme and speaker classification, we train the downstream models on the LibriSpeech (Panayotov et al., 2015) train-clean-100 subset for a fixed ${50}\mathrm{k}$ steps. In the frame-level setting, we use an MLP with a single hidden layer; in the utterance-level setting, we use mean-pooling followed by a linear transform. Phoneme classification is conducted under the frame-level setting; speaker recognition under both the frame-level and utterance-level settings. Following (Liu et al., 2020), phoneme labels are obtained with the Montreal Forced Aligner (McAuliffe et al., 2017), and all evaluations are done on the LibriSpeech test-clean subset. A sketch of the utterance-level probe is shown below.
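For illustration, a minimal sketch of the utterance-level probe described above (mean-pooling followed by a linear transform); the frame-level probe would instead be an MLP with one hidden layer, and any dimensions here are placeholders.

```python
import torch.nn as nn

class UtteranceProbe(nn.Module):
    """Utterance-level probe: mean-pool frame representations, then a linear classifier."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.linear = nn.Linear(dim, n_classes)

    def forward(self, reps):             # reps: [T, dim] SAT representations of one utterance
        return self.linear(reps.mean(dim=0))
```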
|
| 110 |
+
|
| 111 |
+
§ 7.1. HEAD-BASED PRUNING
|
| 112 |
+
|
| 113 |
+
For each head $h$ , we first compute values of $G\left( h\right) ,V\left( h\right)$ , $D\left( h\right)$ , and cumulatively prune heads from high to low for each metric by setting ${A}_{u}^{h} = 0$ , resulting in three curves as shown in Fig 5(a)(b)(c). We rank the importance of the three categories by observing which pruning results in a larger performance drop. We find ranking results are consistent for different ${C}_{num}$ , so we only show the result of M3. There are several findings: (1) Diagonal heads are the most important. Performances on all three tasks drop significantly with only 24 heads pruned. (2) Vertical heads rank second. While pruning them does not hurt reconstruction or phoneme classification much, it drops faster than global heads in speaker recognition. This suggests that vertical attentions have more relation to speaker identity. (3) Global heads have the least importance that pruning them has the least effect on all tasks. (4) Both global and vertical heads are harmful to the phonetic structure. Fig 5(b) shows that pruning them even improve the phoneme classification accuracy. For vertical heads, we speculate that the vertical heads might focus on distant phonemes when forming a new representation independently (disrespectfully) of query phoneme, which might corrupt the local phonetic structure. (5) In Fig 5(b), when we prune according to diagonality, phoneme accuracy drops dramatically for the first 24 heads pruned, while it surprisingly increases as we prune more heads. This is because when pruning more than 24 diagonal heads, we start to prune the heads that are more vertical or global than diagonal, supporting the previous finding that vertical and global attentions are harmful for phonemic information.
|
| 114 |
+
|
| 115 |
+
< g r a p h i c s >
|
| 116 |
+
|
| 117 |
+
Figure 5. Performance curves of attention pruning. Curves marked by dots are frame-level setting; otherwise utterance-level setting.
|
| 118 |
+
|
| 119 |
+
We further show the result of ranking the importance of heads based on their maximum attention weights, denoted as weight in Fig 5(a)(b)(c), which has been shown to be a strong baseline in the previous work (Hao et al., 2020). Fig 5(c) shows pruning based on globalness has less influence than weight. Fig 6(a) visualizes the difference between two ranking strategies. Although they agree on which heads are essential, they slightly diverge on which are not. Their decision boundaries in terms of the first 24 heads to be pruned are shown by red and blue lines in Fig 6(a), which is the direct cause of their performance differences. Global-ness prunes blue dots while leaving red dots unpruned; and vise versa for weight (while they all prune green dots). Since globalness-based pruning results in a better performance than weight-based pruning, this suggests that heads of red dots are more important than blue dots. We select the head with the highest ranking difference from both red and blue dots, and plot their PRMs in Fig 6(b) and (c), respectively. We find that while Fig 6(b) shows strong neglect, (c) does not possess observable operation. In fact, heads of red dots are mostly with clear neglect ${}^{2}$ . We argue that this is the main reason why globalness performs better after pruning, that heads with neglect are essential to speaker identity, and glob-
|
| 120 |
+
|
| 121 |
+
< g r a p h i c s >
|
| 122 |
+
|
| 123 |
+
Figure 6. (a) Different ranking of a head according to globalness and attention weight. Each dot is a head, and the one with higher ranking number is more important. The first 24 heads to be pruned are green and blue for globalness, green and red for weight. (b) and (c) are PRMs of heads in red and blue dots, respectively.
|
| 124 |
+
|
| 125 |
+
alness defined by entropy is able to recognize neglect and score them higher. On the other hand, weight is confused by attentions with large weights but without meaningful operation, suggesting that weight do not always reflect the importance of heads. We speculate that these heads might learn to neglect less useful frames, like sil in Fig 6(b), and focus more on other frames with more speaker information (Wang et al., 2018). Based on the above observations, we choose globalness as our refinement metric. Fig 5(d)(e)(f) show pruning results for M3, M6, M9. The importance of global heads become less for larger ${C}_{\text{ num }}$ , and we keep observing performance boost for phoneme classification. Despite all three models drop for speaker recognition, the drop is mitigated dramatically in utterance-level setting (a more common scenario), suggesting that global heads are not necessary when speaker classification is performed on utterance level. In conclusion, we can prune SAT heads for more than ${50}\%$ without sacrificing performance.
|
| 126 |
+
|
| 127 |
+
§ 7.2. SPAN-BASED PRUNING
|
| 128 |
+
|
| 129 |
+
Since most of the heads have attention span over a long range (no matter what category it belongs to), we further conduct attention-span pruning to examine if global information is genuinely not helpful for extracting phonetic information. We limit the visible span of all heads by length $r$ , either to the left or right. That is, we set ${A}_{u}^{h}\left\lbrack {q,k}\right\rbrack = 0$ for any $\left| {q - k}\right| > r$ . Results are presented in Fig 5(g)(h)(i).
|
| 130 |
+
|
| 131 |
+
§ 8. CONCLUSION
|
| 132 |
+
|
| 133 |
+
In this paper, we present multiple strategies for analyzing the self-attention mechanism in SATs, including phoneme segmentation, phoneme relation map, and two aspects of pruning. We find several attention functionality and operations. We identify critical attentions and show our visualization tool useful for understanding pruning behavior. Finally, we conclude that we can refine representations and speed up inference time for a given SAT in two aspects: removing global heads or constraining attention span.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/MEQ_DSSJam_/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,307 @@
| 1 |
+
# Finding, visualizing, and quantifying latent structure across diverse animal vocal repertoires
|
| 2 |
+
|
| 3 |
+
Tim Sainburg ${}^{1}$ Marvin Thielk ${}^{1}$ Timothy Q. Gentner ${}^{1}$
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
Animals produce vocalizations that range in complexity from a single repeated call to hundreds of unique vocal elements patterned in sequences unfolding over hours. Characterizing complex vocalizations can require considerable effort and a deep intuition about each species' vocal behavior. Even with a great deal of experience, human characterizations of animal communication can be affected by human perceptual biases. We present a set of computational methods for projecting animal vocalizations into low dimensional latent representational spaces that are directly learned from the spectrograms of vocal signals. We apply these methods to diverse datasets from over 20 species, including humans, bats, songbirds, mice, cetaceans, and nonhuman primates. Latent projections uncover complex features of data in visually intuitive and quantifiable ways, enabling high-powered comparative analyses of unbiased acoustic features. We introduce methods for analyzing vocalizations as both discrete sequences and as continuous latent variables. Each method can be used to disentangle complex spectro-temporal structure and observe long-timescale organization in communication.
|
| 8 |
+
|
| 9 |
+
## 1. Introduction
|
| 10 |
+
|
| 11 |
+
Vocal communication is a common social behavior among many species, in which acoustic signals are transmitted from sender to receiver to convey information such as identity, individual fitness, or the presence of danger. Across diverse fields, a set of shared research questions seeks to uncover the structure and mechanism of vocal communication: What information is carried within signals? How are signals produced and perceived? How does the communicative transmission of information affect fitness and reproductive success? Many methods are available to address these questions quantitatively, most of which are founded on underlying principles of abstraction and characterization of 'units' in the vocal time series (Kershenbaum et al., 2016). For example, segmentation of birdsong into temporally discrete elements followed by clustering into discrete categories has played a crucial role in understanding syntactic structure in birdsong (Kershenbaum et al., 2016; Berwick et al., 2011; Sainburg et al., 2019; Katahira et al., 2013; Markowitz et al., 2013; Cody et al., 2016; Hedley, 2016; Koumura & Okanoya, 2016; Gentner & Hulse, 1998).
|
| 12 |
+
|
| 13 |
+
The characterization and abstraction of vocal communication signals remains both an art and a science. In a recent survey, Kershenbaum et. al., (2016) outline four common steps used in many analyses to abstract and describe vocal sequences: (1) the collection of data, (2) segmentation of vocalizations into units, (3) characterization of sequences, and (4) identification of meaning. A number of heuristics guide these steps, but it is largely up to the experimenter to determine which heuristics to apply and how. This application typically requires expert-level knowledge, which in turn can be difficult and time-consuming to acquire, and often unique to the structure of each species' vocal repertoire. For instance, what constitutes a 'unit' of humpback whale song? Do these units generalize to other species? Should they? When such intuitions are available they should be considered, of course, but they are generally rare in comparison to the wide range of communication signals observed naturally. As a result, communication remains understudied in most of the thousands of vocally communicating species. Even in well-documented model species, characterizations of vocalizations are often influenced by human perceptual and cognitive biases (Suzuki et al., 2006; Tyack, 1998; Janik, 1999; Kershenbaum et al., 2016). We explore a class of unsupervised, computational, machine learning techniques that avoid many of the foregoing limitations, and provide an alternative method to characterize vocal communication signals. Machine learning methods are designed to capture statistical patterns in complex datasets and have flourished in many domains (LeCun et al., 2015; Bengio et al., 2013; Radford et al., 2015; Becht et al., 2019; Brown & De Bivort, 2018; Becht et al., 2019). These techniques are therefore well suited to quantitatively investigate complex statistical structure in vocal repertoires that otherwise rely upon expert intuitions. In this paper, we demonstrate the utility of unsupervised latent models, statistical models that learn latent (compressed) representations of complex data, in describing animal communication.
|
| 14 |
+
|
| 15 |
+
---
|
| 16 |
+
|
| 17 |
+
${}^{1}$ University of California, San Diego, USA. Correspondence to: Tim Sainburg <tsainbur@ucsd.edu>.
|
| 18 |
+
|
| 19 |
+
Proceedings of the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).
|
| 20 |
+
|
| 21 |
+
---
|
| 22 |
+
|
| 23 |
+

|
| 24 |
+
|
| 25 |
+
Figure 1. Graph-based dimensionality reduction. Current non-linear dimensionality reduction algorithms like t-SNE, UMAP, and ISOMAP work by building a graph representing the relationships between high-dimensional data points, projecting those data points into a low-dimensional space, and finding an embedding that retains the structure of the graph. This figure is for visualization only; the spectrograms do not actually correspond to the points in the 3D space.
|
| 26 |
+
|
| 27 |
+
### 1.1. Latent models of acoustic communication
|
| 28 |
+
|
| 29 |
+
Dimensionality reduction refers to the compression of high-dimensional data into a smaller number of dimensions, while retaining the structure and variance present in the original high-dimensional data. Each point in the high-dimensional input space can be projected into the lower-dimensional 'latent' feature space, and dimensions of the latent space can be thought of as features of the dataset. Animal vocalizations are good targets for dimensionality reduction. They appear naturally as sound pressure waveforms with rich, multi-dimensional temporal and spectral variations, but can generally be explained by lower-dimensional dynamics (Perl et al., 2011; Gardner et al., 2001; Arneodo et al., 2012). Dimensionality reduction, therefore, offers a way to infer a smaller set of latent dimensions (or features) that can explain much of the variance in high-dimensional vocalizations.
|
| 30 |
+
|
| 31 |
+
The common practice of developing a set of basis-features on which vocalizations can be quantitatively compared (also called Predefined Acoustic Features, or PAFs) is a form of dimensionality reduction and comes standard in most animal vocalization analysis software (e.g. Luscinia (Lachlan et al., 2018), Sound Analysis Pro (Tchernichovski & Mitra, 2004; Tchernichovski et al., 2000), BioSound (Elie & Theunissen, 2018), Avisoft (Specht, 2002), and Raven (Charif et al., 2010)). Birdsong, for example, is often analyzed on the basis of features such as amplitude envelope, Weiner entropy, spectral continuity, pitch, duration, and frequency modulation (Tchernichovski & Mitra, 2004; Kershenbaum et al., 2016). Grouping elements of animal vocalizations (e.g. syllables of birdsong, mouse ultrasonic vocalizations) into abstracted discrete categories is also a form of dimensionality reduction, where each category is a single orthogonal dimension. In machine learning parlance, the process of determining the relevant features, or dimensions, of a particular dataset, is called feature engineering.
|
| 32 |
+
|
| 33 |
+
An attractive alternative to feature engineering is to project animal vocalizations into low-dimensional feature spaces that are determined directly from the structure of the data. Many methods for data-driven dimensionality reduction are available. PCA, for example, projects data onto a lower-dimensional surface that maximizes the variance of the projected data (Dunlop et al., 2007; Kershenbaum et al., 2016), while multidimensional scaling (MDS) projects data onto a lower-dimensional surface that maximally preserves the pairwise distances between data points. Both PCA and MDS are capable of learning manifolds that are linear or near-linear transformations of the original high-dimensional data space (Tenenbaum et al., 2000).
|
| 34 |
+
|
| 35 |
+
More recently developed graph-based methods extend dimensionality reduction to infer latent manifolds as nonlinear transformations of the original high-dimensional space using ideas from topology (e.g. ISOMAP, UMAP, t-SNE; Tenenbaum et al. (2000); McInnes et al. (2018); Maaten & Hinton (2008)). Like their linear predecessors, these non-linear dimensionality reduction algorithms also try to find a low-dimensional manifold that captures variation in the higher-dimensional input data, but the graph-based methods allow the manifold to be continuously deformed, by for example stretching, twisting, and/or shrinking, in high dimensional space. These algorithms work by building a topological representation of the data and then learning a low-dimensional embedding that preserves the structure of the topological representation (Fig 1). For example, while MDS learns a low-dimensional embedding that preserves the pairwise distance between points in Euclidean space, ISOMAP (Tenenbaum et al., 2000), one of the original topological non-linear dimensionality reduction algorithms, infers a graphical representation of the data and then performs MDS on the pairwise distances between points within the graph (geodesics) rather than in Euclidean space.
|
| 36 |
+
|
| 37 |
+
In this paper, we describe a class of nonlinear latent models that learn complex feature-spaces of vocalizations, requiring few a priori assumptions about the features that best describe a species' vocalizations. We show that these methods reveal informative, low-dimensional, feature-spaces that enable the formulation and testing of hypotheses about animal communication. We apply our method to diverse datasets consisting of over 20 species, including humans, bats, songbirds, mice, cetaceans, and nonhuman primates. We introduce methods for treating vocalizations both as sequences of temporally discrete elements such as syllables, as is traditional in studying animal communication (Kershenbaum et al., 2016), as well as temporally continuous trajectories, as is becoming increasingly common in representing neural sequences (Cunningham & Byron, 2014). Using both methods, we show that latent projections produce visually-intuitive and quantifiable representations that capture complex acoustic features. We show comparatively that the spectrotemporal characteristics of vocal units vary from species to species in how distributionally discrete they are and discuss the relative utility of different ways to represent different communicative signals.
|
| 38 |
+
|
| 39 |
+
## 2. Results
|
| 40 |
+
|
| 41 |
+
### 2.1. Discrete latent projections of animal vocalizations
|
| 42 |
+
|
| 43 |
+
To explore the broad utility of latent models in capturing features of vocal repertoires, we analyzed nineteen datasets consisting of 400 hours of vocalizations and over 3,000,000 discrete vocal units from 29 unique species. Each vocalization dataset was temporally segmented into discrete units (e.g. syllables, notes), either based upon segmentation boundaries provided by the dataset (where available), or using a novel dynamic-thresholding segmentation algorithm that segments syllables of vocalizations between detected pauses in the vocal stream. Each dataset was chosen because it contains large repertoires of vocalizations from relatively acoustically isolated individuals that can be cleanly separated into temporally-discrete vocal units. With each temporally discrete vocal unit we computed a spectrographic representation. We then projected the spectrograms into latent feature spaces using UMAP (e.g. Figs 2, 3). From these latent feature spaces, we analyzed datasets for classic vocal features of animal communication signals, speech features, stereotypy/clusterability, and sequential organization.
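As a minimal sketch of this projection step (using the umap-learn package); the exact spectrogram preprocessing, padding, and UMAP parameters follow the paper's pipeline and are simplified here.

```python
import numpy as np
import umap  # umap-learn

def project_syllables(spectrograms, n_components=2):
    """Project syllable spectrograms into a low-dimensional UMAP latent space.
    Assumes the spectrograms have already been padded/resized to a common shape;
    each one is flattened into a single vector before embedding."""
    X = np.asarray(spectrograms).reshape(len(spectrograms), -1)
    return umap.UMAP(n_components=n_components).fit_transform(X)
```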
|
| 44 |
+
|
| 45 |
+
Individual identity Many species produce caller-specific vocalizations that facilitate the identification of individuals when other sensory cues, such as sight, are not available. The features of vocalizations facilitating individual identification vary between species. We projected identity call datasets (i.e., sets of calls thought to carry individual identity information) from four different species into UMAP latent spaces (one per species) to observe whether individual identity falls out naturally within the latent space.
|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
|
| 49 |
+
Figure 2. Individual identity is captured in projections for some datasets. Each plot shows vocal elements discretized, spectrogrammed, and then embedded into a 2D UMAP space, where each point in the scatterplot represents a single element (e.g. syllable of birdsong). Scatterplots are colored by individual identity. The borders around each plot are example spectrograms pointing toward different regions of the scatterplot. (A) Rhesus macaque coo calls. (B) Zebra finch distance calls. (C) Fruit bat infant isolation calls. (D) Marmoset phee calls.
|
| 50 |
+
|
| 51 |
+
We looked at four datasets where both caller and call-type are available. Caller identity is evident in latent projections of all four datasets (Fig 2). The first dataset is comprised of Macaque coo calls, where identity information is thought to be distributed across multiple features including fundamental frequency, duration, and Weiner entropy (Fukushima et al., 2015). Indeed, the latent projection of coo calls clustered tightly by individual identity (silhouette score $= {0.378}$ ; Fig 2A). The same is true for Zebra finch distance calls (Elie & Theunissen, 2016) (silhouette score = 0.615; Fig 2B). Egyptian fruit bat pup isolation calls, which in other bat species are discriminable by adult females (Bohn et al., 2007; Engler et al., 2017; Bohn et al., 2007) clearly show regions of UMAP space densely occupied by single individual's vocalizations, but no clear clusters (silhouette score = -0.078; Fig 2C). In the marmoset phee call dataset (Miller et al., 2010) it is perhaps interesting that given the range of potential features thought to carry individual identity (Fukushima et al., 2015), phee calls appear to lie along a single continuum where each individual's calls occupy overlapping regions of the continuum (silhouette score $= - {0.062}$ ; Fig 2D). The silhouette score for each species was well above chance $\left( {\mathrm{H}\left( 2\right) > {20},\mathrm{p} < {10}^{-5}}\right)$ . These patterns predict that some calls, such as macaque coo calls, would be more easily discriminable by conspecifics than other calls, such as marmoset phee calls.
|
| 52 |
+
|
| 53 |
+
#### 2.1.1. VARIATION IN DISCRETE DISTRIBUTIONS AND STEREOTYPY
|
| 54 |
+
|
| 55 |
+

|
| 56 |
+
|
| 57 |
+
Figure 3. UMAP projections of vocal repertoires across diverse species. Each plot shows vocal elements segmented, spectrogrammed, and then embedded into a 2D UMAP space, where each point in the scatterplot represents a single element (e.g. syllable of birdsong). Scatterplots are colored by element categories over individual vocalizations as defined by the authors of each dataset, where available. (A) Human phonemes. (B) Egyptian fruit bat calls (color is context). (C) Cassin's vireo syllables. (O) Clusterability (Hopkins metric) for each dataset; lower is more clusterable. The Hopkins metric is computed over UMAP-projected vocalizations for each species. Error bars show the 95% confidence interval across individuals. Color represents species category (red: mammal, blue: songbird).
|
| 58 |
+
|
| 59 |
+
In species as phylogenetically diverse as songbirds and rock hyraxes, analyzing the sequential organization of communication relies upon similar methods of segmentation and categorization of discrete vocal elements (Kershenbaum et al., 2016). In species such as the Bengalese finch, where syllables are highly stereotyped, clustering syllables into discrete categories is a natural way to abstract song. The utility of clustering song elements in other species, however, is more contentious because discrete category boundaries are not as easily discerned (Tyack, 1998; Suzuki et al., 2006; Goffinet et al., 2019; Hertz et al., 2019).
|
| 60 |
+
|
| 61 |
+
To compare broad structural characteristics across a wide sampling of species, we projected vocalizations from 14 datasets of different species vocalizations, ranging across songbirds, cetaceans, primates, and rodents into UMAP space (Fig 3). To do so, we sampled from a diverse range of datasets, each of which was recorded from a different species in a different setting. Some datasets were recorded from single isolated individuals in a sound isolated chamber in a laboratory setting, while others were recorded from large numbers of freely behaving individuals in the wild. In addition, the units of vocalization from each dataset are variable. We used the smallest units of each vocalization that could be easily segmented, for example, syllables, notes, and phonemes. Thus, this comparison across species is not well-controlled. Still, such a dataset enabling a broad comparison in a well-controlled manner does not exist. Latent projections of such diverse recordings, while limited in a number of ways, have the potential to provide a glimpse into broad structure into vocal repertoires, yielding novel insights into broad trends in animal communication. For each dataset, we computed spectrograms of isolated elements, and projected those spectrograms into UMAP space (Fig 3). Where putative element labels are available, we plot them in color over each dataset.
Visually inspecting the latent projections of vocalizations reveals appreciable variability in how the repertoires of different species cluster in latent space. For example, mouse USVs appear as a single cluster (Fig 3I), while zebra finch syllables appear as multiple discrete clusters (Fig 3M, F), and gibbon song sits somewhere in between (Fig 3L). This suggests that the spectro-temporal acoustic diversity of vocal repertoires falls along a continuum ranging from unclustered and uni-modal to highly clustered.
We quantified this effect using a linear mixed-effects model comparing the Hopkins statistic across UMAP projections of vocalizations from single individuals $(\mathrm{n} = 289)$, controlling for the number of vocalizations produced by each individual as well as random variability at the level of species. We included each of the species in Fig 3 except giant otter and gibbon vocalizations, as individual identity was not available for those datasets. We find that songbird vocalizations are significantly more clustered than mammalian vocalizations $(\chi^{2}(1) = 20, \mathrm{p} < 10^{-5})$. The stereotypy of songbird (and other avian) vocal elements is well documented (Williams, 2004; Smith et al., 1997) and, at least in zebra finches, is related to the high temporal precision in the singing-related neural activity of vocal-motor brain regions (Hahnloser et al., 2002; Fee et al., 2004; Chi & Margoliash, 2001).
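The Hopkins statistic itself is simple to compute; below is a minimal sketch following the convention used here, in which lower values indicate more clusterable data (other sources define the statistic with the ratio inverted). The sampling fraction is an assumption for illustration.

```python
# Minimal sketch of the Hopkins statistic over a 2D UMAP embedding.
# Convention here: values near 0 indicate highly clusterable data,
# values near 0.5 indicate roughly uniform data (lower = more clusterable).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def hopkins_statistic(embedding, n_samples=None, seed=0):
    rng = np.random.default_rng(seed)
    n = len(embedding)
    m = n_samples or max(1, n // 10)
    nn = NearestNeighbors(n_neighbors=2).fit(embedding)

    # Distances from sampled real points to their nearest other real point
    # (take the second neighbor to skip the point itself).
    real = embedding[rng.choice(n, size=m, replace=False)]
    w = nn.kneighbors(real, n_neighbors=2)[0][:, 1]

    # Distances from uniform random points in the bounding box to the data.
    uniform = rng.uniform(embedding.min(axis=0), embedding.max(axis=0),
                          size=(m, embedding.shape[1]))
    u = nn.kneighbors(uniform, n_neighbors=1)[0][:, 0]

    return w.sum() / (w.sum() + u.sum())
```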

Figure 4. HDBSCAN density-based clustering. Clusters are found by generating a graphical representation of the data and then clustering on the graph. The data shown in this figure are the latent projections from Fig 1. Notably, the three clusters in Fig 1 are grouped into only two clusters by HDBSCAN, exhibiting a potential shortcoming of the algorithm. The grey colormap in the condensed trees represents the number of points in each branch of the tree. $\Lambda$ is a value used to compute the persistence of clusters in the condensed trees.
#### 2.1.2. CLUSTERING VOCAL ELEMENT CATEGORIES
UMAP projections of birdsong largely fall into neatly discriminable clusters (Fig 3). If clusters in latent space correspond closely to experimenter-labeled element categories, unsupervised latent clustering could provide an automated and less time-intensive alternative to hand-labeling elements of vocalizations. To examine this, we compared how well clusters in latent space correspond to experimenter-labeled categories in three human-labeled datasets: two separate Bengalese finch datasets (Nicholson et al., 2017; Koumura, 2016) and one Cassin's vireo dataset (Hedley, 2016). We compared four different labeling techniques: a hierarchical density-based clustering algorithm (HDBSCAN; Campello et al., 2013; McInnes et al., 2017) applied to UMAP projections of spectrograms, HDBSCAN applied to PCA projections of spectrograms${}^{1}$, k-means clustering (Pedregosa et al., 2011) applied over UMAP projections, and k-means clustering applied over spectrograms. We found that HDBSCAN clustering outperformed the other clustering algorithms on all metrics for all datasets (see full manuscript).
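A minimal sketch of this comparison is shown below: cluster the latent projections with HDBSCAN and score agreement with the hand labels using standard external clustering metrics. The specific metrics and parameters reported in the full manuscript may differ.

```python
# Minimal sketch: cluster UMAP projections with HDBSCAN and compare the
# resulting labels against experimenter-provided syllable labels.
import hdbscan
from sklearn import metrics

def score_latent_clusters(embedding, hand_labels, min_cluster_size=20):
    cluster_labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(embedding)
    return {
        "adjusted_rand": metrics.adjusted_rand_score(hand_labels, cluster_labels),
        "homogeneity": metrics.homogeneity_score(hand_labels, cluster_labels),
        "completeness": metrics.completeness_score(hand_labels, cluster_labels),
        "v_measure": metrics.v_measure_score(hand_labels, cluster_labels),
    }
```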
Echoing the contrast between MDS and UMAP, the k-means clustering algorithm works directly on the Euclidean distances between data points, whereas HDBSCAN operates on a graph-based transform of the input data (Fig 4). Briefly, HDBSCAN first defines a 'mutual reachability' distance between elements, a measure of the distance between points in the dataset weighted by the local sparsity/density around each point (measured as the distance to its $k$th nearest neighbor). HDBSCAN then builds a graph in which each edge between vertices (points in the dataset) is weighted by the mutual reachability between those points, and prunes the edges to construct a minimum spanning tree (a graph containing the minimum set of edges needed to connect all of the vertices). The minimum spanning tree is converted into a hierarchy of clusters of points sorted by mutual reachability distance, and then condensed iteratively into a smaller hierarchy of putative clusters. Finally, clusters are chosen as those that persist and are stable over the greatest range in the hierarchy.
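In practice these steps are handled internally by the hdbscan package. The sketch below, using hypothetical stand-in data, fits a clusterer and inspects the condensed tree and the persistence of each selected cluster (the quantity derived from $\Lambda$ discussed in Fig 4).

```python
# Minimal sketch: fit HDBSCAN and inspect cluster stability.
import numpy as np
import hdbscan

# Hypothetical stand-in for a 2D UMAP projection of syllable spectrograms.
embedding = np.random.RandomState(0).normal(size=(500, 2))

clusterer = hdbscan.HDBSCAN(min_cluster_size=20)
labels = clusterer.fit_predict(embedding)  # label -1 marks points treated as noise

# Stability of each selected cluster over the range of the condensed hierarchy;
# the hierarchy itself is available as clusterer.condensed_tree_ (plottable with
# clusterer.condensed_tree_.plot()).
print(clusterer.cluster_persistence_)
```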
#### 2.1.3. ABSTRACTING AND VISUALIZING SEQUENTIAL ORGANIZATION
As acoustic signals, animal vocalizations have an inherent temporal structure that can extend across time scales, from short, easily discretized elements such as notes to longer-duration syllables, phrases, songs, and bouts. The latent projection methods described above can be used to abstract corpora of song elements well suited to temporal pattern analyses (Sainburg et al., 2019), and to make more direct measures of continuous vocalization time series. Moreover, their automaticity enables the high throughput necessary to satisfy the intensive data requirements of most quantitative sequence models.
In practice, modeling sequential organization can be applied to any discrete dataset of vocal elements, whether labeled by hand or algorithmically. Latent projections of vocal elements have the added benefit of allowing visualization of the sequential organization that can be compared to abstracted models. As an example, we derived a corpus of symbolically segmented vocalizations from a dataset of Bengalese finch song using latent projections and clustering (Fig 5). Bengalese finch song bouts comprise a small number (~5-15) of highly stereotyped syllables produced in well-defined temporal sequences a few dozen syllables long (Katahira et al., 2013). We first projected syllables from a single Bengalese finch into UMAP latent space, then visualized transitions between vocal elements in latent space as line segments between points (Fig 5B), revealing highly regular patterns. To abstract this organization to a grammatical model, we clustered latent projections into discrete categories using HDBSCAN. Each bout is then treated as a sequence of symbolically labeled syllables (e.g. $B \rightarrow B \rightarrow C \rightarrow A$; Fig 5D) and the entire dataset is rendered as a corpus of transcribed song (Fig 5E). Using the transcribed corpus, one can abstract statistical and grammatical models of song, such as the Markov model shown in Fig 5C or the information-theoretic analysis in Sainburg et al. (2019).
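A first-order Markov model like the one in Fig 5C can be estimated directly from such a transcribed corpus. Below is a minimal sketch, assuming `bouts` is a list of syllable-label sequences (e.g. ['B', 'B', 'C', 'A']).

```python
# Minimal sketch: estimate first-order Markov transition probabilities
# from a corpus of symbolically transcribed song bouts.
import numpy as np

def transition_matrix(bouts):
    states = sorted({syl for bout in bouts for syl in bout})
    index = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for bout in bouts:
        for current, nxt in zip(bout[:-1], bout[1:]):
            counts[index[current], index[nxt]] += 1
    # Normalize each row into transition probabilities (rows with no
    # outgoing transitions are left as zeros).
    row_sums = counts.sum(axis=1, keepdims=True)
    probs = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
    return states, probs

# Example usage with a toy corpus:
states, probs = transition_matrix([["B", "B", "C", "A"], ["B", "C", "C", "A"]])
```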
---

${}^{1}$ HDBSCAN is applied to 100-dimensional PCA projections rather than spectrograms directly because HDBSCAN does not perform well in high-dimensional spaces (McInnes et al., 2017).

---

Figure 5. Latent visualizations of Bengalese finch song sequences. (A) Syllables of Bengalese finch songs from one individual are projected into 2D UMAP latent space and clustered using HDBSCAN. (B) Transitions between elements of song are visualized as line segments, where the color of the line segment represents its position within a bout. (C) The syllable categories and transitions in (A) and (B) can be abstracted to transition probabilities between syllable categories, as in a Markov model. (D) An example vocalization from the same individual, with syllable clusters from (A) shown above each syllable. (E) A series of song bouts. Each row is one bout, showing overlapping structure in syllable sequences. Bouts are sorted by similarity to help show structure in song.
### 2.2. Temporally continuous latent trajectories
Not all vocal repertoires are made up of elements that fall into highly discrete clusters in latent space (Fig 3). For several of the datasets we analyzed, categorically discrete elements are not readily apparent, making analyses such as the cluster-based analyses performed in Figure 5 more difficult. In addition, many vocalizations are difficult to segment temporally, and determining what features to use for segmentation requires careful consideration (Kershenbaum et al., 2016). In many bird songs, for example, clear pauses exist between song elements that enable one to distinguish syllables. In other vocalizations, however, experimenters must rely on less well-defined physical features for segmentation (Janik, 1999; Kershenbaum et al., 2016), which may in turn invoke a range of biases and unwarranted assumptions. At the same time, much of the research on animal vocal production, perception, and sequential organization relies on identifying "units" of a vocal repertoire (Kershenbaum et al., 2016). To better understand the effects of temporal discretization and categorical segmentation in our analyses, we considered vocalizations as continuous trajectories in latent space and compared the resulting representations to those that treat vocal segments as single points (as in the previous Bengalese finch example in Fig 5). We show here explorations of two datasets: Bengalese finch song (Fig 6) and human speech (Fig 7). In both datasets, we find that continuous latent trajectories capture short- and long-timescale structure in vocal sequences without requiring vocal elements to be segmented or labeled.
#### 2.2.1. COMPARING DISCRETE AND CONTINUOUS REPRESENTATIONS OF SONG IN THE BENGALESE FINCH

Bengalese finch song provides a relatively easy visual comparison between the discrete and continuous treatments of song, because it consists of a small number of unique, highly stereotyped syllables (Fig 6). With a single bout of Bengalese finch song, which contains several dozen syllables, we generated a latent trajectory of song as UMAP projections of temporally rolling windows of the bout spectrogram (see Projections section). To explore this latent space, we varied the window length between 1 and 100 ms (Fig 6A-L). At each window size, we compared UMAP projections (Fig 6A-C) to PCA projections (Fig 6D-F). In both PCA and UMAP, trajectories are more clearly visible as window size increases across the range tested, and overall the UMAP trajectories show more well-defined structure than the PCA trajectories. To compare continuous projections to discrete syllables, we re-colored the continuous trajectories by the discrete syllable labels obtained from the dataset. Again, as the window size increases, each syllable converges to a more distinct trajectory in UMAP space (Fig 6G-I). To visualize the discrete syllable labels and the continuous latent projections in relation to song, we converted the 2D projections into colorspace and show them as a continuous trajectory alongside the song spectrograms and discrete labels in Figure 6M,N. Colorspace representations of the 2D projections treat each of the two UMAP dimensions as one of the red, green, or blue channels in RGB (3D) colorspace while holding the third channel constant, creating a colormap projection of the two UMAP dimensions.
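A minimal sketch of this windowing, projection, and colorspace mapping is shown below, assuming `spec` is a bout spectrogram of shape (frequency bins, time frames) and expressing the window length in time frames; the exact windowing used for the figures may differ.

```python
# Minimal sketch: project rolling spectrogram windows into UMAP and map the
# 2D embedding into RGB colorspace for plotting over the spectrogram.
import numpy as np
import umap

def rolling_window_embedding(spec, window_frames=20):
    # One flattened window per starting time frame.
    windows = np.stack([
        spec[:, t:t + window_frames].flatten()
        for t in range(spec.shape[1] - window_frames)
    ])
    return umap.UMAP(n_components=2).fit_transform(windows)

def embedding_to_rgb(embedding):
    # Scale each UMAP dimension to [0, 1] and use them as the red and green
    # channels, holding the blue channel constant at 0.5.
    scaled = (embedding - embedding.min(axis=0)) / np.ptp(embedding, axis=0)
    blue = np.full((len(embedding), 1), 0.5)
    return np.hstack([scaled, blue])
```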

Figure 6. Continuous UMAP projections of Bengalese finch song from a single bout produced by one individual. (A-C) Bengalese finch song is segmented into either 1 ms (A), 20 ms (B), or 100 ms (C) rolling windows of song, which are projected into UMAP. Color represents time within the bout of song. (D-F) The same plots as in (A-C), projected into PCA instead of UMAP. (G-I) The same plots as (A-C) colored by hand-labeled element categories (unlabeled points are not shown). (J-L) The same plots as (D-F) colored by hand-labeled syllable categories. (M) UMAP projections represented in colorspace over a bout spectrogram. The top three rows are the UMAP projections from (A-C) projected into RGB colorspace to show the position within UMAP space over time, aligned with the underlying spectrogram data. The fourth row shows the hand labels. The final row is the bout spectrogram. (N) A subset of the bout shown in (M). In (G-L), unlabeled points (points that fall between syllables) are not shown for visual clarity.
#### 2.2.2. LATENT TRAJECTORIES OF HUMAN SPEECH
Discrete elements of human speech (i.e. phonemes) are not spoken in isolation, and their acoustics are influenced by neighboring sounds, a process termed coarticulation. For example, when producing the words 'day', 'say', or 'way', the position of the tongue, lips, and teeth differs dramatically at the beginning of the phoneme 'ey' due to the preceding 'd', 's', or 'w' phonemes, respectively. This results in differences in the pronunciation of 'ey' across words (Fig 7E). Coarticulation explains much of the acoustic variation observed within phonetic categories. Abstracting to phonetic categories therefore discounts much of this context-dependent acoustic variance.
We explored coarticulation in speech by projecting sets of words differing by a single phoneme (i.e. minimal pairs) into continuous latent spaces, then extracting trajectories of words and phonemes that capture sub-phonetic context-dependency (Fig 7). We obtained the words from the Buckeye corpus of conversational English. We computed spectrograms over all examples of each target word, then projected sliding 4-ms windows from each spectrogram into UMAP latent space to yield a continuous vocal trajectory over each word (Fig 7). We visualized trajectories by their corresponding word and phoneme labels (Fig 7A, B) and computed the average latent trajectory for each word and phoneme (Fig 7C, D). The average trajectories reveal context-dependent variation within phonemes caused by coarticulation. For example, the words 'way', 'day', and 'say' each end in the same phoneme ('ey'; Fig 7A-D), which appears as an overlapping region in the latent space (the red region in Fig 7C). The endings of each average word trajectory vary, however, indicating that the production of 'ey' differs based on its specific context (Fig 7C). The differences in the production of 'ey' can be observed in the average latent trajectory over each word, where the trajectories for 'day' and 'say' end in a sharp transition while the trajectory for 'way' is smoother (Fig 7C). Latent space trajectories can reveal other coarticulations as well. In Figure 7E, we show the different trajectories characterizing the phoneme 't' in the context of the word 'take' versus 'talk'. In this case, the 't' phoneme follows a similar trajectory for both words until it nears the next phoneme ('ey' vs. 'ao'), at which point the production of 't' diverges between the two words.
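One way to compute such average trajectories, sketched below under the assumption that `trajectories` is a list of (time, 2) latent trajectories for all utterances of a word, is to resample each trajectory onto a common relative time axis and average pointwise; the interpolation used for the figures may differ.

```python
# Minimal sketch: time-normalize a set of latent word trajectories and average
# them pointwise to obtain a mean trajectory for that word.
import numpy as np

def average_trajectory(trajectories, n_points=50):
    resampled = []
    for traj in trajectories:
        # Map each trajectory onto a common relative time axis [0, 1].
        src = np.linspace(0, 1, len(traj))
        dst = np.linspace(0, 1, n_points)
        resampled.append(np.column_stack([
            np.interp(dst, src, traj[:, dim]) for dim in range(traj.shape[1])
        ]))
    return np.mean(resampled, axis=0)
```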

Figure 7. Speech trajectories showing coarticulation in minimal pairs. (A) Utterances of the words 'day', 'say', and 'way' are projected into a continuous UMAP latent space with a window size of 4 ms. Color represents the corresponding word. (B) The same projections are colored by the corresponding phonemes. (D) The average latent trajectory for each word. (E) The average trajectory for each phoneme. (F) Example spectrograms of words, with latent trajectories above spectrograms and phoneme labels below spectrograms. (G) Average trajectories and corresponding spectrograms for the words 'take' and 'talk' showing the different trajectories for 't' in each word. (H) Average trajectories and the corresponding spectrograms for the words 'then' and 'them' showing the different trajectories for 'eh' in each word.
## 3. Discussion
We have presented a set of computational methods for projecting vocal communication signals into low-dimensional latent representational spaces, learned directly from the spectrograms of the signals. We demonstrate the flexibility and power of these methods by applying them to a wide sample of animal vocal communication signals, including songbirds, primates, rodents, bats, and cetaceans (Fig 3). Deployed over short timescales of a few hundred milliseconds, our methods capture significant behaviorally relevant structure in the spectro-temporal acoustics of these diverse species' vocalizations. We find that complex attributes of vocal signals, such as individual identity (Fig 2), species identity, geographic population variability, phonetics, and similarity-based clusters, can all be captured by the unsupervised latent space representations we present. We also show that songbirds tend to produce signals that cluster discretely in latent space, whereas mammalian vocalizations are more uniformly distributed, an observation that deserves much closer investigation in more species. Applied to longer timescales, spanning seconds or minutes, the same methods allowed us to visualize sequential organization and test models of vocal sequencing (Fig 5). We demonstrated that in some cases latent approaches confer advantages over hand labeling or supervised learning (see full manuscript/code). Finally, we visualized vocalizations as continuous trajectories in latent space (Figs 6, 7), providing a powerful method for studying sequential organization without discretization (Kershenbaum et al., 2016).

Latent models have shown increasing utility in the biological sciences over the past several years. As machine learning algorithms improve, so will their utility in characterizing the complex patterns present in biological systems like animal communication. In neuroscience, latent models already play an important role in characterizing complex neural population dynamics (Cunningham & Byron, 2014). Similarly, latent models are playing an increasingly important role in computational ethology (Brown & De Bivort, 2018), where characterizations of animal movements and behaviors have uncovered complex sequential organization (Marques et al., 2018; Berman et al., 2016; Wiltschko et al., 2015). In animal communication, pattern recognition using various machine learning techniques has been used to characterize vocalizations and label auditory objects (Sainburg et al., 2019; Cohen et al., 2019; Coffey et al., 2019; Van Segbroeck et al., 2017; Goffinet et al., 2019; Kollmorgen et al., 2019; Hertz et al., 2019). Our work furthers this emerging research area by demonstrating the utility of unsupervised latent models for both systematically visualizing and abstracting structure from animal vocalizations across a wide range of species.
## Software and Data
All software is publicly available and example Jupyter notebooks are provided for each species' vocal repertoire and analysis type (https://github.com/timsainb/avgn_paper). The data are provided in Supplementary Table 1 of the long-form paper.
## Acknowledgements
Work supported by NSF GRF 2017216247 and an Annette Merle-Smith Fellowship to T.S. and NIH DC0164081 and DC018055 to T.Q.G. We additionally would like to thank Kyle McDonald and his colleagues for motivating some of our visualization techniques with their work on humpback whale song (McDonald, 2019).
## References
Arneodo, E. M., Perl, Y. S., Goller, F., and Mindlin, G. B. Prosthetic avian vocal organ controlled by a freely behaving bird based on a low dimensional model of the biomechanical periphery. PLoS Computational Biology, 8(6), 2012.

Becht, E., McInnes, L., Healy, J., Dutertre, C.-A., Kwok, I. W., Ng, L. G., Ginhoux, F., and Newell, E. W. Dimensionality reduction for visualizing single-cell data using UMAP. Nature Biotechnology, 37(1):38, 2019.

Bengio, Y., Courville, A., and Vincent, P. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798-1828, 2013.

Berman, G. J., Bialek, W., and Shaevitz, J. W. Predictability and hierarchy in Drosophila behavior. Proceedings of the National Academy of Sciences, 113(42):11943-11948, 2016.

Berwick, R. C., Okanoya, K., Beckers, G. J., and Bolhuis, J. J. Songs to syntax: the linguistics of birdsong. Trends in Cognitive Sciences, 15(3):113-121, 2011.

Bohn, K. M., Wilkinson, G. S., and Moss, C. F. Discrimination of infant isolation calls by female greater spear-nosed bats, Phyllostomus hastatus. Animal Behaviour, 73(3):423-432, 2007.

Brown, A. E. and De Bivort, B. Ethology as a physical science. Nature Physics, 14(7):653-657, 2018.

Campello, R. J., Moulavi, D., and Sander, J. Density-based clustering based on hierarchical density estimates. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pp. 160-172. Springer, 2013.

Charif, R., Waack, A., and Strickman, L. Raven Pro 1.4 user's manual. Cornell Lab of Ornithology, Ithaca, NY, 25506974, 2010.

Chi, Z. and Margoliash, D. Temporal precision and temporal drift in brain and behavior of zebra finch song. Neuron, 32(5):899-910, 2001.

Cody, M. L., Stabler, E., Sánchez Castellanos, H. M., and Taylor, C. E. Structure, syntax and "small-world" organization in the complex songs of California thrashers (Toxostoma redivivum). Bioacoustics, 25(1):41-54, 2016.

Coffey, K. R., Marx, R. G., and Neumaier, J. F. DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations. Neuropsychopharmacology, 44(5):859, 2019.

Cohen, Y., Shen, J., Semu, D., Leman, D. P., Liberti, W. A., Perkins, N. L., Liberti, D. C., Kotton, D., and Gardner, T. J. Hidden neural states underlie canary song syntax. bioRxiv, pp. 561761, 2019.

Cunningham, J. P. and Byron, M. Y. Dimensionality reduction for large-scale neural recordings. Nature Neuroscience, 17(11):1500, 2014.

Dunlop, R. A., Noad, M. J., Cato, D. H., and Stokes, D. The social vocalization repertoire of east Australian migrating humpback whales (Megaptera novaeangliae). The Journal of the Acoustical Society of America, 122(5):2893-2905, 2007.

Elie, J. E. and Theunissen, F. E. The vocal repertoire of the domesticated zebra finch: a data-driven approach to decipher the information-bearing acoustic features of communication signals. Animal Cognition, 19(2):285-315, 2016.

Elie, J. E. and Theunissen, F. E. Zebra finches identify individuals using vocal signatures unique to each call type. Nature Communications, 9(1):4026, 2018.

Engler, S., Rose, A., and Knörnschild, M. Isolation call ontogeny in bat pups (Glossophaga soricina). Behaviour, 154(3):267-286, 2017.

Fee, M. S., Kozhevnikov, A., and Hahnloser, R. Neural mechanisms of vocal sequence generation in the songbird. Annals of the New York Academy of Sciences, 1016(1), 2004.

Fukushima, M., Doyle, A. M., Mullarkey, M. P., Mishkin, M., and Averbeck, B. B. Distributed acoustic cues for caller identity in macaque vocalization. Royal Society Open Science, 2(12):150432, 2015.

Gardner, T., Cecchi, G., Magnasco, M., Laje, R., and Mindlin, G. B. Simple motor gestures for birdsongs. Physical Review Letters, 87(20):208101, 2001.

Gentner, T. Q. and Hulse, S. H. Perceptual mechanisms for individual vocal recognition in European starlings, Sturnus vulgaris. Animal Behaviour, 56(3):579-594, 1998.

Goffinet, J., Mooney, R., and Pearson, J. Inferring low-dimensional latent descriptions of animal vocalizations. bioRxiv, pp. 811661, 2019.

Hahnloser, R. H., Kozhevnikov, A. A., and Fee, M. S. An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature, 419(6902):65, 2002.

Hedley, R. W. Complexity, predictability and time homogeneity of syntax in the songs of Cassin's vireo (Vireo cassinii). PLoS ONE, 11(4):e0150822, 2016.

Hertz, S., Weiner, B., Perets, N., and London, M. High order structure in mouse courtship vocalizations. bioRxiv, pp. 728477, 2019.

Janik, V. M. Pitfalls in the categorization of behaviour: a comparison of dolphin whistle classification methods. Animal Behaviour, 57(1):133-143, 1999.

Katahira, K., Suzuki, K., Kagawa, H., and Okanoya, K. A simple explanation for the evolution of complex song syntax in Bengalese finches. Biology Letters, 9(6):20130842, 2013.

Kershenbaum, A., Blumstein, D. T., Roch, M. A., Akçay, C., Backus, G., Bee, M. A., Bohn, K., Cao, Y., Carter, G., Cäsar, C., et al. Acoustic sequences in non-human animals: a tutorial review and prospectus. Biological Reviews, 91(1):13-52, 2016.

Kollmorgen, S., Hahnloser, R., and Mante, V. Neighborhood-statistics reveal complex dynamics of song acquisition in the zebra finch. bioRxiv, pp. 595512, 2019.

Koumura, T. BirdsongRecognition. July 2016. doi: 10.6084/m9.figshare.3470165.v1. URL https://figshare.com/articles/BirdsongRecognition/3470165.

Koumura, T. and Okanoya, K. Automatic recognition of element classes and boundaries in the birdsong with variable sequences. PLoS ONE, 11(7):e0159188, 2016.

Lachlan, R. F., Ratmann, O., and Nowicki, S. Cultural conformity generates extremely stable traditions in bird song. Nature Communications, 9(1):2417, 2018.

LeCun, Y., Bengio, Y., and Hinton, G. Deep learning. Nature, 521(7553):436-444, 2015.

Maaten, L. v. d. and Hinton, G. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.

Markowitz, J. E., Ivie, E., Kligler, L., and Gardner, T. J. Long-range order in canary song. PLoS Computational Biology, 9(5):e1003052, 2013.

Marques, J. C., Lackner, S., Félix, R., and Orger, M. B. Structure of the zebrafish locomotor repertoire revealed with unsupervised behavioral clustering. Current Biology, 28(2):181-195, 2018.

McDonald, K. Data of the humpback whale, June 2019. [Online; posted 05-June-2019].

McInnes, L., Healy, J., and Astels, S. hdbscan: Hierarchical density based clustering. Journal of Open Source Software, 2(11):205, 2017.

McInnes, L., Healy, J., and Melville, J. UMAP: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426, 2018.

Miller, C. T., Mandel, K., and Wang, X. The communicative content of the common marmoset phee call during antiphonal calling. American Journal of Primatology, 72(11):974-980, 2010.

Nicholson, D., Queen, J. E., and Sober, S. J. Bengalese finch song repository. October 2017. doi: 10.6084/m9.figshare.4805749.v5. URL https://figshare.com/articles/Bengalese_Finch_song_repository/4805749.

Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., and Duchesnay, E. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.

Perl, Y. S., Arneodo, E. M., Amador, A., Goller, F., and Mindlin, G. B. Reconstruction of physiological instructions from zebra finch song. Physical Review E, 84(5):051909, 2011.

Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.

Sainburg, T., Theilman, B., Thielk, M., and Gentner, T. Q. Parallels in the sequential organization of birdsong and human speech. Nature Communications, 10(1):1-11, 2019.

Smith, G. T., Brenowitz, E. A., Beecher, M. D., and Wingfield, J. C. Seasonal changes in testosterone, neural attributes of song control nuclei, and song structure in wild songbirds. Journal of Neuroscience, 17(15):6001-6010, 1997.

Specht, R. Avisoft-SASLab Pro: sound analysis and synthesis laboratory. Avisoft Bioacoustics, Berlin, 2002.

Suzuki, R., Buck, J. R., and Tyack, P. L. Information entropy of humpback whale songs. The Journal of the Acoustical Society of America, 119(3):1849-1866, 2006.

Tchernichovski, O. and Mitra, P. P. Sound Analysis Pro user manual. CCNY, New York, 2004.

Tchernichovski, O., Nottebohm, F., Ho, C. E., Pesaran, B., and Mitra, P. P. A procedure for an automated measurement of song similarity. Animal Behaviour, 59(6):1167-1176, 2000.

Tenenbaum, J. B., De Silva, V., and Langford, J. C. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, 2000.

Tyack, P. Acoustic communication under the sea. In Animal Acoustic Communication, pp. 163-220. Springer, 1998.

Van Segbroeck, M., Knoll, A. T., Levitt, P., and Narayanan, S. MUPET: mouse ultrasonic profile extraction, a signal processing tool for rapid and unsupervised analysis of ultrasonic vocalizations. Neuron, 94(3):465-485, 2017.

Williams, H. Birdsong and singing behavior. Annals of the New York Academy of Sciences, pp. 1-30, 2004.

Wiltschko, A. B., Johnson, M. J., Iurilli, G., Peterson, R. E., Katon, J. M., Pashkovski, S. L., Abraira, V. E., Adams, R. P., and Datta, S. R. Mapping sub-second structure in mouse behavior. Neuron, 88(6):1121-1135, 2015.
| 1 |
+
§ FINDING, VISUALIZING, AND QUANTIFYING LATENT STRUCTURE ACROSS DIVERSE ANIMAL VOCAL REPERTOIRES
|
| 2 |
+
|
| 3 |
+
Tim Sainburg ${}^{1}$ Marving Thielk ${}^{1}$ Timothy Q. Gentner ${}^{1}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
Animals produce vocalizations that range in complexity from a single repeated call to hundreds of unique vocal elements patterned in sequences unfolding over hours. Characterizing complex vocalizations can require considerable effort and a deep intuition about each species' vocal behavior. Even with a great deal of experience, human characterizations of animal communication can be affected by human perceptual biases. We present a set of computational methods for projecting animal vocalizations into low dimensional latent representational spaces that are directly learned from the spectrograms of vocal signals. We apply these methods to diverse datasets from over 20 species, including humans, bats, songbirds, mice, cetaceans, and nonhuman primates. Latent projections uncover complex features of data in visually intuitive and quantifiable ways, enabling high-powered comparative analyses of unbiased acoustic features. We introduce methods for analyzing vocalizations as both discrete sequences and as continuous latent variables. Each method can be used to disentangle complex spectro-temporal structure and observe long-timescale organization in communication.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Vocal communication is a common social behavior among many species, in which acoustic signals are transmitted from sender to receiver to convey information such as identity, individual fitness, or the presence of danger. Across diverse fields, a set of shared research questions seeks to uncover the structure and mechanism of vocal communication: What information is carried within signals? How are signals produced and perceived? How does the communicative transmission of information affect fitness and reproductive success? Many methods are available to address these questions quantitatively, most of which are founded on underlying principles of abstraction and characterization of 'units' in the vocal time series (Kershenbaum et al., 2016). For example, segmentation of birdsong into temporally discrete elements followed by clustering into discrete categories has played a crucial role in understanding syntactic structure in birdsong (Kershenbaum et al., 2016; Berwick et al., 2011; Sainburg et al., 2019; Katahira et al., 2013; Markowitz et al., 2013; Cody et al., 2016; Hedley, 2016; Koumura & Okanoya, 2016; Gentner & Hulse, 1998).
|
| 12 |
+
|
| 13 |
+
The characterization and abstraction of vocal communication signals remains both an art and a science. In a recent survey, Kershenbaum et. al., (2016) outline four common steps used in many analyses to abstract and describe vocal sequences: (1) the collection of data, (2) segmentation of vocalizations into units, (3) characterization of sequences, and (4) identification of meaning. A number of heuristics guide these steps, but it is largely up to the experimenter to determine which heuristics to apply and how. This application typically requires expert-level knowledge, which in turn can be difficult and time-consuming to acquire, and often unique to the structure of each species' vocal repertoire. For instance, what constitutes a 'unit' of humpback whale song? Do these units generalize to other species? Should they? When such intuitions are available they should be considered, of course, but they are generally rare in comparison to the wide range of communication signals observed naturally. As a result, communication remains understudied in most of the thousands of vocally communicating species. Even in well-documented model species, characterizations of vocalizations are often influenced by human perceptual and cognitive biases (Suzuki et al., 2006; Tyack, 1998; Janik, 1999; Kershenbaum et al., 2016). We explore a class of unsupervised, computational, machine learning techniques that avoid many of the foregoing limitations, and provide an alternative method to characterize vocal communication signals. Machine learning methods are designed to capture statistical patterns in complex datasets and have flourished in many domains (LeCun et al., 2015; Bengio et al., 2013; Radford et al., 2015; Becht et al., 2019; Brown & De Bivort, 2018; Becht et al., 2019). These techniques are therefore well suited to quantitatively investigate complex statistical structure in vocal repertoires that otherwise rely upon expert intuitions. In this paper, we demonstrate the utility of unsupervised latent models, statistical models that learn latent (compressed) representations of complex data, in describing animal communication.
|
| 14 |
+
|
| 15 |
+
${}^{1}$ University of California, San Diego, USA. Correspondence to: Tim Sainburg <tsainbur@ucsd.edu>.
|
| 16 |
+
|
| 17 |
+
Proceedings of the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).
|
| 18 |
+
|
| 19 |
+
< g r a p h i c s >
|
| 20 |
+
|
| 21 |
+
Figure 1. Graph-based dimensionality reduction. Current non-linear dimensionality reduction algorithms like TSNE, UMAP, and ISOMAP work by building a graph representing the relationships between high-dimensional data points, projecting those data points into a low-dimensional space, and then finds and embedding that retains the structure of the graph. This figure is for visualization, the spectrograms do not actually correspond to the points in the 3D space.
|
| 22 |
+
|
| 23 |
+
§ 1.1. LATENT MODELS OF ACOUSTIC COMMUNICATION
|
| 24 |
+
|
| 25 |
+
Dimensionality reduction refers to the compression of high-dimensional data into a smaller number of dimensions, while retaining the structure and variance present in the original high-dimensional data. Each point in the high-dimensional input space can be projected into the lower-dimensional 'latent' feature space, and dimensions of the latent space can be thought of as features of the dataset. Animal vocalizations are good targets for dimensionality reduction. They appear naturally as sound pressure waveforms with rich, multi-dimensional temporal and spectral variations, but can generally be explained by lower-dimensional dynamics (Perl et al., 2011; Gardner et al., 2001; Arneodo et al., 2012). Dimensionality reduction, therefore, offers a way to infer a smaller set of latent dimensions (or features) that can explain much of the variance in high-dimensional vocalizations.
|
| 26 |
+
|
| 27 |
+
The common practice of developing a set of basis-features on which vocalizations can be quantitatively compared (also called Predefined Acoustic Features, or PAFs) is a form of dimensionality reduction and comes standard in most animal vocalization analysis software (e.g. Luscinia (Lachlan et al., 2018), Sound Analysis Pro (Tchernichovski & Mitra, 2004; Tchernichovski et al., 2000), BioSound (Elie & The-unissen, 2018), Avisoft (Specht, 2002), and Raven (Charif et al., 2010)). Birdsong, for example, is often analyzed on the basis of features such as amplitude envelope, Weiner entropy, spectral continuity, pitch, duration, and frequency modulation (Tchernichovski & Mitra, 2004; Kershenbaum et al., 2016). Grouping elements of animal vocalizations (e.g. syllables of birdsong, mouse ultrasonic vocalizations) into abstracted discrete categories is also a form of dimensionality reduction, where each category is a single orthogonal dimension. In machine learning parlance, the process of determining the relevant features, or dimensions, of a particular dataset, is called feature engineering.
|
| 28 |
+
|
| 29 |
+
An attractive alternative to feature engineering is to project animal vocalizations into low-dimensional feature spaces that are determined directly from the structure of the data. Many methods for data-driven dimensionality reduction are available. PCA, for example, projects data onto a lower-dimensional surface that maximizes the variance of the projected data (Dunlop et al., 2007; Kershenbaum et al., 2016), while multidimensional scaling (MDS) projects data onto a lower-dimensional surface that maximally preserves the pairwise distances between data points. Both PCA and MDS are capable of learning manifolds that are linear or near-linear transformations of the original high-dimensional data space (Tenenbaum et al., 2000).
|
| 30 |
+
|
| 31 |
+
More recently developed graph-based methods extend dimensionality reduction to infer latent manifolds as nonlinear transformations of the original high-dimensional space using ideas from topology (e.g. ISOMAP, UMAP, t-SNE; Tenenbaum et al. (2000); McInnes et al. (2018); Maaten & Hinton (2008)). Like their linear predecessors, these non-linear dimensionality reduction algorithms also try to find a low-dimensional manifold that captures variation in the higher-dimensional input data, but the graph-based methods allow the manifold to be continuously deformed, by for example stretching, twisting, and/or shrinking, in high dimensional space. These algorithms work by building a topological representation of the data and then learning a low-dimensional embedding that preserves the structure of the topological representation (Fig 1). For example, while MDS learns a low-dimensional embedding that preserves the pairwise distance between points in Euclidean space, ISOMAP (Tenenbaum et al., 2000), one of the original topological non-linear dimensionality reduction algorithms, infers a graphical representation of the data and then performs MDS on the pairwise distances between points within the graph (geodesics) rather than in Euclidean space.
|
| 32 |
+
|
| 33 |
+
In this paper, we describe a class of nonlinear latent models that learn complex feature-spaces of vocalizations, requiring few a priori assumptions about the features that best describe a species' vocalizations. We show that these methods reveal informative, low-dimensional, feature-spaces that enable the formulation and testing of hypotheses about animal communication. We apply our method to diverse datasets consisting of over 20 species, including humans, bats, songbirds, mice, cetaceans, and nonhuman primates. We introduce methods for treating vocalizations both as sequences of temporally discrete elements such as syllables, as is traditional in studying animal communication (Kershenbaum et al., 2016), as well as temporally continuous trajectories, as is becoming increasingly common in representing neural sequences (Cunningham & Byron, 2014). Using both methods, we show that latent projections produce visually-intuitive and quantifiable representations that capture complex acoustic features. We show comparatively that the spectrotemporal characteristics of vocal units vary from species to species in how distributionally discrete they are and discuss the relative utility of different ways to represent different communicative signals.
|
| 34 |
+
|
| 35 |
+
§ 2. RESULTS
|
| 36 |
+
|
| 37 |
+
§ 2.1. DISCRETE LATENT PROJECTIONS OF ANIMAL VOCALIZATIONS
|
| 38 |
+
|
| 39 |
+
To explore the broad utility of latent models in capturing features of vocal repertoires, we analyzed nineteen datasets consisting of 400 hours of vocalizations and over 3,000,000 discrete vocal units from 29 unique species. Each vocalization dataset was temporally segmented into discrete units (e.g. syllables, notes), either based upon segmentation boundaries provided by the dataset (where available), or using a novel dynamic-thresholding segmentation algorithm that segments syllables of vocalizations between detected pauses in the vocal stream. Each dataset was chosen because it contains large repertoires of vocalizations from relatively acoustically isolated individuals that can be cleanly separated into temporally-discrete vocal units. With each temporally discrete vocal unit we computed a spectrographic representation. We then projected the spectrograms into latent feature spaces using UMAP (e.g. Figs 2, 3). From these latent feature spaces, we analyzed datasets for classic vocal features of animal communication signals, speech features, stereotypy/clusterability, and sequential organization.
|
| 40 |
+
|
| 41 |
+
Individual identity Many species produce caller-specific vocalizations that facilitate the identification of individuals when other sensory cues, such as sight, are not available. The features of vocalizations facilitating individual identification vary between species. We projected identity call datasets (i.e., sets of calls thought to carry individual identity information) from four different species into UMAP latent spaces (one per species) to observe whether individual identity falls out naturally within the latent space.
|
| 42 |
+
|
| 43 |
+
< g r a p h i c s >
|
| 44 |
+
|
| 45 |
+
Figure 2. Individual identity is captured in projections for some datasets. Each plot shows vocal elements discretized, spectro-grammed, and then embedded into a 2D UMAP space, where each point in the scatterplot represents a single element (e.g. syllable of birdsong). Scatterplots are colored by individual identity. The borders around each plot are example spectrograms pointing toward different regions of the scatterplot. (A) Rhesus macaque coo calls. (B) Zebra finch distance calls. (C) Fruit bat infant isolation calls. (D) Marmoset phee calls.
|
| 46 |
+
|
| 47 |
+
We looked at four datasets where both caller and call-type are available. Caller identity is evident in latent projections of all four datasets (Fig 2). The first dataset is comprised of Macaque coo calls, where identity information is thought to be distributed across multiple features including fundamental frequency, duration, and Weiner entropy (Fukushima et al., 2015). Indeed, the latent projection of coo calls clustered tightly by individual identity (silhouette score $= {0.378}$ ; Fig 2A). The same is true for Zebra finch distance calls (Elie & Theunissen, 2016) (silhouette score = 0.615; Fig 2B). Egyptian fruit bat pup isolation calls, which in other bat species are discriminable by adult females (Bohn et al., 2007; Engler et al., 2017; Bohn et al., 2007) clearly show regions of UMAP space densely occupied by single individual's vocalizations, but no clear clusters (silhouette score = -0.078; Fig 2C). In the marmoset phee call dataset (Miller et al., 2010) it is perhaps interesting that given the range of potential features thought to carry individual identity (Fukushima et al., 2015), phee calls appear to lie along a single continuum where each individual's calls occupy overlapping regions of the continuum (silhouette score $= - {0.062}$ ; Fig 2D). The silhouette score for each species was well above chance $\left( {\mathrm{H}\left( 2\right) > {20},\mathrm{p} < {10}^{-5}}\right)$ . These patterns predict that some calls, such as macaque coo calls, would be more easily discriminable by conspecifics than other calls, such as marmoset phee calls.
|
| 48 |
+
|
| 49 |
+
§ 2.1.1. VARIATION IN DISCRETE DISTRIBUTIONS AND STEREOTYPY
|
| 50 |
+
|
| 51 |
+
< g r a p h i c s >
|
| 52 |
+
|
| 53 |
+
Figure 3. UMAP projections of vocal repertoires across diverse species. Each plot shows vocal elements segmented, spectro-grammed, and then embedded into a 2D UMAP space, where each point in the scatterplot represents a single element (e.g. syllable of birdsong). Scatterplots are colored by element categories over individual vocalizations as defined by the authors of each dataset, where available. (A) Human phonemes. (B) Egyptian fruit bat calls (color is context). (C) Cassin's vireo syllables. (O) Clusterability (Hopkin's metric) for each dataset. Lower is more clusterable. Hopkin's metric is computed over UMAP projected vocalizations for each species. Error bars show the 95% confidence interval across individuals. Color represents species category (red: mammal, blue: songbird).
|
| 54 |
+
|
| 55 |
+
In species as phylogenetically diverse as songbirds and rock hyraxes, analyzing the sequential organization of communication relies upon similar methods of segmentation and categorization of discrete vocal elements (Kershenbaum et al., 2016). In species such as the Bengalese finch, where syllables are highly stereotyped, clustering syllables into discrete categories is a natural way to abstract song. The utility of clustering song elements in other species, however, is more contentious because discrete category boundaries are not as easily discerned (Tyack, 1998; Suzuki et al., 2006; Goffinet et al., 2019; Hertz et al., 2019).
|
| 56 |
+
|
| 57 |
+
To compare broad structural characteristics across a wide sampling of species, we projected vocalizations from 14 datasets of different species vocalizations, ranging across songbirds, cetaceans, primates, and rodents into UMAP space (Fig 3). To do so, we sampled from a diverse range of datasets, each of which was recorded from a different species in a different setting. Some datasets were recorded from single isolated individuals in a sound isolated chamber in a laboratory setting, while others were recorded from large numbers of freely behaving individuals in the wild. In addition, the units of vocalization from each dataset are variable. We used the smallest units of each vocalization that could be easily segmented, for example, syllables, notes, and phonemes. Thus, this comparison across species is not well-controlled. Still, such a dataset enabling a broad comparison in a well-controlled manner does not exist. Latent projections of such diverse recordings, while limited in a number of ways, have the potential to provide a glimpse into broad structure into vocal repertoires, yielding novel insights into broad trends in animal communication. For each dataset, we computed spectrograms of isolated elements, and projected those spectrograms into UMAP space (Fig 3). Where putative element labels are available, we plot them in color over each dataset.
|
| 58 |
+
|
| 59 |
+
Visually inspecting the latent projections of vocalizations reveals appreciable variability in how the repertoires of different species cluster in latent space. For example, mouse USVs appear as a single cluster (Fig 3I), while zebra finch syllables appear as multiple discrete clusters (Fig 3M, F), and gibbon song sits somewhere in between (Fig 3L). This suggests that the spectro-temporal acoustic diversity of vocal repertoires fall along a continuum ranging from unclustered and uni-modal to highly clustered.
|
| 60 |
+
|
| 61 |
+
We quantified this effect using a linear mixed-effects model comparing the Hopkin's statistic across UMAP projections of vocalizations from single individuals $\left( {\mathrm{n} = {289}}\right)$ , controlling for the number of vocalizations produced by each individual as well as random variability at the level of species. We included each of the species in Fig 3 except giant otter and gibbon vocalizations, as individual identity was not available for those datasets. We find that songbird vocalizations are significantly more clustered than mammalian vocalizations $\left( {{\chi }^{2}\left( 1\right) = {20},\mathrm{p} < {10}^{-5}}\right)$ . The stereotypy of songbird (and other avian) vocal elements is well documented (Williams, 2004; Smith et al., 1997) and at least in zebra finches is related to the high temporal precision in the singing-related neural activity of vocal-motor brain regions (Hahnloser et al., 2002; Fee et al., 2004; Chi & Margoliash, 2001).
|
| 62 |
+
|
| 63 |
+
< g r a p h i c s >
|
| 64 |
+
|
| 65 |
+
Figure 4. HDBSCAN density-based clustering. Clusters are found by generating a graphical representation of data, and then clustering on the graph. The data shown in this figure are from the latent projections from Fig 1. Notably, the three clusters in Fig 1. are clustered into only two clusters using HDBSCAN, exhibiting a potential shortcoming of the HDBSCAN algorithm. The grey colormap in the condensed trees represent the number of points in the branch of the tree. $\Lambda$ is a value used to compute the persistence of clusters in the condensed trees.
|
| 66 |
+
|
| 67 |
+
§ 2.1.2. CLUSTERING VOCAL ELEMENT CATEGORIES
|
| 68 |
+
|
| 69 |
+
UMAP projections of birdsongs largely fall more neatly into discriminable clusters (Fig 3). If clusters in latent space are highly similar to experimenter-labeled element categories, unsupervised latent clustering could provide an automated and less time-intensive alternative to hand-labeling elements of vocalizations. To examine this, we compared how well clusters in latent space correspond to experimenter-labeled categories in three human-labeled datasets: two separate Bengalese finch datasets (Nicholson et al., 2017; Koumura, 2016), and one Cassin's vireo dataset (Hedley, 2016). We compared four different labeling techniques: a hierarchical density-based clustering algorithm (HDBSCAN; (Campello et al., 2013; McInnes et al., 2017)) applied to UMAP projections of spectrograms, HDBSCAN applied to PCA projections of spectrograms ${}^{1}$ , k-means (Pedregosa et al.,2011) clustering applied over UMAP, and k-means clustering applied over spectrograms. We found that HDBSCAN clustering outperformed other clustering algorithms on all metrics for all datasets (See full manuscript).
|
| 70 |
+
|
| 71 |
+
Like the contrast between MDS and UMAP, the k-means clustering algorithm works directly on the Euclidean distances between data points, whereas HDBSCAN operates on a graph-based transform of the input data (Fig 4). Briefly, HDBSCAN first defines a 'mutual reachability' distance between elements, a measure of the distance between points in the dataset weighted by the local sparsity/density of each point (measured as the distance to a $k$ th nearest neighbor). HDBSCAN then builds a graph, where each edge between vertices (points in the dataset) is the mutual reachability between those points, and then prunes the edges to construct a minimum spanning tree (a graph containing the minimum set of edges needed to connect all of the vertices). The minimum spanning tree is converted into a hierarchy of clusters of points sorted by mutual reachability distance, and then condensed iteratively into a smaller hierarchy of putative clusters. Finally, clusters are chosen as those that persist and are stable over the greatest range in the hierarchy.
|
| 72 |
+
|
| 73 |
+
§ 2.1.3. ABSTRACTING AND VISUALIZING SEQUENTIAL ORGANIZATION
|
| 74 |
+
|
| 75 |
+
As acoustic signals, animal vocalizations have an inherent temporal structure that can extend across time scales from short easily discretized elements such as notes, to longer duration syllables, phrases, songs, bouts, etc. The latent projection methods described above can be used to abstract corpora of song elements well-suited to temporal pattern analyses (Sainburg et al., 2019), and to make more direct measures of continuous vocalization time series. Moreover, their automaticity enables the high throughput necessary to satisfy intensive data requirements for most quantitative sequence models.
|
| 76 |
+
|
| 77 |
+
In practice, modeling sequential organization can be applied to any discrete dataset of vocal elements, whether labeled by hand or algorithmically. Latent projections of vocal element have the added benefit of allowing visualization of the sequential organization that can be compared to abstracted models. As an example of this, we derived a corpus of symbolically segmented vocalizations from a dataset of Bengalese finch song using latent projections and clustering (Fig 5). Bengalese finch song bouts comprise a small number (5̃-15) of highly stereotyped syllables produced in well-defined temporal sequences a few dozen syllables long (Katahira et al., 2013). We first projected syllables from a single Bengalese finch into UMAP latent space, then visualized transitions between vocal elements in latent space as line segments between points (Fig 5B), revealing highly regular patterns. To abstract this organization to a grammatical model, we clustered latent projections into discrete categories using HDBSCAN. Each bout is then treated as a sequence of symbolically labeled syllables (e.g. $B \rightarrow B \rightarrow C \rightarrow A$ ; Fig 5D) and the entire dataset rendered as a corpus of transcribed song (Fig 5E). Using the transcribed corpus, one can abstract statistical and grammatical models of song, such as the Markov model shown in Fig $5\mathrm{C}$ or the information-theoretic analysis in Sainburg et al., (2019).
|
| 78 |
+
|
| 79 |
+
${}^{1}$ HDBSCAN is applied to 100-dimensional PCA projections rather than spectrograms directly because HDBSCAN does not perform well in high-dimensional spaces (McInnes et al., 2017).
|
| 80 |
+
|
| 81 |
+
< g r a p h i c s >
|
| 82 |
+
|
| 83 |
+
Figure 5. Latent visualizations of Bengalese finch song sequences. (A) Syllables of Bengalese finch songs from one individual are projected into 2D UMAP latent space and clustered using HDB-SCAN. (B) Transitions between elements of song are visualized as line segments, where the color of the line segment represents its position within a bout. (C) The syllable categories and transitions in (A) and (B) can be abstracted to transition probabilities between syllable categories, as in a Markov model. (D) An example vocalization from the same individual, with syllable clusters from (A) shown above each syllable. (E) A series of song bouts. Each row is one bout, showing overlapping structure in syllable sequences. Bouts are sorted by similarity to help show structure in song.
|
| 84 |
+
|
| 85 |
+
§ 2.2. TEMPORALLY CONTINUOUS LATENT TRAJECTORIES
|
| 86 |
+
|
| 87 |
+
Not all vocal repertoires are made up of elements that fall into highly discrete clusters in latent space (Fig 3). For several of the datasets we analysed, categorically discrete elements are not readily apparent, making analyses such as the cluster-based analysis performed in Figure 5 more difficult. In addition, many vocalizations are difficult to segment temporally, and determining what features to use for segmentation requires careful consideration (Kershenbaum et al., 2016). In many bird songs, for example, clear pauses exist between song elements that enable one to distinguish syllables. In other vocalizations, however, experimenters must rely on less well-defined physical features for segmentation (Janik, 1999; Kershenbaum et al., 2016), which may in turn invoke a range of biases and unwarranted assumptions. At the same time, much of the research on animal vocal production, perception, and sequential organization relies on identifying "units" of a vocal repertoire (Kershenbaum et al., 2016). To better understand the effects of temporal discretization and categorical segmentation in our analyses, we considered vocalizations as continuous trajectories in latent space and compared the resulting representations to those that treat vocal segments as single points (as in the previous Bengalese finch example in Fig 5). We show here explorations of two datasets: Bengalese finch (Fig 6) and human speech (Fig 7). In both datasets, we find that continuous latent trajectories capture short and long timescale structure in vocal sequences without requiring vocal elements to be segmented or labeled.
|
| 88 |
+
|
| 89 |
+
§ 2.2.1. COMPARING DISCRETE AND CONTINUOUS REPRESENTATIONS OF SONG IN THE BENGALESE FINCH

Bengalese finch song provides a relatively easy visual comparison between the discrete and continuous treatments of song, because it consists of a small number of unique, highly stereotyped syllables (Fig 6). With a single bout of Bengalese finch song, which contains several dozen syllables, we generated a latent trajectory of song as UMAP projections of temporally rolling windows of the bout spectrogram (see Projections section). To explore this latent space, we varied the window length between 1 and 100 ms (Fig 6A-L). At each window size, we compared UMAP projections (Fig 6A-C) to PCA projections (Fig 6D-F). In both PCA and UMAP, trajectories become more clearly visible as window size increases across the range tested, and overall the UMAP trajectories show more well-defined structure than the PCA trajectories. To compare continuous projections to discrete syllables, we re-colored the continuous trajectories by the discrete syllable labels obtained from the dataset. Again, as window size increases, each syllable converges to a more distinct trajectory in UMAP space (Fig 6G-I). To visualize the discrete syllable labels and the continuous latent projections in relation to song, we converted the 2D projections into colorspace and show them as a continuous trajectory alongside the song spectrograms and discrete labels in Figure 6M,N. Colorspace representations of the 2D projections treat the two UMAP dimensions as two channels of RGB (3D) colorspace (e.g. red and green) while holding the third channel constant, which creates a colormap projection of the two UMAP dimensions.
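As a rough illustration of this colorspace mapping (a minimal example of our own, not the code released with the paper), the sketch below min-max normalizes the two UMAP dimensions, assigns them to the red and green channels, and holds the blue channel constant:

```python
import numpy as np

def umap_to_rgb(embedding, constant_channel=0.5):
    """Map a 2D UMAP embedding to RGB colors.

    Each of the two UMAP dimensions is min-max normalized to [0, 1] and
    assigned to the red and green channels; the blue channel is held constant.
    """
    emb = np.asarray(embedding, dtype=float)
    normed = (emb - emb.min(axis=0)) / (np.ptp(emb, axis=0) + 1e-12)
    rgb = np.column_stack([normed[:, 0], normed[:, 1],
                           np.full(len(emb), constant_channel)])
    return rgb  # one RGB triplet per rolling window, in temporal order
```

Plotting each rolling-window projection as a colored tick above the spectrogram then reproduces the style of Figure 6M,N.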
|
| 90 |
+
|
| 91 |
+
[Figure 6 graphic]
|
| 92 |
+
|
| 93 |
+
Figure 6. Continuous UMAP projections of Bengalese finch song from a single bout produced by one individual. (A-C) Bengalese finch song is segmented into either 1 ms (A), 20 ms (B), or 100 ms (C) rolling windows of song, which are projected into UMAP. Color represents time within the bout of song. (D-F) The same plots as in (A-C), projected into PCA instead of UMAP. (G-I) The same plots as (A-C) colored by hand-labeled element categories (unlabelled points are not shown). (J-L) The same plots as (D-F) colored by hand-labeled syllable categories. (M) UMAP projections represented in colorspace over a bout spectrogram. The top three rows are the UMAP projections from (A-C) projected into RGB colorspace to show the position within UMAP space over time, aligned with the underlying spectrogram data. The fourth row shows the hand labels. The final row is the bout spectrogram. (N) A subset of the bout shown in (M). In (G-L), unlabeled points (points that fall between syllables) are not shown for visual clarity.
|
| 94 |
+
|
| 95 |
+
§ 2.2.2. LATENT TRAJECTORIES OF HUMAN SPEECH
|
| 96 |
+
|
| 97 |
+
Discrete elements of human speech (i.e. phonemes) are not spoken in isolation, and their acoustics are influenced by neighboring sounds, a process termed co-articulation. For example, when producing the words 'day', 'say', or 'way', the positions of the tongue, lips, and teeth differ dramatically at the beginning of the phoneme 'ey' due to the preceding 'd', 's', or 'w' phonemes, respectively. This results in differences in the pronunciation of 'ey' across words (Fig 7E). Co-articulation explains much of the acoustic variation observed within phonetic categories. Abstracting to phonetic categories therefore discounts much of this context-dependent acoustic variance.
|
| 98 |
+
|
| 99 |
+
We explored co-articulation in speech by projecting sets of words differing by a single phoneme (i.e. minimal pairs) into continuous latent spaces, then extracting trajectories of words and phonemes that capture sub-phonetic context-dependency (Fig 7). We obtained the words from the Buckeye corpus of conversational English. We computed spectrograms over all examples of each target word, then projected sliding 4-ms windows from each spectrogram into UMAP latent space to yield a continuous vocal trajectory over each word (Fig 7). We visualized trajectories by their corresponding word and phoneme labels (Fig 7A, B) and computed the average latent trajectory for each word and phoneme (Fig 7C, D). The average trajectories reveal context-dependent variation within phonemes caused by co-articulation. For example, the words 'way', 'day', and 'say' each end in the same phoneme ('ey'; Fig 7A-D), which appears as an overlapping region in the latent space (the red region in Fig 7C). The endings of each average word trajectory vary, however, indicating that the production of 'ey' differs based on its specific context (Fig 7C). These differences in the production of 'ey' can be observed in the average latent trajectory over each word, where the trajectories for 'day' and 'say' end in a sharp transition, while the trajectory for 'way' is smoother (Fig 7C). Latent space trajectories can reveal other co-articulations as well. In Figure 7F, we show the different trajectories characterizing the phoneme 't' in the context of the word 'take' versus 'talk'. In this case, the 't' phoneme follows a similar trajectory for both words until it nears the next phoneme ('ey' vs. 'ao'), at which point the production of 't' diverges between the two words.
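A simple way to compute such average trajectories, sketched below under the assumption that each utterance's latent trajectory is a variable-length sequence of 2D points (illustrative code, not the paper's exact procedure), is to resample every trajectory onto a common time base and average across utterances:

```python
import numpy as np

def average_trajectory(trajectories, n_points=100):
    """Average a set of variable-length 2D latent trajectories.

    trajectories : list of arrays of shape (T_i, 2), one per utterance
    Returns an array of shape (n_points, 2).
    """
    resampled = []
    for traj in trajectories:
        t_old = np.linspace(0, 1, len(traj))
        t_new = np.linspace(0, 1, n_points)
        # Linearly interpolate each latent dimension onto the common time base.
        resampled.append(np.column_stack(
            [np.interp(t_new, t_old, traj[:, d]) for d in range(traj.shape[1])]))
    return np.mean(resampled, axis=0)
```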
|
| 100 |
+
|
| 101 |
+
[Figure 7 graphic]
|
| 102 |
+
|
| 103 |
+
Figure 7. Speech trajectories showing co-articulation in minimal pairs. (A) Utterances of the words 'day', 'say', and 'way' are projected into a continuous UMAP latent space with a window size of 4 ms. Color represents the corresponding word. (B) The same projections are colored by the corresponding phonemes. (C) The average latent trajectory for each word. (D) The average trajectory for each phoneme. (E) Example spectrograms of words, with latent trajectories above the spectrograms and phoneme labels below the spectrograms. (F) Average trajectories and corresponding spectrograms for the words 'take' and 'talk' showing the different trajectories for 't' in each word. (G) Average trajectories and the corresponding spectrograms for the words 'then' and 'them' showing the different trajectories for 'eh' in each word.
|
| 104 |
+
|
| 105 |
+
§ 3. DISCUSSION
|
| 106 |
+
|
| 107 |
+
We have presented a set of computational methods for projecting vocal communication signals into low-dimensional latent representational spaces, learned directly from the spectrograms of the signals. We demonstrate the flexibility and power of these methods by applying them to a wide sample of animal vocal communication signals, including songbirds, primates, rodents, bats, and cetaceans (Fig 3). Deployed over short timescales of a few hundred milliseconds, our methods capture significant behaviorally-relevant structure in the spectro-temporal acoustics of these diverse species' vocalizations. We find that complex attributes of vocal signals, such as individual identity (Fig 2), species identity, geographic population variability, phonetics, and similarity-based clusters can all be captured by the unsupervised latent space representations we present. We also show that songbirds tend to produce signals that cluster discretely in latent space, whereas mammalian vocalizations are more uniformly distributed, an observation that deserves much closer investigation in more species. Applied to longer timescales, spanning seconds or minutes, the same methods allowed us to visualize sequential organization and test models of vocal sequencing (Fig 5). We demonstrated that in some cases latent approaches confer advantages over hand labeling or supervised learning (See full manuscript/code). Finally, we visualized vocalizations as continuous trajectories in latent space (Figs 6, 7), providing a powerful method for studying sequential organization without discretization (Kershenbaum et al., 2016).
|
| 108 |
+
|
| 109 |
+
Latent models have shown increasing utility in the biological sciences over the past several years. As machine learning algorithms improve, so will their utility in characterizing the complex patterns present in biological systems like animal communication. In neuroscience, latent models already play an important role in characterizing complex neural population dynamics (Cunningham & Byron, 2014). Similarly, latent models are playing an increasingly important role in computational ethology (Brown & De Bivort, 2018), where characterizations of animal movements and behaviors have uncovered complex sequential organization (Marques et al., 2018; Berman et al., 2016; Wiltschko et al., 2015). In animal communication, pattern recognition using various machine learning techniques has been used to characterize vocalizations and label auditory objects (Sainburg et al., 2019; Cohen et al., 2019; Coffey et al., 2019; Van Segbroeck et al., 2017; Goffinet et al., 2019; Kollmorgen et al., 2019; Hertz et al., 2019). Our work furthers this emerging research area by demonstrating the utility of unsupervised latent models for both systematically visualizing and abstracting structure from animal vocalizations across a wide range of species.
|
| 110 |
+
|
| 111 |
+
§ SOFTWARE AND DATA
|
| 112 |
+
|
| 113 |
+
All software is publicly available, and example Jupyter Notebooks are provided for each species' vocal repertoire and analysis type (https://github.com/timsainb/avgn_paper). The data are listed in Supplementary Table 1 of the long-form paper.
|
| 114 |
+
|
| 115 |
+
§ ACKNOWLEDGEMENTS
|
| 116 |
+
|
| 117 |
+
Work supported by NSF GRF 2017216247 and an Annette Merle-Smith Fellowship to T.S. and NIH DC0164081 and DC018055 to T.Q.G. We additionally would like to thank Kyle McDonald and his colleagues for motivating some of our visualization techniques with their work on humpback whale song (McDonald, 2019).
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/OSVxDDc360z/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,424 @@
| 1 |
+
**End-to-End ASR:**
|
| 2 |
+
|
| 3 |
+
from Supervised to Semi-Supervised Learning with Modern Architectures
|
| 4 |
+
|
| 5 |
+
Gabriel Synnaeve$^{*1}$, Qiantong Xu$^{*1}$, Jacob Kahn$^{*1}$, Tatiana Likhomanenko$^{*1}$, Edouard Grave$^{*1}$, Vineel Pratap$^{1}$, Anuroop Sriram$^{1}$, Vitaliy Liptchinsky$^{1}$, Ronan Collobert$^{*1}$
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
We study pseudo-labeling for the semi-supervised training of ResNet, Time-Depth Separable ConvNets, and Transformers for speech recognition, with either CTC or Seq2Seq loss functions. We perform experiments on the standard LIBRISPEECH dataset, and leverage additional unlabeled data from LIBRIVOX through pseudo-labeling. We show that while Transformer-based acoustic models have superior performance with the supervised dataset alone, semi-supervision improves all models across architectures and loss functions and bridges much of the performance gaps between them. In doing so, we reach a new state-of-the-art for end-to-end acoustic models decoded with an external language model in the standard supervised learning setting, and a new absolute state-of-the-art with semi-supervised training. Finally, we study the effect of leveraging different amounts of unlabeled audio, propose several ways of evaluating the characteristics of unlabeled audio which improve acoustic modeling, and show that acoustic models trained with more audio rely less on external language models.
|
| 10 |
+
|
| 11 |
+
## 1. Introduction
|
| 12 |
+
|
| 13 |
+
End-to-end speech recognition models are simpler to implement and train than bootstrapped systems. Even given recent promising results from these systems, the best results for common benchmarks are still dominated by classical ASR models; systems requiring forced alignment may leave some performance aside at each training step. We set out to study end-to-end systems on LIBRISPEECH (Panayotov et al., 2015) and, without any algorithmic contribution, see if they can be made to perform as well as more complex training pipelines. The difficulties involved in properly optimizing acoustic models with Connectionist Temporal Classification (CTC) (Graves et al., 2006) or sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) losses (vs. cross-entropy, for instance), combined with the more readily-available regularization techniques for classical pipelines, make this comparison challenging. Our best acoustic models nonetheless reach 5.17% WER on test-other, showing that end-to-end models can compete with traditional pipelines.
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
|
| 17 |
+
Figure 1. WERs on dev-other across AM architectures and loss functions. Left: WERs of different models trained on LIBRISPEECH with and without beam-search decoding ("no LM" refers to the greedy decoding). Transformer AM architectures outperform others by a large margin. Right: WERs of models trained on LIBRIVOX. All models trained on LIBRIVOX significantly outperform their LIBRISPEECH counterparts. The gap between Transformer AMs and other models is much smaller with LIBRIVOX data.
|
| 18 |
+
|
| 19 |
+
As in other domains, self- and semi-supervised learning in ASR, where a pretrained network generates and trains on its own labels, yields improvements (Veselý et al., 2017). In end-to-end ASR, pseudo-labeling and self-training can be quite effective, and their effectiveness further improves when more unlabeled data is available (Kahn et al., 2019a). In this setting, we train a model on LIBRISPEECH, then use that model in conjunction with a language model to generate pseudo-labels from unlabeled audio. We show that with this training scheme, our results without an external language model (LM) match prior state-of-the-art results that use an external LM, with 2.28% and 4.88% Word Error Rate (WER) on test-clean and test-other respectively. With LM beam-search decoding and rescoring, we reach 2.09% and 4.11% WER on the test sets.
|
| 20 |
+
|
| 21 |
+
---
|
| 22 |
+
|
| 23 |
+
*Equal contribution. ${}^{1}$ Facebook AI Research, Menlo Park & New York, US, and Paris, France. Correspondence to: Gabriel Synnaeve <gab@fb.com>.
|
| 24 |
+
|
| 25 |
+
Published at the workshop on Self-supervision in Audio and Speech at the $37^{\text{th}}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 26 |
+
|
| 27 |
+
---
|
| 28 |
+
|
| 29 |
+
While many advances in end-to-end ASR come as the result of neural architecture search (Prabhavalkar et al., 2017; Zhou et al., 2018; Chiu et al., 2018b), we additionally show that simple semi-supervision via pseudo-labeling significantly bridges the performance gap between a variety of different model architectures and loss functions, as shown in Figure 1. In particular, with enough unlabeled audio, Transformer, ResNet, and depthwise-separable convolution-based acoustic models give similar performance with both CTC and Seq2Seq loss functions, suggesting that new techniques in semi-supervision may facilitate equally-significant gains in ASR performance while being applicable to a multitude of end-to-end setups.
|
| 30 |
+
|
| 31 |
+
## 2. Models
|
| 32 |
+
|
| 33 |
+
### 2.1. Acoustic Models
|
| 34 |
+
|
| 35 |
+
In this section, we present the three families of acoustic models (AMs) studied. All AMs output probability distributions over tokens. In particular, we use a set of 10k word pieces (Schuster & Nakajima, 2012; Kudo & Richardson, 2018) generated from the SentencePiece toolkit${}^{1}$. The choice to use a fixed set of 10k word pieces is made for the simplicity of the comparative study, not the result of a limitation. Similarly, all AMs take 80-channel log-mel filterbanks as input, with STFTs computed on Hamming windows strided by 10 ms. The window size is 25 ms for Transformer models and 30 ms for TDS and ResNet models. All models are trained end-to-end with either CTC or Seq2Seq loss. Given the huge difference between the amounts of data, we prepare two sets of architectures: one for training only on labeled LIBRISPEECH and one for unlabeled LIBRIVOX.
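As a rough illustration of this front-end (not the exact wav2letter++ configuration; file names and some parameter values are placeholders), the 10k word-piece vocabulary and the 80-channel log-mel features could be produced with the sentencepiece and torchaudio packages as follows:

```python
import sentencepiece as spm
import torch
import torchaudio

# Train a 10k word-piece model on the training transcripts (one sentence per line).
spm.SentencePieceTrainer.Train(
    '--input=train_transcripts.txt --model_prefix=librispeech_10k '
    '--vocab_size=10000 --model_type=unigram')

# 80-channel log-mel filterbanks: 25 ms Hamming windows (400 samples at 16 kHz)
# strided by 10 ms (160 samples), roughly matching the Transformer front-end.
waveform, sample_rate = torchaudio.load('example.flac')  # placeholder file
mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=400, win_length=400, hop_length=160,
    n_mels=80, window_fn=torch.hamming_window)
log_mel = torch.log(mel(waveform) + 1e-6)  # (channels, 80, n_frames)
```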
|
| 36 |
+
|
| 37 |
+
ResNet Acoustic Model ResNets were first introduced in the domain of computer vision (He et al., 2016) and have since been successfully applied to speech recognition (Xiong et al., 2017; Saon et al., 2017; Li et al., 2019b; Wang et al., 2017). ResNets are composed of several blocks of convolutions (in our case only 1-D convolutions), with skip connections. In particular, our ResNet encoder includes 42 convolutional layers, each with a kernel size of 3. The encoder first maps the input to an embedding space of size 1024 using a single convolutional layer with stride 2; 12 blocks of three 1-D convolutions each follow. Each of the convolutional layers is followed by ReLU, dropout and LayerNorm (Ba et al., 2016). Both the dropout and the number of hidden units increase with the depth of the network. Specific convolution layers are inserted between ResNet blocks in order to upsample when the hidden representation size increases. Our architecture performs significant pooling with respect to the input (16 frames in total, equating to 160 milliseconds): in addition to the first strided convolutional layer, 3 max pooling layers (each with stride 2) are distributed across the depth of the network (after blocks 3, 7 and 10). Nearly identical encoder architectures are used in front of CTC and Seq2Seq loss functions; the Seq2Seq encoder has its last bottleneck layer removed and lower dropout in deeper layers. The Seq2Seq self-attention decoder for the ResNet architecture is the same as that used with the TDS convolutional AM described below. To better fit the unlabeled data, we increase the model size by increasing the number of channels in each convolution layer.
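A minimal PyTorch sketch of one such 1-D convolutional block, as we read the description above (simplified, and not the exact released architecture), with ReLU, dropout, LayerNorm, and a skip connection:

```python
import torch
import torch.nn as nn

class Conv1dBlock(nn.Module):
    """One residual block of 1-D convolutions with ReLU, dropout, and LayerNorm."""

    def __init__(self, channels, kernel_size=3, dropout=0.1, n_convs=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size, padding=kernel_size // 2)
            for _ in range(n_convs))
        self.norms = nn.ModuleList(nn.LayerNorm(channels) for _ in range(n_convs))
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):            # x: (batch, channels, time)
        residual = x
        for conv, norm in zip(self.convs, self.norms):
            x = self.dropout(torch.relu(conv(x)))
            # LayerNorm over channels, so transpose to (batch, time, channels) and back.
            x = norm(x.transpose(1, 2)).transpose(1, 2)
        return x + residual          # skip connection around the block
```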
|
| 38 |
+
|
| 39 |
+
Time-Depth Separable (TDS) Convolution Acoustic Model We extend the TDS block (Hannun et al., 2019) (which is composed of one 2-D convolution layer and two fully-connected layers, with ReLU, LayerNorm and residual connections in between) by increasing the number of channels in the feature maps spanning the two internal fully-connected layers by a factor $F > 1$, so as to increase model capacity. Following (Hannun et al., 2019), 3 sub-sampling layers, i.e. 1-D convolution layers with stride 2, are adopted to ensure an optimal context size for the encoder. For training with only labeled data, we have three groups of TDS blocks with $F = 3$ after each sub-sampling layer. There are 5, 6, and 10 blocks in each group, containing 10, 14, and 18 channels, respectively. To increase model capacity for the unlabeled data, the three groups of TDS blocks contain fewer blocks (4, 5, and 6) with $F = 2$ in each, but are equipped with many more channels (16, 32, and 48). All convolutions in both TDS and sub-sampling layers have a kernel size of $21 \times 1$. Identical encoders are used for CTC and Seq2Seq.
|
| 40 |
+
|
| 41 |
+
Our Seq2Seq self-attention decoder performs $R$ rounds of attention through the same $N$ layers of RNN-GRU, each with a hidden unit size of 512, in conjunction with the same efficient key-value attention as in (Hannun et al., 2019; Vaswani et al., 2017):
|
| 42 |
+
|
| 43 |
+
$$
|
| 44 |
+
{\mathbf{S}}_{t}^{r} = \operatorname{SOFTMAX}\left( {\frac{1}{\sqrt{d}}{\mathbf{K}}^{\top }{\mathbf{Q}}_{t}^{r - 1}}\right) \mathbf{V}, \tag{1}
|
| 45 |
+
$$
|
| 46 |
+
|
| 47 |
+
where $\left\lbrack {\mathbf{K},\mathbf{V}}\right\rbrack$ is the 512-dimensional encoder activation and ${\mathbf{Q}}_{t}^{r} = g\left( {{\mathbf{Q}}_{t - 1}^{r},{\mathbf{Q}}_{t}^{r - 1}}\right) + {\mathbf{S}}_{t}^{r}$ is the query vector at time $t$ in round $r$, generated by the GRU $g\left( \cdot \right)$. The initial ${\mathbf{Q}}_{t}^{0}$ is a 512-dimensional token embedding, and the final ${\mathbf{Q}}_{t}^{R}$ is linearly projected to the output classes for token classification. In our experiments, $N$ and $R$ are both set to either 2 or 3 based on validation performance. We use dropout in all TDS blocks and GRUs to prevent overfitting.
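For concreteness, a compact sketch of one round of this key-value attention (Eq. 1), assuming the encoder activations are already split into $\mathbf{K}$ and $\mathbf{V}$ and that a GRU produces the queries; shapes and names here are illustrative:

```python
import math
import torch

def attention_round(K, V, Q):
    """One round of key-value attention as in Eq. (1).

    K, V : (d, T)  encoder keys and values
    Q    : (d, U)  decoder queries for all output positions
    Returns the summary vectors S of shape (d, U).
    """
    d = K.shape[0]
    # Attention weights over the T encoder frames for each query position.
    weights = torch.softmax(K.transpose(0, 1) @ Q / math.sqrt(d), dim=0)  # (T, U)
    return V @ weights                                                    # (d, U)
```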
|
| 48 |
+
|
| 49 |
+
---
|
| 50 |
+
|
| 51 |
+
${}^{1}$ https://github.com/google/sentencepiece
|
| 52 |
+
|
| 53 |
+
---
|
| 54 |
+
|
| 55 |
+
Transformer-Based Acoustic Model Our transformer-based acoustic models have a small front-end: 3 (LIBRISPEECH AMs) or 6 (LIBRIVOX AM) layers of 1-D convolutions, each of kernel width 3 and with respective input and output sizes $(80, D_c)$, $(D_c/2, D_c)$, $[(D_c/2, D_c)$, $(D_c/2, D_c)$, $(D_c/2, D_c)]$, $(D_c/2, D_{tr} \times 2)$, with $D_c = 1024$ or 2048. Each convolution is followed by a GLU activation function (Dauphin et al., 2017), and convolutions stride by 2 at every layer (for the 3-layer front-end) or at every other layer (for the 6-layer front-end). The output of the front-end for all models is thus strided by 8 frames (80 ms). After the front-end, each Transformer block has 4 attention heads followed by a feedforward network (FFN) with one hidden layer and a ReLU non-linearity. There are two configurations of Transformer blocks: one 24-layer configuration (only for the LIBRISPEECH CTC AM) with dimension $D_{tr} = 1024$ for the self-attention and 4096 for the FFN, and one 36-layer configuration with dimension $D_{tr} = 768$ for the self-attention and 3072 for the FFN. Specifically, given a sequence of $T$ vectors of dimension $d$, the input is represented by the matrix ${\mathbf{H}}^{\mathbf{0}} \in {\mathbb{R}}^{d \times T}$, following exactly (Vaswani et al., 2017):
|
| 56 |
+
|
| 57 |
+
$$
|
| 58 |
+
{\mathbf{Z}}^{i} = \operatorname{NORM}\left( {\operatorname{SELFATTENTION}\left( {\mathbf{H}}^{i - 1}\right) + {\mathbf{H}}^{i - 1}}\right) ,
|
| 59 |
+
$$
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
{\mathbf{H}}^{i} = \operatorname{NORM}\left( {\operatorname{FFN}\left( {\mathbf{Z}}^{i}\right) + {\mathbf{Z}}^{i}}\right) ,
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
where $\mathbf{Z}$ is the output of the self-attention layer, with a skip connection, and $\mathbf{H}$ is the output of the FFN layer, with a skip connection. As is standard, our NORM is LayerNorm, and self-attention is defined as in Eq. 1, but with $\mathbf{K} = {\mathbf{W}}_{K}\mathbf{H}$, $\mathbf{Q} = {\mathbf{W}}_{Q}\mathbf{H}$, and $\mathbf{V} = {\mathbf{W}}_{V}\mathbf{H}$. For CTC-trained models, the output of the encoder ${\mathbf{H}}^{{L}_{e}}$ is followed by a linear layer to the output classes. For Seq2Seq models, we have an additional decoder, which is a stack of 6 Transformers with encoding dimension 256 and 4 attention heads. The probability distribution of the transcription is factorized as:
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
p\left( {{y}_{1},\ldots ,{y}_{n}}\right) = \mathop{\prod }\limits_{{i = 1}}^{n}p\left( {{y}_{i} \mid {y}_{0},\ldots ,{y}_{i - 1},{\mathbf{H}}^{{L}_{e}}}\right) , \tag{2}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
where ${y}_{0}$ is a special symbol indicating the beginning of the transcription. For all layers (encoder and decoder - when present), we use dropout on the self-attention and layer drop (Fan et al., 2019), dropping entire layers at the FFN level.
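A stripped-down PyTorch version of one such post-norm encoder block (a sketch of the equations above, omitting layer drop and other details of the real implementation):

```python
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One encoder block: Z = Norm(SelfAttention(H) + H); H' = Norm(FFN(Z) + Z)."""

    def __init__(self, d_model=768, n_heads=4, d_ffn=3072, dropout=0.2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ffn), nn.ReLU(), nn.Dropout(dropout),
            nn.Linear(d_ffn, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, h):                 # h: (time, batch, d_model)
        attn_out, _ = self.attn(h, h, h)  # self-attention: queries, keys, values from h
        z = self.norm1(attn_out + h)
        return self.norm2(self.ffn(z) + z)
```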
|
| 72 |
+
|
| 73 |
+
### 2.2. Language Models
|
| 74 |
+
|
| 75 |
+
In this section, we present the external language models (LMs) used in beam-search decoding. We consider $n$-gram LMs as well as convolutional (GCNN) (Dauphin et al., 2017) and Transformer-based LMs. For $n$-gram and GCNN LMs, we train both word-level and word-piece models, and only a word-level Transformer LM. All word-piece LMs are trained on the set of 10k word pieces outlined in Section 2.1. This ensures that the set of word pieces is consistent between the output distributions of the AMs and the candidates scored by the LM during beam-search decoding.
|
| 76 |
+
|
| 77 |
+
For the word-piece and word-level GCNN models, we use the GCNN-14B architecture from (Dauphin et al., 2017) with embedding size 1024 and dropout 0.1. The word-level Transformer LM has the same architecture as the Google Billion Words model of (Baevski & Auli, 2019); we use 16 attention heads and 20 decoder layers, with embedding, input, and output dimensions of 1280, an FFN dimension of 6144, and dropout of 0.1.
|
| 78 |
+
|
| 79 |
+
## 3. Unlabeled Audio Dataset Preparation
|
| 80 |
+
|
| 81 |
+
LIBRIVOX${}^{2}$ is a large collection of freely-available audiobooks. Using tools provided with the LIBRILIGHT dataset (Kahn et al., 2019b), we select 72K hours of read speech from English book listings and run several preprocessing steps. After filtering samples to remove readings of duplicate text and corrupted audio, we remove all audio from speakers that overlap with LIBRISPEECH. We run voice activity detection (VAD) using the wav2letter++ framework (Pratap et al., 2018) on the resulting collection of audio with a CTC model trained on LIBRISPEECH, and segment the result into chunks no greater than 36 s; the resulting audio corpus contains 53.8K hours of read speech.
|
| 82 |
+
|
| 83 |
+
We then generate pseudo-labels for this audio using the recipe described in (Kahn et al., 2019a). To generate the pseudo-labels, we use a Transformer AM trained on LIBRISPEECH with CTC loss that achieves a 6.20% WER on dev-other when decoded with a 4-gram word LM, the same model as listed in Table 3 in the Appendix. We pseudo-label all audio using this AM and run beam-search decoding with the 4-gram word LM described in Appendix A.
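The overall pseudo-labeling recipe can be summarized by the following high-level sketch, where `train` and `decode` are placeholder callables rather than actual wav2letter++ APIs:

```python
def pseudo_label_training(train, decode, labeled, unlabeled, lm):
    """High-level pseudo-labeling recipe (illustrative, not wav2letter++ API).

    train(dataset)        -> acoustic model trained on (audio, transcript) pairs
    decode(model, x, lm)  -> best transcript for audio x via beam search with an LM
    """
    # 1. Train a supervised acoustic model on the labeled set.
    base_model = train(labeled)
    # 2. Generate pseudo-labels for the unlabeled audio with the base model
    #    and a word-level n-gram LM in the beam-search decoder.
    pseudo_labeled = [(x, decode(base_model, x, lm)) for x in unlabeled]
    # 3. Retrain from scratch on labeled + pseudo-labeled data, treating
    #    the pseudo-labels as ground truth.
    return train(labeled + pseudo_labeled)
```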
|
| 84 |
+
|
| 85 |
+
## 4. Decoding
|
| 86 |
+
|
| 87 |
+
Decoding is designed to select the best transcription by leveraging both the posteriors of an acoustic model (AM) and the perplexity of a language model (LM). We perform one-pass beam-search decoding with a single external LM. Optionally, to further improve performance, we use stronger NN-based LMs to rescore the beam. Details on our beam-search decoder algorithm and rescoring are given in Appendix B.
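Concretely, the beam search selects transcriptions by maximizing a combined score of roughly the standard shallow-fusion form below (the exact weights and additional terms are detailed in Appendix B):

$$
\hat{y} = \underset{y}{\operatorname{argmax}}\; \log P_{\mathrm{AM}}(y \mid \mathbf{x}) + \alpha \log P_{\mathrm{LM}}(y) + \beta \left| y \right|,
$$

where $\alpha$ weights the language model and $\beta$ is a length (insertion) reward that counteracts the LM's preference for shorter transcriptions.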
|
| 88 |
+
|
| 89 |
+
## 5. Experiments
|
| 90 |
+
|
| 91 |
+
### 5.1. Technical Details
|
| 92 |
+
|
| 93 |
+
We use the standard splits for LIBRISPEECH (all the available training data was used for training, and two configurations, clean and other, for validation and test) and the standard LIBRISPEECH LM corpus for LM training. Models are trained using the wav2letter++ toolkit (Pratap et al., 2018); reproduction steps and pre-trained models are open-sourced ${}^{3}$ .
|
| 94 |
+
|
| 95 |
+
---
|
| 96 |
+
|
| 97 |
+
${}^{2}$ https://librivox.org
|
| 98 |
+
|
| 99 |
+
---
|
| 100 |
+
|
| 101 |
+
Acoustic Model Training All hyper-parameters, including model architecture, are cross-validated on dev-clean and dev-other. Given that we have a large family of models, for simplicity and clarity we only report the ranges of hyper-parameters over which we search for their best values.
|
| 102 |
+
|
| 103 |
+
Plain SGD with momentum is used to train ResNet and TDS models, and Adagrad (Duchi et al., 2011) to train Transformers. Models are trained on 64 GPUs each, with an overall batch size of 256 for ResNet and TDS and 320 for Transformer. With only LIBRISPEECH, all models converged in under a week; with pseudo-labels from LIBRIVOX, training required 2-3 weeks. The initial learning rate for ResNet models is chosen from [0.05, 0.5], while for TDS and Transformer models the range decreases to [0.01, 0.03]. Specifically, for Transformers, we apply a linear learning rate warm-up schedule for either 32k or 64k updates. For fully-supervised training with LIBRISPEECH, the learning rate is halved every 90 epochs for Transformer models, and every 150 epochs for ResNet and TDS models. With LIBRIVOX, however, we only halve the learning rate once, in the middle of training. For TDS and ResNet models, we use momentum in the range [0.1, 0.6]. With respect to regularization, we use 0.2 dropout everywhere (front-end, encoder, decoder), and layer drop for all Transformer blocks. Dropout in TDS blocks and ResNet convolutions is in the range [0.05, 0.2] and increases with depth. For Seq2Seq training, we run 3 epochs of attention-window pretraining, and use 99% teacher forcing (1% uniform output sampling). We also use 10% dropout in the decoder for TDS (and 0.1 dropout and 0.1 layer drop in the decoder for Transformers), together with 5% label smoothing, 1% random sampling and 1% word-piece sampling. All models use SpecAugment (Park et al., 2019) with an LD policy.
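As an illustration of these schedules (a sketch with assumed step counts, not the exact training configuration), linear warm-up followed by periodic halving of the learning rate could be implemented as:

```python
def learning_rate(step, base_lr=0.03, warmup_steps=32000,
                  steps_per_epoch=1000, halve_every_epochs=90):
    """Linear warm-up to base_lr, then halve the rate every `halve_every_epochs`."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    epoch = step // steps_per_epoch
    return base_lr * 0.5 ** (epoch // halve_every_epochs)
```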
|
| 104 |
+
|
| 105 |
+
Language Model Training All LMs in this section are trained on the standard LIBRISPEECH LM corpus. All word-level LMs use the same vocabulary for training. $n$-gram LMs are trained with the KenLM toolkit (Heafield, 2011), while the GCNN and Transformer LMs are trained with the fairseq${}^{4}$ toolkit (Ott et al., 2019). The word-level 4-gram and GCNN are trained in the same way as (Likhomanenko et al., 2019). We also train a 6-gram word-piece LM, which has a similar context size to a word-level 4-gram LM, and prunes 5-grams appearing once and 6-grams appearing twice or fewer. The word-piece and word-level GCNN models are trained with Nesterov accelerated gradient descent (Nesterov, 1983) on 8 GPUs for 22 epochs with a step-wise learning rate schedule, starting from 1 and decreasing by a factor of 5 when the loss plateaus. Gradient clipping and weight normalization are used following (Dauphin et al., 2017). The word-level Transformer LM is trained with Nesterov accelerated gradient descent on 128 GPUs for 100 epochs with an inverse square root learning rate schedule. During the first 16k iterations, a warm-up schedule that linearly increases the learning rate from 1e-7 to 1 is used. Word-level perplexities of all LM variants are listed in Table 1.
|
| 106 |
+
|
| 107 |
+
Table 1. Word-level perplexities of LMs on LIBRISPEECH. Perplexity is computed without unknown words.
|
| 108 |
+
|
| 109 |
+
<table><tr><td>LANGUAGE MODEL</td><td>DEV-CLEAN</td><td>DEV-OTHER</td></tr><tr><td>WORD 4-GRAM</td><td>148.0</td><td>136.6</td></tr><tr><td>NO LIBRIVOX OVERLAP</td><td>152.8</td><td>140.0</td></tr><tr><td>WP 6-GRAM</td><td>145.4</td><td>133.7</td></tr><tr><td>WP GCNN (188M)</td><td>61.7</td><td>61.9</td></tr><tr><td>WORD GCNN (319M)</td><td>57.0</td><td>57.9</td></tr><tr><td>WORD TRANSF. (562M)</td><td>48.2</td><td>50.2</td></tr></table>
|
| 110 |
+
|
| 111 |
+
### 5.2. Results
|
| 112 |
+
|
| 113 |
+
LIBRISPEECH Results All our results for LIBRISPEECH are listed in the top of Table 3 in the Appendix. We present results under three scenarios: without any decoding or external LM (greedy decoding), with one-pass decoding only, and with decoding followed by beam rescoring. The decoding beam size is usually 50 and 500 for Seq2Seq and CTC respectively. We use a beam size of 250 for CTC decoding with a GCNN LM. We train strong baselines on simple ResNet architectures and improve the TDS models significantly compared to past results (Hannun et al., 2019). These convolutional models outperform end-to-end biLSTM models from (Lüscher et al., 2019). Our best acoustic models are Transformer-based and reach 6.98% WER on test-other without any decoding, and 5.17% with decoding and rescoring, demonstrating that end-to-end training can perform as well as traditional bootstrapped systems.
|
| 114 |
+
|
| 115 |
+
LIBRIVOX Results Treating all pseudo-labels as ground truth, we train acoustic models on a combination of the 960 hours of labeled audio from LIBRISPEECH and the pseudo-labeled audio from LIBRIVOX, where batches are uniformly sampled (without weighting) from both datasets. Transformer AMs with both CTC and Seq2Seq loss were trained for 5 days on this combined dataset, achieving WERs of 4.88% on test-other and 2.28% on test-clean without decoding or use of an LM, which is state-of-the-art even amongst pipelines that use an LM. Results with decoding/rescoring are shown in Table 2, where we reach 2.09% and 4.11% on test-clean and test-other, respectively, further improving on the state-of-the-art. From the ablation studies in Appendices C and D, we found several interesting outcomes: i) increasing the amount of pseudo-labels strictly improves performance, ii) models trained on LIBRIVOX pseudo-labels alone outperform models trained on LIBRISPEECH, and iii) a large collection of pseudo-labeled audio helps the model learn better acoustic representations and absorb LM knowledge, so that it no longer benefits much from decoding with an external LM.
|
| 116 |
+
|
| 117 |
+
---
|
| 118 |
+
|
| 119 |
+
${}^{3}$ https://github.com/facebookresearch/wav2letter
|
| 120 |
+
|
| 121 |
+
${}^{4}$ https://github.com/pytorch/fairseq
|
| 122 |
+
|
| 123 |
+
---
|
| 124 |
+
|
| 125 |
+
Table 2. WERs on LIBRISPEECH development and test sets. Our best results are shown in the bottom section (with the number of parameters), and are both trained with Seq2Seq loss. Full results can be found in Appendix Table 3.
|
| 126 |
+
|
| 127 |
+
<table><tr><td colspan="2">AM</td><td colspan="2">LM</td><td colspan="2">DEV</td><td colspan="2">TEST</td></tr><tr><td>TYPE</td><td>LEXICON</td><td>TYPE</td><td>LEXICON</td><td>CLEAN</td><td>OTHER</td><td>CLEAN</td><td>OTHER</td></tr><tr><td>LAS (Park et al., 2019)</td><td>16k WP</td><td>-</td><td>-</td><td/><td/><td>2.8</td><td>6.8</td></tr><tr><td>Decoding</td><td>16k WP</td><td>RNN</td><td>16k WP</td><td/><td/><td>2.5</td><td>5.8</td></tr><tr><td>HMM/BILSTM</td><td>${12}\mathrm{K}\;\mathrm{{CDP}}$</td><td>4GRAM+LSTM</td><td>WORD</td><td>2.2</td><td>5.1</td><td>2.6</td><td>5.5</td></tr><tr><td>+ TRANSF. RESCORING (Lüscher et al., 2019)</td><td>${12}\mathrm{K}\;\mathrm{{CDP}}$</td><td>+TRANSF.</td><td>WORD</td><td>1.9</td><td>4.5</td><td>2.3</td><td>5.0</td></tr><tr><td>TRANSFORMERS (Karita et al., 2019)</td><td>BPE</td><td>RNN</td><td>WORD</td><td>2.2</td><td>5.6</td><td>2.6</td><td>5.7</td></tr><tr><td>CONV. TRANSF. (HAN ET AL., 2019)</td><td>6K TRIPHONES</td><td>3GRAM, RESCORED +TDNN + LSTM</td><td>WORD</td><td>1.8</td><td>5.8</td><td>2.2</td><td>5.7</td></tr><tr><td>CONV. TRANSF.</td><td>CHENONES</td><td>4GRAM</td><td>WORD</td><td/><td/><td>2.60</td><td>5.59</td></tr><tr><td>+ TRANSF. RESCORING (WANG ET AL., 2019)</td><td>CHENONES</td><td>TRANSF.</td><td>WORD</td><td/><td/><td>2.26</td><td>4.85</td></tr><tr><td>TRANSF. (270M) – LIBRISPEECH</td><td>10K WP</td><td>-</td><td>-</td><td>2.54</td><td>6.67</td><td>2.89</td><td>6.98</td></tr><tr><td>+DECODING/RESCORING</td><td>10K WP</td><td>GCNN + TRANSF.</td><td>WORD</td><td>2.07</td><td>4.79</td><td>2.37</td><td>5.17</td></tr><tr><td>Transf. (296M) – LibriVox</td><td>10K WP</td><td>-</td><td>-</td><td>2.12</td><td>4.59</td><td>2.28</td><td>4.88</td></tr><tr><td>+DECODING/RESCORING</td><td>10K WP</td><td>GCNN + TRANSF.</td><td>WORD</td><td>2.00</td><td>3.65</td><td>2.09</td><td>4.11</td></tr></table>
|
| 128 |
+
|
| 129 |
+
## 6. Related Work
|
| 130 |
+
|
| 131 |
+
Deep neural networks were reintroduced in ASR with HMMs (Hinton et al., 2012), and many state-of-the-art models still rely on forced alignment (Han et al., 2017; Lüscher et al., 2019; Karita et al., 2019). Nonetheless, there have been increasingly competitive end-to-end results trained with CTC (Graves & Jaitly, 2014; Amodei et al., 2016), ASG (Collobert et al., 2016; Zeghidour et al., 2018), LF-MMI (Hadian et al., 2018), sequence-to-sequence (Chan et al., 2016; Chiu et al., 2018a), transduction (Prabhavalkar et al., 2017; He et al., 2019), and differentiable decoding (Collobert et al., 2019a). Listen, Attend and Spell (Chan et al., 2016) is a family of end-to-end models based on biLSTMs which achieved state-of-the-art results with improved regularization through data augmentation (Park et al., 2019); we consequently use SpecAugment in all of our experiments. Seq2Seq models are not limited to RNNs; time-depth separable convolutions also give strong results (Hannun et al., 2019). Our best models are Transformer-based, as in (Lüscher et al., 2019; Karita et al., 2019), which give good results in Seq2Seq settings even without external LMs (Mohamed et al., 2019). In ASR, semi-supervised pseudo-label-style self-training has been explored generally in end-to-end settings in (Soltau et al., 2016; Li et al., 2019a; Kahn et al., 2019a), for both low-resource (Veselý et al., 2017; Cui et al., 2017) and large-scale (Parthasarathi & Strom, 2019) setups.
|
| 132 |
+
|
| 133 |
+
## 7. Discussion
|
| 134 |
+
|
| 135 |
+
We presented state-of-the-art results on LIBRISPEECH with end-to-end methods. While allowing for lexicon-free decoding, the 10k word-piece tokens used during training limit the amount of striding we can use in our model architectures; they can be replaced by AMs outputting words with an arbitrary lexicon (Collobert et al., 2019b). As the relative WER gains due to language models shrink (from ≈20% relative WER without LIBRIVOX to ≈10% with it, for GCNN decoding), and as we showed that AMs learn LM-level information, differentiable decoding (Collobert et al., 2019a) is a possible avenue for single-stage AM + LM joint training.
|
| 136 |
+
|
| 137 |
+
We show the effectiveness of a simple pipeline that does not require many training steps. In light of our semi-supervised results without decoding or an LM, we think Seq2Seq/CTC losses, transducers, and differentiable decoding are viable methods to achieve end-to-end state-of-the-art results, without external LMs, through semi-supervised learning.
|
| 138 |
+
|
| 139 |
+
## 8. Acknowledgements
|
| 140 |
+
|
| 141 |
+
We would like to thank Steven Garan for audio recordings of shuffled sentences from LIBRISPEECH dev-other.
|
| 142 |
+
|
| 143 |
+
## References
|
| 144 |
+
|
| 145 |
+
Amodei, D., Ananthanarayanan, S., Anubhai, R., et al. Deep speech 2: End-to-end speech recognition in english and mandarin. In International conference on machine learning, pp. 173-182, 2016.
|
| 146 |
+
|
| 147 |
+
Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
|
| 148 |
+
|
| 149 |
+
Baevski, A. and Auli, M. Adaptive input representations for neural language modeling. In International Conference on Learning Representations, 2019. URL https://openreview.net/forum?id=ByxZX20qFQ.
|
| 150 |
+
|
| 151 |
+
Chan, W., Jaitly, N., Le, Q. V., and Vinyals, O. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In ICASSP, 2016.
|
| 152 |
+
|
| 153 |
+
Chiu, C.-C., Sainath, T., Wu, Y., et al. State-of-the-art speech recognition with sequence-to-sequence models. ICASSP, 2018a.
|
| 154 |
+
|
| 155 |
+
Chiu, C.-C., Sainath, T. N., Wu, Y., Prabhavalkar, R., Nguyen, P., Chen, Z., Kannan, A., Weiss, R. J., Rao, K., Gonina, E., et al. State-of-the-art speech recognition with sequence-to-sequence models. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4774-4778. IEEE, 2018b.
|
| 156 |
+
|
| 157 |
+
Chorowski, J. and Jaitly, N. Towards better decoding and language model integration in sequence to sequence models. arXiv preprint arXiv:1612.02695, 2016.
|
| 158 |
+
|
| 159 |
+
Collobert, R., Puhrsch, C., and Synnaeve, G. Wav2letter: an end-to-end convnet-based speech recognition system. arXiv preprint arXiv:1609.03193, 2016.
|
| 160 |
+
|
| 161 |
+
Collobert, R., Hannun, A., and Synnaeve, G. A fully differentiable beam search decoder. In ICML, pp. 1341-1350, 2019a. URL http://proceedings.mlr.press/v97/collobert19a.html.
|
| 162 |
+
|
| 163 |
+
Collobert, R., Hannun, A., and Synnaeve, G. Word-level speech recognition with a dynamic lexicon. arXiv preprint arXiv:1906.04323, 2019b.
|
| 164 |
+
|
| 165 |
+
Cui, J., Kingsbury, B., Ramabhadran, B., Saon, G., Sercu, T., Audhkhasi, K., Sethy, A., Nussbaum-Thom, M., and Rosenberg, A. Knowledge distillation across ensembles of multilingual models for low-resource languages. 2017.
|
| 166 |
+
|
| 167 |
+
Dauphin, Y. N., Fan, A., Auli, M., and Grangier, D. Language modeling with gated convolutional networks. In Proceedings of the 34th International Conference on Machine Learning - Volume 70, ICML'17, pp. 933-941. JMLR.org, 2017.
|
| 168 |
+
|
| 169 |
+
Duchi, J., Hazan, E., and Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121-2159, 2011.
|
| 172 |
+
|
| 173 |
+
Fan, A., Grave, E., and Joulin, A. Reducing transformer depth on demand with structured dropout. arXiv preprint arXiv:1909.11556, 2019.
|
| 174 |
+
|
| 175 |
+
Graves, A. and Jaitly, N. Towards end-to-end speech recognition with recurrent neural networks. In Proceedings of the 31st International Conference on International Conference on Machine Learning - Volume 32, ICML'14, pp. II-1764-II-1772. JMLR.org, 2014. URL http://dl.acm.org/citation.cfm?id=3044805.3045089.
|
| 176 |
+
|
| 177 |
+
Graves, A., Fernández, S., Gomez, F., and Schmidhuber, J. Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd international conference on Machine learning, pp. 369-376, 2006.
|
| 178 |
+
|
| 179 |
+
Hadian, H., Sameti, H., Povey, D., and Khudanpur, S. End-to-end speech recognition using lattice-free mmi. In Interspeech, pp. 12-16, 2018.
|
| 180 |
+
|
| 181 |
+
Han, K. J., Chandrashekaran, A., et al. The capio 2017 conversational speech recognition system, 2017.
|
| 182 |
+
|
| 183 |
+
Han, K. J., Prieto, R., Wu, K., and Ma, T. State-of-the-art speech recognition using multi-stream self-attention with dilated 1d convolutions, 2019.
|
| 184 |
+
|
| 185 |
+
Hannun, A., Lee, A., Xu, Q., and Collobert, R. Sequence-to-sequence speech recognition with time-depth separable convolutions. Interspeech 2019, Sep 2019. doi: 10.21437/interspeech.2019-2460.
|
| 186 |
+
|
| 187 |
+
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Computer Vision and Pattern Recognition (CVPR), 2016.
|
| 188 |
+
|
| 189 |
+
He, Y., Sainath, T. N., Prabhavalkar, R., et al. Streaming end-to-end speech recognition for mobile devices. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2019. doi: 10.1109/icassp.2019.8682336.
|
| 190 |
+
|
| 191 |
+
Heafield, K. Kenlm: Faster and smaller language model queries. In Proceedings of the sixth workshop on statistical machine translation, pp. 187-197. Association for Computational Linguistics, 2011.
|
| 192 |
+
|
| 193 |
+
Hinton, G., Deng, L., Yu, D., Dahl, G., rahman Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T., and Kingsbury, B. Deep neural networks for acoustic modeling in speech recognition. Signal Processing Magazine, 2012.
|
| 194 |
+
|
| 195 |
+
Kahn, J., Lee, A., and Hannun, A. Self-training for end-to-end speech recognition. arXiv preprint arXiv:1909.09116, 2019a.
|
| 196 |
+
|
| 197 |
+
Kahn, J., Rivière, M., Zheng, W., Kharitonov, E., Xu, Q., Mazaré, P., Karadayi, J., Liptchinsky, V., Collobert, R., Fuegen, C., Likhomanenko, T., Synnaeve, G., Joulin, A., Mohamed, A., and Dupoux, E. Libri-light: A benchmark for asr with limited or no supervision, 2019b.
|
| 198 |
+
|
| 199 |
+
Kalchbrenner, N., Elsen, E., Simonyan, K., Noury, S., Casagrande, N., Lockhart, E., Stimberg, F., Oord, A. v. d., Dieleman, S., and Kavukcuoglu, K. Efficient neural audio synthesis. arXiv preprint arXiv:1802.08435, 2018.
|
| 200 |
+
|
| 201 |
+
Karita, S., Chen, N., Hayashi, T., et al. A comparative study on transformer vs rnn in speech applications, 2019.
|
| 202 |
+
|
| 203 |
+
Kudo, T. and Richardson, J. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. arXiv preprint arXiv:1808.06226, 2018.
|
| 204 |
+
|
| 205 |
+
Li, B., Sainath, T. N., Pang, R., and Wu, Z. Semi-supervised training for end-to-end models via weak distillation. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 2837-2841. IEEE, 2019a.
|
| 206 |
+
|
| 207 |
+
Li, J., Lavrukhin, V., Ginsburg, B., Leary, R., Kuchaiev, O., Cohen, J. M., Nguyen, H., and Gadde, R. T. Jasper: An end-to-end convolutional neural acoustic model. In Interspeech, 2019b.
|
| 208 |
+
|
| 209 |
+
Likhomanenko, T., Synnaeve, G., and Collobert, R. Who needs words? lexicon-free speech recognition. arXiv preprint arXiv:1904.04479, 2019.
|
| 210 |
+
|
| 211 |
+
Lüscher, C., Beck, E., Irie, K., et al. Rwth asr systems for librispeech: Hybrid vs attention. Interspeech 2019, Sep 2019. doi: 10.21437/interspeech.2019-1780.
|
| 212 |
+
|
| 213 |
+
Mohamed, A., Okhonko, D., and Zettlemoyer, L. Transformers with convolutional context for asr, 2019.
|
| 214 |
+
|
| 215 |
+
Nesterov, Y. A method for unconstrained convex minimization problem with the rate of convergence $O(1/k^{2})$. In Doklady AN USSR, volume 269, pp. 543-547, 1983.
|
| 216 |
+
|
| 217 |
+
Ott, M., Edunov, S., Baevski, A., Fan, A., Gross, S., Ng, N., Grangier, D., and Auli, M. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations, 2019.
|
| 218 |
+
|
| 219 |
+
Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. Librispeech: an ASR corpus based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on, pp. 5206-5210. IEEE, 2015.
|
| 220 |
+
|
| 221 |
+
Park, D. S., Chan, W., Zhang, Y., et al. Specaugment: A simple data augmentation method for automatic speech recognition. Interspeech 2019, Sep 2019. doi: 10.21437/interspeech.2019-2680.
|
| 222 |
+
|
| 223 |
+
Park, J., Boo, Y., Choi, I., Shin, S., and Sung, W. Fully neural network based speech recognition on mobile and embedded devices. In Advances in Neural Information Processing Systems, pp. 10620-10630, 2018.
|
| 224 |
+
|
| 225 |
+
Parthasarathi, S. H. K. and Strom, N. Lessons from building acoustic models with a million hours of speech. In 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6670-6674. IEEE, 2019.
|
| 226 |
+
|
| 227 |
+
Prabhavalkar, R., Rao, K., Sainath, T. N., et al. A comparison of sequence-to-sequence models for speech recognition. In Interspeech, pp. 939-943, 2017.
|
| 228 |
+
|
| 229 |
+
Pratap, V., Hannun, A., Xu, Q., et al. wav2letter++: The fastest open-source speech recognition system. arXiv preprint arXiv:1812.07625, 2018.
|
| 230 |
+
|
| 231 |
+
Saon, G., Kurata, G., Sercu, T., Audhkhasi, K., Thomas, S., Dimitriadis, D., Cui, X., Ramabhadran, B., Picheny, M., Lim, L.-L., Roomi, B., and Hall, P. English conversational telephone speech recognition by humans and machines. In Interspeech, pp. 132-136, 2017.
|
| 232 |
+
|
| 233 |
+
Schuster, M. and Nakajima, K. Japanese and korean voice search. In International Conference on Acoustics, Speech and Signal Processing, pp. 5149-5152, 2012.
|
| 234 |
+
|
| 235 |
+
Soltau, H., Liao, H., and Sak, H. Neural speech recognizer: Acoustic-to-word lstm model for large vocabulary speech recognition. arXiv preprint arXiv:1610.09975, 2016.
|
| 236 |
+
|
| 237 |
+
Sriram, A., Jun, H., Satheesh, S., and Coates, A. Cold fusion: Training seq2seq models together with language models. arXiv preprint arXiv:1708.06426, 2017.
|
| 238 |
+
|
| 239 |
+
Sutskever, I., Vinyals, O., and Le, Q. V. Sequence to sequence learning with neural networks. In Advances in neural information processing systems, pp. 3104-3112, 2014.
|
| 240 |
+
|
| 241 |
+
Vaswani, A., Shazeer, N., Parmar, N., et al. Attention is all you need. In Adv. NIPS, 2017.
|
| 242 |
+
|
| 243 |
+
Veselý, K., Burget, L., and Černocký, J. Semi-supervised DNN training with word selection for ASR. In Interspeech, pp. 3687-3691, 2017.
|
| 244 |
+
|
| 245 |
+
Wang, Y., Deng, X., Pu, S., and Huang, Z. Residual convolutional ctc networks for automatic speech recognition. arXiv preprint arXiv:1702.07793, 2017.
|
| 246 |
+
|
| 247 |
+
Wang, Y., Mohamed, A., Le, D., Liu, C., Xiao, A., Mahadeokar, J., Huang, H., Tjandra, A., Zhang, X., Zhang, F., Fuegen, C., Zweig, G., and Seltzer, M. L. Transformer-based acoustic modeling for hybrid speech recognition, 2019.
|
| 248 |
+
|
| 249 |
+
Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
|
| 250 |
+
|
| 251 |
+
Xiong, W., Droppo, J., Huang, X., Seide, F., Seltzer, M., Stolcke, A., Yu, D., and G., Z. The microsoft 2016 conversational speech recognition system. In ICASSP, 2017.
|
| 252 |
+
|
| 253 |
+
Zeghidour, N., Xu, Q., Liptchinsky, V., Usunier, N., et al. Fully convolutional speech recognition. CoRR, abs/1812.06864, 2018. URL http://arxiv.org/abs/1812.06864.
|
| 254 |
+
|
| 255 |
+
Zhou, S., Dong, L., Xu, S., and Xu, B. A comparison of modeling units in sequence-to-sequence speech recognition with the transformer on mandarin chinese. In International Conference on Neural Information Processing, pp. 210-220. Springer, 2018.
|
| 256 |
+
|
| 257 |
+
Table 3. Word error rates on LIBRISPEECH's development and test sets. Our models listed in the top and bottom blocks are trained with CTC and Seq2seq losses respectively.
|
| 258 |
+
|
| 259 |
+
<table><tr><td colspan="2">AM</td><td colspan="2">LM</td><td colspan="2">DEV</td><td colspan="2">TEST</td></tr><tr><td>TYPE</td><td>LEXICON</td><td>TYPE</td><td>LEXICON</td><td>CLEAN</td><td>OTHER</td><td>CLEAN</td><td>OTHER</td></tr><tr><td>$\mathbf{{CTC}}$</td><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>RESNET (306M)</td><td>10K WP</td><td>-</td><td>-</td><td>3.93</td><td>10.13</td><td>4.08</td><td>10.03</td></tr><tr><td>DECODING</td><td/><td>ZEROLM</td><td>LEX</td><td>3.76</td><td>9.7</td><td>4.07</td><td>9.77</td></tr><tr><td>DECODING</td><td/><td>4GRAM</td><td>WORD</td><td>3.29</td><td>8.56</td><td>3.68</td><td>8.69</td></tr><tr><td>DECODING</td><td/><td>GCNN</td><td>WORD</td><td>2.99</td><td>7.50</td><td>3.28</td><td>7.53</td></tr><tr><td>RESNET (500M) LIBRIVOX</td><td>10K WP</td><td>-</td><td>-</td><td>2.34</td><td>5.54</td><td>2.55</td><td>5.99</td></tr><tr><td>DECODING</td><td/><td>ZEROLM</td><td>LEX</td><td>2.37</td><td>5.45</td><td>2.73</td><td>5.96</td></tr><tr><td>Decoding</td><td/><td>4GRAM</td><td>WORD</td><td>2.34</td><td>5.23</td><td>2.68</td><td>5.75</td></tr><tr><td>DECODING</td><td/><td>GCNN</td><td>WORD</td><td>2.19</td><td>4.64</td><td>2.45</td><td>5.13</td></tr><tr><td>TDS (200M)</td><td>10K WP</td><td>-</td><td>-</td><td>4.22</td><td>11.16</td><td>4.63</td><td>11.16</td></tr><tr><td>DECODING</td><td/><td>ZEROLM</td><td>LEX</td><td>3.93</td><td>10.61</td><td>4.44</td><td>10.67</td></tr><tr><td>Decoding</td><td/><td>4GRAM</td><td>WORD</td><td>3.49</td><td>9.18</td><td>3.98</td><td>9.53</td></tr><tr><td>DECODING</td><td/><td>GCNN</td><td>WORD</td><td>2.92</td><td>7.52</td><td>3.40</td><td>8.05</td></tr><tr><td>TDS (500M) LIBRIVOX</td><td>10K WP</td><td>-</td><td>-</td><td>2.44</td><td>5.70</td><td>2.66</td><td>6.11</td></tr><tr><td>Decoding</td><td/><td>ZEROLM</td><td>LEX</td><td>2.47</td><td>5.61</td><td>2.86</td><td>6.18</td></tr><tr><td>DECODING</td><td/><td>4GRAM</td><td>WORD</td><td>2.44</td><td>5.33</td><td>2.81</td><td>5.91</td></tr><tr><td>DECODING</td><td/><td>GCNN</td><td>WORD</td><td>2.26</td><td>4.71</td><td>2.55</td><td>5.24</td></tr><tr><td>TRANSF. (322M)</td><td>10K WP</td><td>-</td><td>-</td><td>2.99</td><td>7.31</td><td>3.09</td><td>7.40</td></tr><tr><td>DECODING</td><td/><td>ZEROLM</td><td>LEX</td><td>2.85</td><td>6.98</td><td>3.14</td><td>7.23</td></tr><tr><td>DECODING</td><td/><td>4GRAM</td><td>WORD</td><td>2.63</td><td>6.20</td><td>2.86</td><td>6.72</td></tr><tr><td>+ RESCORING</td><td/><td>GCNN + TRANSF.</td><td>WORD</td><td>2.18</td><td>4.90</td><td>2.44</td><td>5.36</td></tr><tr><td>Decoding</td><td/><td>GCNN</td><td>WORD</td><td>2.35</td><td>5.29</td><td>2.57</td><td>5.85</td></tr><tr><td>+ RESCORING</td><td/><td>GCNN + TRANSF.</td><td>WORD</td><td>2.20</td><td>4.94</td><td>2.47</td><td>5.45</td></tr><tr><td>Transf. 
(299M) LibriVox</td><td>10K WP</td><td>-</td><td>-</td><td>2.28</td><td>5.00</td><td>2.39</td><td>5.35</td></tr><tr><td>Decoding</td><td/><td>ZEROLM</td><td>LEX</td><td>2.31</td><td>4.94</td><td>2.58</td><td>5.42</td></tr><tr><td>Decoding</td><td/><td>4GRAM</td><td>WORD</td><td>2.24</td><td>4.59</td><td>2.52</td><td>5.22</td></tr><tr><td>+RESCORING</td><td/><td>GCNN + TRANSF.</td><td>WORD</td><td>1.99</td><td>3.91</td><td>2.28</td><td>4.50</td></tr><tr><td>DECODING</td><td/><td>GCNN</td><td>WORD</td><td>2.09</td><td>4.27</td><td>2.41</td><td>4.79</td></tr><tr><td>+ RESCORING</td><td/><td>GCNN + TRANSF.</td><td>WORD</td><td>2.01</td><td>3.95</td><td>2.31</td><td>4.54</td></tr><tr><td colspan="8">SEQ2SEQ</td></tr><tr><td>RESNET (389M)</td><td>10K WP</td><td>-</td><td>-</td><td>3.51</td><td>9.89</td><td>4.92</td><td>10.33</td></tr><tr><td>Decoding</td><td/><td>ZEROLM</td><td>LEXFREE</td><td>3.42</td><td>9.60</td><td>4.31</td><td>9.59</td></tr><tr><td>Decoding</td><td/><td>6GRAM</td><td>10K WP</td><td>3.05</td><td>8.69</td><td>3.88</td><td>8.88</td></tr><tr><td>DECODING</td><td/><td>GCNN</td><td>10K WP</td><td>2.78</td><td>7.86</td><td>3.79</td><td>8.21</td></tr><tr><td>RESNET (500M) LIBRIVOX</td><td>10K WP</td><td>-</td><td>-</td><td>2.27</td><td>5.29</td><td>2.86</td><td>5.88</td></tr><tr><td>Decoding</td><td/><td>ZEROLM</td><td>LEXFREE</td><td>2.26</td><td>5.28</td><td>2.67</td><td>5.54</td></tr><tr><td>Decoding</td><td/><td>6GRAM</td><td>10K WP</td><td>2.29</td><td>5.25</td><td>2.69</td><td>5.62</td></tr><tr><td>DECODING</td><td/><td>GCNN</td><td>10K WP</td><td>2.26</td><td>4.91</td><td>2.66</td><td>5.31</td></tr><tr><td>TDS (190M)</td><td>10K WP</td><td>-</td><td>-</td><td>3.20</td><td>8.20</td><td>3.43</td><td>8.30</td></tr><tr><td>DECODING</td><td/><td>ZEROLM</td><td>LEXFREE</td><td>2.89</td><td>8.00</td><td>3.24</td><td>7.99</td></tr><tr><td>DECODING</td><td/><td>6GRAM</td><td>10K WP</td><td>2.76</td><td>7.01</td><td>3.18</td><td>7.16</td></tr><tr><td>Decoding</td><td/><td>GCNN</td><td>10K WP</td><td>2.54</td><td>6.30</td><td>2.93</td><td>6.43</td></tr><tr><td>TDS (500M) LIBRIVOX</td><td>10K WP</td><td>-</td><td>-</td><td>2.17</td><td>4.78</td><td>2.37</td><td>5.15</td></tr><tr><td>DECODING</td><td/><td>ZEROLM</td><td>LEXFREE</td><td>2.20</td><td>4.80</td><td>2.38</td><td>5.11</td></tr><tr><td>Decoding</td><td/><td>6GRAM</td><td>10K WP</td><td>2.18</td><td>4.61</td><td>2.35</td><td>5.02</td></tr><tr><td>DECODING</td><td/><td>GCNN</td><td>10K WP</td><td>2.08</td><td>4.21</td><td>2.24</td><td>4.61</td></tr><tr><td>TRANSF. (270M)</td><td>10K WP</td><td>-</td><td>-</td><td>2.54</td><td>6.67</td><td>2.89</td><td>6.98</td></tr><tr><td>DECODING</td><td/><td>ZEROLM</td><td>LEXFREE</td><td>2.49</td><td>6.32</td><td>2.75</td><td>6.58</td></tr><tr><td>DECODING</td><td/><td>6GRAM</td><td>10K WP</td><td>2.29</td><td>5.81</td><td>2.72</td><td>6.23</td></tr><tr><td>+ RESCORING</td><td/><td>GCNN + TRANSF.</td><td>WORD</td><td>2.13</td><td>5.00</td><td>2.51</td><td>5.47</td></tr><tr><td>Decoding</td><td/><td>GCNN</td><td>10K WP</td><td>2.12</td><td>5.20</td><td>2.40</td><td>5.70</td></tr><tr><td>+RESCORING</td><td/><td>GCNN + TRANSF.</td><td>WORD</td><td>2.10</td><td>4.79</td><td>2.33</td><td>5.17</td></tr><tr><td>Transf. 
(296M) LibriVox</td><td>10k WP</td><td>-</td><td>-</td><td>2.12</td><td>4.59</td><td>2.28</td><td>4.88</td></tr><tr><td>DECODING</td><td/><td>ZEROLM</td><td>LEXFREE</td><td>2.10</td><td>4.53</td><td>2.27</td><td>4.80</td></tr><tr><td>DECODING</td><td/><td>6GRAM</td><td>10K WP</td><td>2.06</td><td>4.32</td><td>2.25</td><td>4.70</td></tr><tr><td>+ RESCORING</td><td/><td>GCNN + TRANSF.</td><td>WORD</td><td>1.91</td><td>3.76</td><td>2.10</td><td>4.20</td></tr><tr><td>DECODING</td><td/><td>GCNN</td><td>10K WP</td><td>1.97</td><td>3.95</td><td>2.17</td><td>4.37</td></tr><tr><td>+ RESCORING</td><td/><td>GCNN + TRANSF.</td><td>WORD</td><td>2.00</td><td>3.65</td><td>2.09</td><td>4.11</td></tr></table>
|
| 260 |
+
|
| 261 |
+
## A. Pseudo-Labeling: Text Corpus Preparation and $n$ -gram LM Training
|
| 262 |
+
|
| 263 |
+
The LIBRISPEECH language model corpus ${}^{5}$ contains text from 14500 public domain books taken from the Gutenberg project ${}^{6}$ . Given that pseudo-labels are generated with a beam-search decoding procedure that integrates a language model, it is important that the corpus used to train the language model does not have overlap with the unlabeled audio, else information about the ground truth labels for that unlabeled audio may be explicitly embedded in the LM. We remove all text from the LIBRISPEECH language model training corpus that is ground truth for any of the unlabeled audio from the subset of LIBRIVOX.
|
| 264 |
+
|
| 265 |
+
To do so, we follow several steps. First, we filter out all books from the LIBRISPEECH LM corpus whose IDs are present in LIBRIVOX. Second, after normalizing all titles (removing punctuation, casing, and non-alphanumeric tokens), we remove all titles with zero Levenshtein distance between the LIBRIVOX and LIBRISPEECH LM corpora; we compute the Levenshtein metric over words rather than tokens for improved performance. We then isolate titles with nonzero but low similarity scores via the following conditions. Given two book title strings ${s}_{1}$ and ${s}_{2}$, and constants $\alpha$ and $\beta$:
|
| 266 |
+
|
| 267 |
+
$$
\max \left\{ \left| s_1 \right|, \left| s_2 \right| \right\} - \min \left\{ \left| s_1 \right|, \left| s_2 \right| \right\} < \alpha \cdot \min \left\{ \left| s_1 \right|, \left| s_2 \right| \right\}
\quad \text{and} \quad
\text{Levenshtein}\left( s_1, s_2 \right) \leq \beta \cdot \max \left\{ \left| s_1 \right|, \left| s_2 \right| \right\},
$$
|
| 272 |
+
|
| 273 |
+
where $\left| s \right|$ denotes the number of words in the string $s$, and 0.75 and 0.3 are used as values for $\alpha$ and $\beta$, respectively. These constants were chosen empirically so as to exclude obviously different titles while leaving a reasonable number of pairs (about ${10}\mathrm{k}$) for a further manual check. Titles that are manually matched are removed to create the final corpus; 13% of the original LIBRISPEECH LM corpus was filtered out by the aforementioned steps.
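
To make the matching rule concrete, here is a minimal Python sketch of the word-level Levenshtein filter under the two conditions above; the function names and the example call are illustrative and are not taken from the released pipeline.

```python
# Sketch of the title-matching filter (word-level Levenshtein, alpha = 0.75,
# beta = 0.3). Titles are assumed to be already normalized.

def word_levenshtein(a, b):
    """Edit distance between two titles, computed over words rather than characters."""
    a, b = a.split(), b.split()
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (wa != wb)))    # substitution
        prev = cur
    return prev[-1]

def is_candidate_match(s1, s2, alpha=0.75, beta=0.3):
    """Flag near-duplicate titles for manual review, per the two conditions above."""
    n1, n2 = len(s1.split()), len(s2.split())
    lo, hi = min(n1, n2), max(n1, n2)
    return (hi - lo < alpha * lo) and (word_levenshtein(s1, s2) <= beta * hi)

print(is_candidate_match("the adventures of tom sawyer", "adventures of tom sawyer"))  # True
```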
|
| 274 |
+
|
| 275 |
+
Before training LMs, we normalize the filtered corpus so as to mimic the original normalization procedure used for LIBRISPEECH. 88% of our normalized and filtered corpus has text identical to the original LIBRISPEECH LM corpus after normalization. Because we use a different tokenizer, sentence boundaries may differ between the corpora, as may abbreviations (e.g., we map '&c' to 'et cetera').
|
| 276 |
+
|
| 277 |
+
A 4-gram language model is trained on the resulting corpus using the KenLM toolkit (Heafield, 2011) with the top ${200}\mathrm{k}$ words as the vocabulary. The model is trained without pruning (${183}\mathrm{k}$ of the top ${200}\mathrm{k}$ words coincide with those of the original LIBRISPEECH LM corpus). This model is then used at beam-search decoding time, in conjunction with an acoustic model trained on LIBRISPEECH, to generate pseudo-labels on the subset of LIBRIVOX detailed in Section 3. During beam-search decoding we use a lexicon constructed from the LIBRISPEECH training sets only.
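
As a small illustration of the vocabulary step, the sketch below counts words in the filtered corpus and keeps the 200k most frequent; the file paths are hypothetical, and the actual $n$-gram estimation is done separately with KenLM.

```python
# Illustrative sketch of building the top-200k word vocabulary for the 4-gram LM.
from collections import Counter

counts = Counter()
with open("librispeech_lm_corpus_filtered.txt") as f:  # hypothetical path
    for line in f:
        counts.update(line.split())

vocab = [w for w, _ in counts.most_common(200_000)]
with open("vocab_200k.txt", "w") as f:                 # hypothetical output
    f.write("\n".join(vocab))
```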
|
| 278 |
+
|
| 279 |
+
The perplexity difference between the 4-gram LM trained on the filtered corpus and the 4-gram LM trained on original LIBRISPEECH LM corpus is small. The word perplexity of each model is shown in Table 1. Beam-search decoding of a Transformer AM trained on LIBRISPEECH with an LM trained on the filtered corpus results in only a ${0.05}\%$ absolute WER increase on dev-other compared to decoding with an $n$ -gram trained on the full corpus.
|
| 280 |
+
|
| 281 |
+
## B. Decoding
|
| 282 |
+
|
| 283 |
+
### B.1. Beam-search Decoder
|
| 284 |
+
|
| 285 |
+
In our experiments, we use lexicon-based and lexicon-free beam-search decoders following (Collobert et al., 2016; Likhomanenko et al.,2019) with either $n$ -gram or GCNN LMs. The lexicon-based decoder, whose search space is limited to the words in the lexicon, is used for CTC models with a word-level LM. The lexicon-free decoder is capable of generating words with arbitrary spelling and is used for S2S models with a word-piece LM. The decoder takes as input posteriors from an acoustic model, a prefix trie built on a lexicon, and an external LM. We tune the language model weight $\alpha$ and the word insertion penalty $\beta$ on validation sets (dev-clean and dev-other). The decoder outputs a transcription $\widehat{\mathbf{y}}$ that maximizes
|
| 286 |
+
|
| 287 |
+
$$
\log P_{AM}\left( \widehat{\mathbf{y}} \mid \mathbf{x} \right) + \alpha \log P_{LM}\left( \widehat{\mathbf{y}} \right) + \beta \left| \widehat{\mathbf{y}} \right| .
$$
|
| 290 |
+
|
| 291 |
+
To stabilize the Seq2Seq beam search, we introduce an EOS penalty $\gamma$ applied to hypotheses that end in an end-of-sentence token. $\gamma$ is tuned together with the other hyper-parameters, and our experiments show that this strategy effectively prevents the decoder from stopping early. To improve decoding efficiency, we also incorporate the thresholding technique of (Hannun et al., 2019) and the strategies described in (Zeghidour et al., 2018), including hypothesis merging, score caching, and batched LM forwarding. For CTC decoding, following (Park et al., 2018), only the blank token is considered if its posterior probability is greater than 0.95.
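
The following sketch shows how the decoder score above, the EOS penalty, and the CTC blank thresholding fit together; the `Hypothesis` fields and helper names are assumptions made for illustration and do not mirror the wav2letter++ implementation.

```python
# Minimal sketch of the decoding score: AM log-probability plus a weighted LM
# score, a word insertion penalty, and (for Seq2Seq) an EOS penalty.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    words: list          # decoded words so far
    am_logprob: float    # log P_AM(y | x) accumulated by the beam search
    lm_logprob: float    # log P_LM(y) from the external LM
    ended_with_eos: bool = False

def decoder_score(hyp, alpha, beta, gamma=0.0):
    score = hyp.am_logprob + alpha * hyp.lm_logprob + beta * len(hyp.words)
    if hyp.ended_with_eos:
        score += gamma   # EOS penalty (gamma <= 0) discourages early stopping
    return score

# CTC blank thresholding: only the blank transition is kept when it dominates.
def should_only_consider_blank(frame_posteriors, blank_idx, threshold=0.95):
    return frame_posteriors[blank_idx] > threshold
```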
|
| 292 |
+
|
| 293 |
+
### B.2. Rescoring
|
| 294 |
+
|
| 295 |
+
After acquiring the transcriptions of the $N$ -best hypotheses from the one-pass beam-search decoder, we use an external word-level GCNN LM and a Transformer LM to evaluate their log-probabilities, denoted as $\log {P}_{1}\left( \widehat{\mathbf{y}}\right)$ and $\log {P}_{2}\left( \widehat{\mathbf{y}}\right)$ respectively. We then perform rescoring to reorder the hypotheses according to the following score:
|
| 296 |
+
|
| 297 |
+
$$
\log P_{AM}\left( \widehat{\mathbf{y}} \mid \mathbf{x} \right) + \alpha_1 \log P_1\left( \widehat{\mathbf{y}} \right) + \alpha_2 \log P_2\left( \widehat{\mathbf{y}} \right) + \beta \left| \widehat{\mathbf{y}} \right| ,
$$
|
| 300 |
+
|
| 301 |
+
where $\alpha_1, \alpha_2, \beta$ are hyper-parameters of the rescoring algorithm optimized on the validation set, and $\left| \widehat{\mathbf{y}} \right|$ is the transcription length in characters (including the spaces between words). To diversify the hypotheses in the beam and increase the probability that the correct transcription is included, we typically relax the threshold in the decoder and increase the beam size when dumping beam candidates.
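
A minimal sketch of this rescoring rule is given below, assuming the GCNN and Transformer LM scorers are available as callables; all names are illustrative.

```python
# Sketch of N-best rescoring with two external LMs; alpha1, alpha2 and beta
# are tuned on the validation sets.
def rescore(hypotheses, gcnn_logprob, transf_logprob, alpha1, alpha2, beta):
    """hypotheses: list of (text, am_logprob); returns the best-scoring hypothesis."""
    def score(text, am_logprob):
        n_chars = len(text)  # transcription length in characters, spaces included
        return (am_logprob
                + alpha1 * gcnn_logprob(text)
                + alpha2 * transf_logprob(text)
                + beta * n_chars)
    return max(hypotheses, key=lambda h: score(*h))
```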
|
| 302 |
+
|
| 303 |
+
---
|
| 304 |
+
|
| 305 |
+
${}^{5}$ http://www.openslr.org/11/
|
| 306 |
+
|
| 307 |
+
${}^{6}$ https://www.gutenberg.org/
|
| 308 |
+
|
| 309 |
+
---
|
| 310 |
+
|
| 311 |
+
## C. Ablations
|
| 312 |
+
|
| 313 |
+
### C.1. Varying the amount of unlabeled audio
|
| 314 |
+
|
| 315 |
+
In this study, we train on several randomly-selected subsets of pseudo-labels from the original collection generated as described in Section 3. Results are given in Table 4. Increasing the amount of pseudo-labels strictly improves performance. The listed ${53.8}\mathrm{k}$-hour result uses the fully-prepared dataset outlined in Section 3. WERs are reported without decoding, after ${800}\mathrm{k}$ training iterations.
|
| 316 |
+
|
| 317 |
+
Table 4. WERs of a Transformer AM architecture outlined in section 2.1 trained with Seq2Seq loss on LIBRISPEECH with different amounts of pseudo-labeled audio from LIBRIVOX.
|
| 318 |
+
|
| 319 |
+
<table><tr><td>TRAINING DATASET (Hours)</td><td>DEV-CLEAN</td><td>DEV-OTHER</td></tr><tr><td>LS ONLY</td><td>2.54</td><td>6.67</td></tr><tr><td>LS + 1 K LV</td><td>2.35</td><td>5.56</td></tr><tr><td>LS + 3K LV</td><td>2.21</td><td>5.16</td></tr><tr><td>$\mathrm{{LS}} + {10}\mathrm{{KLV}}$</td><td>2.11</td><td>4.95</td></tr><tr><td>LS + 53.8K LV</td><td>2.11</td><td>4.59</td></tr></table>
|
| 320 |
+
|
| 321 |
+
Table 5. WERs of a Transformer AM when trained with pseudo-labels generated with a decoder integrating an LM that contains overlapping text with unlabeled audio versus an LM with no overlap. Results are shown after decoding with the word 4-gram language model described in Section 2.2.
|
| 322 |
+
|
| 323 |
+
<table><tr><td>MODEL</td><td>OVERLAP</td><td>DEV-OTHER</td><td>TEST-OTHER</td></tr><tr><td rowspan="2">TRANSF. S2S</td><td>No</td><td>4.58</td><td>4.90</td></tr><tr><td>Yes</td><td>4.51</td><td>4.87</td></tr><tr><td rowspan="2">TRANSF. CTC</td><td>No</td><td>4.92</td><td>5.47</td></tr><tr><td>Yes</td><td>4.80</td><td>5.33</td></tr></table>
|
| 324 |
+
|
| 325 |
+
### C.2. Generating pseudo-labels with an LM containing overlapping text
|
| 326 |
+
|
| 327 |
+
As discussed in Appendix A, generating pseudo-labels with an LM trained on a corpus that includes ground-truth text for the unlabeled audio introduces an overlap that may unrealistically improve pseudo-label quality. We show that using an LM trained with a small amount of such overlapping text has only a small effect on the performance of models trained on the resulting pseudo-labels.
|
| 328 |
+
|
| 329 |
+
Table 5 contains results for Transformer AMs with both CTC and Seq2Seq losses, with the same architecture as described in Section 2.1, trained on pseudo-labels generated with a decoding step that uses an LM trained on an overlapping versus a non-overlapping corpus. There is a small improvement in dev-other performance for pseudo-labels generated with the overlapping LM, but both models generalize very similarly.
|
| 330 |
+
|
| 331 |
+
### C.3. Training on pseudo-labels only
|
| 332 |
+
|
| 333 |
+
Models trained on LIBRIVOX pseudo-labels alone outperform models trained on LIBRISPEECH. As outlined in Section 5, all acoustic models are trained on a combination of LIBRISPEECH and pseudo-labeled LIBRIVOX audio. In this setup, it is difficult to disentangle the contribution of the pseudo-labeled audio from that of the supervised LIBRISPEECH data. To test the quality of the pseudo-labels in isolation, we trained a CTC-based Transformer model similar to that described in Section 2.1 on the pseudo-labels only, and compare it directly with the CTC-based Transformer AM used to generate the pseudo-labels described in Section 3. We compare the resulting AM-only performance on the LIBRISPEECH development sets. Without decoding, the model trained only on LIBRIVOX pseudo-labels achieves WERs of 2.38% and 5.43% on dev-clean and dev-other respectively, improving over the LIBRISPEECH-only baseline's 2.99% and 7.31%. The volume, quality, and diversity of the generated pseudo-labels alone are thus sufficient to outperform a model trained only on LIBRISPEECH. The model trained on both LIBRISPEECH and LIBRIVOX pseudo-labels further improves to 2.28% and 4.99% on dev-clean and dev-other, respectively.
|
| 334 |
+
|
| 335 |
+
## D. End-to-End Acoustic Models Learn a Language Model: Removing the LM from ASR
|
| 336 |
+
|
| 337 |
+
In the sections that follow, we show two results. We first give a simple experimental framework to demonstrate that acoustic models trained on speech learn nontrivial language models, and that training on additional audio facilitates learning better acoustic representations. We then show that with a large collection of pseudo-labeled audio, well-trained acoustic models no longer benefit much from decoding with an external language model in most cases.
|
| 338 |
+
|
| 339 |
+
### D.1. AMs learning an LM: transcribing shuffled audio
|
| 340 |
+
|
| 341 |
+
The language modeling properties of end-to-end acoustic models are briefly discussed in (Chan et al., 2016), where an AM trained with CTC is shown to learn an implicit language model based on its predicted posteriors for words with multiple spelling variants. Other results show that fusing an LM with an AM during training can improve performance (Sriram et al., 2017; Chorowski & Jaitly, 2016; Wu et al., 2016). These previous works use RNN-based acoustic models, which possess unbounded receptive fields and process most or all of an input utterance during a single forward pass. We show that modern convolutional architectures, which also have large receptive fields, likely learn word representations directly from audio as well.
|
| 342 |
+
|
| 343 |
+

|
| 344 |
+
|
| 345 |
+
Figure 2. dev-other WERs without decoding across acoustic models and loss functions for original and shuffled versions of dev-other across three settings. Each plot uses the following original and shuffled audio: Left: original and shuffled dev-other audio segmented using ASG. Middle: audio generated by TTS vocoder for the original and shuffled transcriptions from dev-other. Right: original and shuffled audio for a subset of dev-other recorded by the paper's authors.
|
| 346 |
+
|
| 347 |
+
If an AM learns a robust LM, it will be less effective at predicting utterances with high underlying word perplexity; the model must instead rely on its acoustic representations to predict words without context, which provides a good proxy for the quality of those representations. In the experiments that follow, we introduce a simple "shuffled transcription" task in which models transcribe LIBRISPEECH dev-other audio corresponding to both unshuffled and shuffled transcriptions. Experiments are performed in three audio settings to eliminate bias when scrambling words. First, with a TTS model: unshuffled and shuffled sentences are forwarded through a WaveRNN vocoder (Kalchbrenner et al., 2018) trained on the LJSpeech dataset ${}^{7}$ using the Mozilla TTS toolkit ${}^{8}$. In the second setting, audio is segmented at the word level using a stride-2 convolutional letter-based AM trained with the ASG loss (Collobert et al., 2016), then re-spliced together in the given shuffled order. Finally, the paper's authors recorded unshuffled and shuffled utterances from a subset of dev-other.
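
For the TTS setting, producing the shuffled references amounts to permuting the words of every transcription, as in the sketch below; the file paths and transcription format are assumptions made for illustration.

```python
# Sketch of producing shuffled dev-other references for the TTS setting.
import random

random.seed(0)
with open("dev-other.trans.txt") as fin, open("dev-other.shuffled.txt", "w") as fout:
    for line in fin:
        utt_id, *words = line.split()   # assumed "utt_id word1 word2 ..." format
        random.shuffle(words)
        fout.write(f"{utt_id} {' '.join(words)}\n")
```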
|
| 348 |
+
|
| 349 |
+
Figure 2 shows the WERs across audio settings on dev-other without decoding. As expected, both CTC and Seq2Seq models perform poorly across the board on shuffled audio. Since we are interested in relative rather than absolute WER values across models, losses, and datasets, the main outcome of Figure 2 is that AMs trained with LIBRIVOX pseudo-labels learn better acoustic representations, which improves performance on shuffled inputs for which their internal LMs are not predictive.
|
| 350 |
+
|
| 351 |
+
### D.2. With enough unlabeled audio, decoding with an LM doesn't improve performance
|
| 352 |
+
|
| 353 |
+
The importance of the language model to the success of pseudo-labeling is known; (Kahn et al., 2019a) show that in the end-to-end setting, as the quality of the language model used to generate the pseudo-labels decreases even marginally, the quality of the model trained on the resulting pseudo-labels decreases as well. In what follows, we show that after the self-training procedure, decoding with a language model yields only very small improvements for acoustic models trained on LIBRIVOX pseudo-labels (which were themselves generated with the help of a language model), in contrast to the larger gains observed for models trained only on LIBRISPEECH.
|
| 354 |
+
|
| 355 |
+
Results are shown in Figure 3. We use a beam-search decoding procedure without an LM ("Zero-LM") to isolate the effect of beam search on WER, and evaluate on dev-other to provide a better lower bound on how much decoding with the LM can improve performance (decoder parameters are also optimized on dev-other). The models for which results are shown are trained on pseudo-labels from LIBRIVOX generated with an $n$-gram language model trained on a corpus without overlapping text (see the ablation in Appendix C and Section 2.2). Decoding with the LM gives little to no gain for models trained on LIBRISPEECH +
|
| 356 |
+
|
| 357 |
+
---
|
| 358 |
+
|
| 359 |
+
${}^{7}$ https://keithito.com/LJ-Speech-Dataset/
|
| 360 |
+
|
| 361 |
+
${}^{8}$ https://github.com/mozilla/TTS
|
| 362 |
+
|
| 363 |
+
---
|
| 364 |
+
|
| 365 |
+

|
| 366 |
+
|
| 367 |
+
Figure 3. WER on dev-other for models trained on LIBRISPEECH and LIBRISPEECH + LIBRIVOX after decoding with and without the 4-gram LM described in Section 2.2. The gain from LM beam-search decoding for models trained on LIBRIVOX is much smaller compared to that for models trained on LIBRISPEECH.
|
| 368 |
+
|
| 369 |
+
LIBRIVOX, and a much more significant gain for those models trained only on LIBRISPEECH, suggesting that information from the 4-gram LM used to generate pseudo-labels on LIBRIVOX has thoroughly diffused into AMs trained on those labels. Full results can be found in Table 3.
|
| 370 |
+
|
| 371 |
+
## E. Experiment Details
|
| 372 |
+
|
| 373 |
+
Comprehensive WER results for LIBRISPEECH and LIBRIVOX acoustic models, including greedy decoding, beam-search decoding with different LMs, and beam rescoring, can be found in Table 3. This section mainly focuses on the details of how we optimize the beam-search decoding and rescoring procedures for our acoustic models.
|
| 374 |
+
|
| 375 |
+
### E.1. Beam-Search Decoding
|
| 376 |
+
|
| 377 |
+
When beam-search decoding, we use the dev-clean and dev-other sets as validation sets and use random search to optimize the decoding hyper-parameters. The search ranges of those hyper-parameters are listed in Table 6. We use between 64 and 128 runs in each random search, with hyper-parameter values uniformly sampled from the given ranges. It is worth noting that the optimal ranges for the language model weight for models trained on LIBRISPEECH are higher than those found for LIBRIVOX models, as shown in Table 7. This is conceivably additional evidence that models trained with additional audio rely less on language models.
|
| 378 |
+
|
| 379 |
+
### E.2. Rescoring
|
| 380 |
+
|
| 381 |
+
To perform rescoring, we first dump all hypotheses proposed during beam-search decoding using the optimal hyper-parameters found with random search. When dumping candidates, the beam size, token beam size, and beam threshold are increased so as to increase the number of proposed hypotheses on which to run rescoring. Further details are listed in Table 8. We find optimal values of the rescoring hyper-parameters $\alpha_1, \alpha_2$ and $\beta$ (see Appendix B.2) via a grid search for CTC models ($\alpha_1, \beta \in \left\lbrack 0, 1 \right\rbrack$ and $\alpha_2 \in \left\lbrack -0.3, 0.3 \right\rbrack$ with a grid step of 0.1), and a random search for sequence-to-sequence models ($\alpha_1 \in \left\lbrack 0, 2.5 \right\rbrack$, $\alpha_2 \in \left\lbrack -1, 1 \right\rbrack$, $\beta \in \left\lbrack -3, 3 \right\rbrack$ with 1000 attempts).
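
The two search strategies can be sketched as follows, assuming a placeholder `evaluate_wer` that rescores the dumped beam with a candidate $(\alpha_1, \alpha_2, \beta)$ and returns the validation WER; the function names are illustrative.

```python
# Sketch of the hyper-parameter searches: a grid for CTC models and uniform
# random sampling for Seq2Seq models, over the ranges stated above.
import itertools
import random

def grid_search_ctc(evaluate_wer, step=0.1):
    a1_vals = [round(i * step, 2) for i in range(0, 11)]         # alpha_1 in [0, 1]
    a2_vals = [round(-0.3 + i * step, 2) for i in range(0, 7)]   # alpha_2 in [-0.3, 0.3]
    b_vals = a1_vals                                             # beta in [0, 1]
    return min(itertools.product(a1_vals, a2_vals, b_vals),
               key=lambda p: evaluate_wer(*p))

def random_search_s2s(evaluate_wer, attempts=1000, seed=0):
    rng = random.Random(seed)
    trials = [(rng.uniform(0, 2.5), rng.uniform(-1, 1), rng.uniform(-3, 3))
              for _ in range(attempts)]
    return min(trials, key=lambda p: evaluate_wer(*p))
```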
|
| 382 |
+
|
| 383 |
+
Table 6. Hyper-parameter values and ranges used in a random search for beam-search decoding with $n$ -gram (top block) and GCNN (bottom block) LMs.
|
| 384 |
+
|
| 385 |
+
<table><tr><td rowspan="2">PARAMETERS</td><td colspan="2">LIBRISPEECH</td><td colspan="2">LIBRIVOX</td></tr><tr><td>CTC</td><td>S2S</td><td>CTC</td><td>S2S</td></tr><tr><td>BEAM</td><td>500</td><td>50,100</td><td>500</td><td>20,50,100</td></tr><tr><td>TOKEN BEAM</td><td>100</td><td>10,50</td><td>100</td><td>3,5,10</td></tr><tr><td>LM WEIGHT</td><td>$\left\lbrack {0,3}\right\rbrack$</td><td>$\left\lbrack {0,2}\right\rbrack$</td><td>0,1.5</td><td>$\left\lbrack {0,1}\right\rbrack$</td></tr><tr><td>THRESHOLD</td><td>100</td><td>10,50</td><td>100</td><td>5,10,50</td></tr><tr><td>WORD INSERT.</td><td>$\left\lbrack {-3,3}\right\rbrack$</td><td>-</td><td>$\left\lbrack {-3,3}\right\rbrack$</td><td>-</td></tr><tr><td>EOS-PENALTY</td><td>-</td><td>$\left\lbrack {-{10},0}\right\rbrack$</td><td>-</td><td>$\left\lbrack {-{10},0}\right\rbrack$</td></tr><tr><td>BEAM</td><td>250</td><td>50</td><td>250</td><td>20,50,100</td></tr><tr><td>TOKEN BEAM</td><td>100</td><td>10.18</td><td>100</td><td>3,5,10</td></tr><tr><td>LM WEIGHT</td><td>$\left\lbrack {0,3}\right\rbrack$</td><td>$\left\lbrack {0,2}\right\rbrack$</td><td>0,1.5</td><td>$\left\lbrack {0,{0.8}}\right\rbrack$</td></tr><tr><td>THRESHOLD</td><td>20</td><td>10.15</td><td>20</td><td>5,10,50</td></tr><tr><td>WORD INSERT.</td><td>[-3,3]</td><td>-</td><td>$\left\lbrack {-3,3}\right\rbrack$</td><td>-</td></tr><tr><td>EOS-PENALTY</td><td>-</td><td>$\left\lbrack {-{10},0}\right\rbrack$</td><td>-</td><td>$\left\lbrack {-{10},0}\right\rbrack$</td></tr></table>
|
| 386 |
+
|
| 387 |
+
Table 7. Optimal LM weight ranges (based on WER) for beam-search decoding with $n$ -gram (top block) and GCNN (bottom block) LMs found via random search.
|
| 388 |
+
|
| 389 |
+
<table><tr><td colspan="3">LIBRISPEECH</td><td colspan="2">LIBRIVOX</td></tr><tr><td>DATA</td><td>CTC</td><td>S2S</td><td>CTC</td><td>S2S</td></tr><tr><td>CLEAN</td><td>$\left\lbrack {{0.8},{1.4}}\right\rbrack$</td><td>0.6,1.1</td><td>$\left\lbrack {{0.2},{0.4}}\right\rbrack$</td><td>$\left\lbrack {{0.0},{0.2}}\right\rbrack$</td></tr><tr><td>OTHER</td><td>$\left\lbrack {{1.1},{1.9}}\right\rbrack$</td><td>$\left\lbrack {{0.6},{1.2}}\right\rbrack$</td><td>$\left\lbrack {{0.5},{0.7}}\right\rbrack$</td><td>$\left\lbrack {{0.1},{0.5}}\right\rbrack$</td></tr><tr><td>CLEAN</td><td>$\left\lbrack {{0.4},{0.8}}\right\rbrack$</td><td>$\left\lbrack {{0.2},{0.5}}\right\rbrack$</td><td>$\left\lbrack {{0.2},{0.5}}\right\rbrack$</td><td>$\left\lbrack {{0.0},{0.4}}\right\rbrack$</td></tr><tr><td>OTHER</td><td>$\left\lbrack {{0.5},{1.1}}\right\rbrack$</td><td>$\left\lbrack {{0.3},{0.7}}\right\rbrack$</td><td>$\left\lbrack {{0.3},{0.6}}\right\rbrack$</td><td>$\left\lbrack {{0.2},{0.4}}\right\rbrack$</td></tr></table>
|
| 390 |
+
|
| 391 |
+
Table 8. Parameters values used when dumping beam candidates for rescoring with $n$ -gram (top block) and GCNN (bottom block) LMs.
|
| 392 |
+
|
| 393 |
+
<table><tr><td>PARAMETERS</td><td>CTC</td><td>S2S</td></tr><tr><td>BEAM</td><td>2500</td><td>250</td></tr><tr><td>TOKEN BEAM</td><td>1500</td><td>150</td></tr><tr><td>THRESHOLD</td><td>5000</td><td>150</td></tr><tr><td>BEAM</td><td>250</td><td>250</td></tr><tr><td>TOKEN BEAM</td><td>100</td><td>100</td></tr><tr><td>THRESHOLD</td><td>20</td><td>100</td></tr></table>
|
| 394 |
+
|
| 395 |
+
## F. Generating Shuffled Audio
|
| 396 |
+
|
| 397 |
+
This section provides details of how we generate the shuffled utterances used in the experiments in Section D.1. Since each method could introduce its own systematic error, we use several complementary setups before drawing conclusions. For the two methods that generate new audio or reuse existing audio (TTS and segmentation), we shuffle dev-other five times and report the mean and standard deviation (as error bars) in Figure 2.
|
| 398 |
+
|
| 399 |
+
### F.1. TTS
|
| 400 |
+
|
| 401 |
+
For each sentence in dev-other, we randomly shuffle its words to form a new sentence. We run the resulting text through a TTS model as outlined in Section C to create synthetic audio for the scrambled sentences. While simple and easy to implement, this method introduces the TTS model's intrinsic errors into the ablation and can amplify them: in particular, the model struggles with many of the rare words present in dev-other. The TTS audio also remains far from natural human speech.
|
| 402 |
+
|
| 403 |
+
### F.2. Segmentation
|
| 404 |
+
|
| 405 |
+
With this method, we first force-align the transcriptions of dev-other to the existing audio using a letter-based stride-two ASG model as outlined in Section C, collecting the beginning timestamp and duration of each word. Then, to avoid splicing apart words that are read closely together, the audio is only split where a silence longer than 130 milliseconds is detected (the split is made in the middle of the silence segment). Finally, the audio chunks are randomly shuffled and re-assembled into new utterances. Since this ablation aims to remove LM-friendly context from the audio, we filter the resulting recombined audio samples: we drop all utterances that have only one segment, or that have at least one segment containing more than 6 words. After filtering, 1969 of the 2864 samples in dev-other remain. The distribution of the number of words in each of the resulting segments is shown in Figure 4.
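
A minimal sketch of the segmentation, shuffling, and filtering steps is shown below; the word-alignment data structure is an assumption made for illustration.

```python
# Sketch of silence-based segmentation (> 130 ms), shuffling, and filtering.
import random

def split_on_silence(word_alignments, min_silence=0.130):
    """word_alignments: time-ordered list of (word, start_sec, duration_sec)."""
    segments, current = [], []
    for i, (word, start, dur) in enumerate(word_alignments):
        current.append(word)
        if i + 1 < len(word_alignments):
            gap = word_alignments[i + 1][1] - (start + dur)
            if gap > min_silence:        # cut in the middle of a long silence
                segments.append(current)
                current = []
    if current:
        segments.append(current)
    return segments

def shuffled_utterance(segments, max_words=6, rng=random.Random(0)):
    # Keep only utterances with more than one segment and no segment over 6 words.
    if len(segments) <= 1 or any(len(s) > max_words for s in segments):
        return None
    segments = segments[:]
    rng.shuffle(segments)
    return " ".join(w for seg in segments for w in seg)
```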
|
| 406 |
+
|
| 407 |
+

|
| 408 |
+
|
| 409 |
+
Figure 4. Distribution of all $n$ -grams in the obtained segments of filtered dev-other (1969 samples with 16,362 segments in total).
|
| 410 |
+
|
| 411 |
+
Unlike the TTS method described above, the segmentation method reuses as much audio as possible from dev-other. That said, neither the forced alignment nor the segmentation technique handles all word boundaries correctly. As such, the resulting audio may contain incomplete words and residual LM-friendly context.
|
| 412 |
+
|
| 413 |
+
### F.3. Recording
|
| 414 |
+
|
| 415 |
+
The paper's authors recorded 184 randomly selected sentences from dev-other as well as a single set of shuffled utterances. The unshuffled recorded audio has the lowest WER among all the three methods. We plan to complete a collection of unshuffled and shuffled audio for dev-other in future work.
|
| 416 |
+
|
| 417 |
+
### F.4. Perplexity
|
| 418 |
+
|
| 419 |
+
As shown in Table 9, there are large gaps between the perplexity of transcriptions in the original and shuffled sets across all settings. Our shuffling strategy thus removes important word context and breaks the alignment between the distribution of words in the audio and the LM. The WER gap between the two sets is therefore a proxy for the amount of language modeling an acoustic model may implicitly perform.
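
For reference, perplexities of the kind reported in Table 9 can be estimated roughly as in the sketch below, assuming the KenLM Python bindings and the word-level 4-gram ARPA model are available; the paper's handling of unknown words is omitted, and the model path is hypothetical.

```python
# Rough word-level perplexity of a set of transcriptions with a KenLM model.
import math
import kenlm

model = kenlm.Model("4gram_word.arpa")  # hypothetical path

def corpus_perplexity(sentences):
    log10_prob, n_tokens = 0.0, 0
    for s in sentences:
        log10_prob += model.score(s, bos=True, eos=True)  # log10 P(s)
        n_tokens += len(s.split()) + 1                     # words + </s>
    return math.pow(10.0, -log10_prob / n_tokens)
```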
|
| 420 |
+
|
| 421 |
+
Table 9. Performance of word-level 4-gram and Transformer LMs from Table 1 on original and shuffled audio transcriptions generated from LIBRISPEECH dev-other.
|
| 422 |
+
|
| 423 |
+
<table><tr><td>SETTING</td><td>SHUFFLED</td><td>4-GRAM LM</td><td>TRANSF. LM</td></tr><tr><td>TTS</td><td>No</td><td>147</td><td>50</td></tr><tr><td>TTS</td><td>Yes</td><td>${749} \pm 2$</td><td>${389} \pm 2$</td></tr><tr><td>SEGMENT.</td><td>No</td><td>167</td><td>56</td></tr><tr><td>SEGMENT.</td><td>Yes</td><td>${827} \pm 5$</td><td>${743} \pm 9$</td></tr><tr><td>Recording</td><td>No</td><td>162</td><td>49</td></tr><tr><td>Recording</td><td>Yes</td><td>3807</td><td>2995</td></tr></table>
|
| 424 |
+
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/OSVxDDc360z/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
| 1 |
+
$\mathbf{{End} - {to} - {End}\;{ASR} : }$
|
| 2 |
+
|
| 3 |
+
from Supervised to Semi-Supervised Learning with Modern Architectures
|
| 4 |
+
|
| 5 |
+
Gabriel Synnaeve ${}^{ * }{}^{1}$ Qiantong Xu ${}^{ * }{}^{1}$ Jacob Kahn ${}^{ * }{}^{1}$ Tatiana Likhomanenko ${}^{ * }{}^{1}$ Edouard Grave ${}^{ * }{}^{1}$ Vineel Pratap ${}^{1}$ Anuroop Sriram ${}^{1}$ Vitaliy Liptchinsky ${}^{1}$ Ronan Collobert ${}^{ * }{}^{1}$
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
We study pseudo-labeling for the semi-supervised training of ResNet, Time-Depth Separable Con-vNets, and Transformers for speech recognition, with either CTC or Seq2Seq loss functions. We perform experiments on the standard LIB-RISPEECH dataset, and leverage additional unlabeled data from LIBRIVOX through pseudo-labeling. We show that while Transformer-based acoustic models have superior performance with the supervised dataset alone, semi-supervision improves all models across architectures and loss functions and bridges much of the performance gaps between them. In doing so, we reach a new state-of-the-art for end-to-end acoustic models decoded with an external language model in the standard supervised learning setting, and a new absolute state-of-the-art with semi-supervised training. Finally, we study the effect of leveraging different amounts of unlabeled audio, propose several ways of evaluating the characteristics of unlabeled audio which improve acoustic modeling, and show that acoustic models trained with more audio rely less on external language models.
|
| 10 |
+
|
| 11 |
+
§ 1. INTRODUCTION
|
| 12 |
+
|
| 13 |
+
End-to-end speech recognition models are simpler to implement and train than bootstrapped systems. Even given recent promising results from these systems, best-results for common benchmarks are still dominated by classical ASR models; systems requiring force alignment may leave some performance aside for each training step. We set out to study end-to-end systems on LIBRISPEECH (Panayotov et al., 2015) and, without any algorithmic contribution, see if they can be made to perform as well as more complex training pipelines. The difficulties involved in properly optimizing acoustic models with Connectionist Temporal Classification (CTC) (Graves et al., 2006) or sequence-to-sequence (Seq2Seq) (Sutskever et al., 2014) (v.s. cross-entropy, for instance) combined with more readily-available regularization techniques for classical pipelines make this comparison challenging. Our best acoustic models nonetheless reach 5.17% WER on test-other, showing that end-to-end models can compete with traditional pipelines.
|
| 14 |
+
|
| 15 |
+
|
| 16 |
+
|
| 17 |
+
Figure 1. WERs on dev-other across AM architectures and loss functions. Left: WERs of different models trained on LIB-RISPEECH with and without beam-search decoding ("no LM" refers to the greedy decoding). Transformer AM architectures outperform others by a large margin. Right: WERs of models trained on LIBRIVOX. All models trained on LIBRIVOX significantly outperform their LIBRISPEECH counterparts. The gap between Transformer AMs and other models is much smaller with LIBRIVOX data.
|
| 18 |
+
|
| 19 |
+
As in other domains, self and semi-supervised learning in ASR, where a pretrained network generates and trains on its own labels, yields improvements (Vesely et al., 2017). In end-to-end ASR, pseudo-labeling and self-training can be quite effective, and its effectiveness is further improved when more data is available (Kahn et al., 2019a). In this setting, we train a model on LIBRISPEECH, then use that model in conjunction with a language model to generate pseudo-labels from unlabeled audio. We show that with this training scheme, our results without an external language model (LM) reach state-of-the-art results that use an external language model, with 2.28% and 4.88% Word Error Rate (WER) on test-clean and test-other respectively. With LM beam-search decoding and rescoring, we reach 2.09% and 4.11% WER on the test set.
|
| 20 |
+
|
| 21 |
+
*Equal contribution ${}^{1}$ Facebook AI Research, Menlo Park & New York, US, and Paris, France. Correspondence to: Gabriel Synnaeve $<$ gab@fb.com>.
|
| 22 |
+
|
| 23 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 24 |
+
|
| 25 |
+
While many advances in end-to-end ASR come as the result of neural architecture search (Prabhavalkar et al., 2017; Zhou et al., 2018; Chiu et al., 2018b), we additionally show that simple semi-supervision via pseudo-labeling significantly bridges the performance gap between a variety of different model architectures and loss functions, as shown in Figure 1. In particular, with enough unlabeled audio, Transformer, ResNet, and depthwise-separable convolution-based acoustic models give similar performance with both CTC and Seq2Seq loss functions, suggesting that new techniques in semi-supervision may facilitate equally-significant gains in ASR performance while being applicable to a multitude of end-to-end setups.
|
| 26 |
+
|
| 27 |
+
§ 2. MODELS
|
| 28 |
+
|
| 29 |
+
§ 2.1. ACOUSTIC MODELS
|
| 30 |
+
|
| 31 |
+
In this section, we present the three families of acoustic models (AMs) studied. All AMs output probability distributions over tokens. In particular, we use a set of ${10}\mathrm{k}$ word pieces (Schuster & Nakajima, 2012; Kudo & Richardson, 2018) generated from the SentencePiece toolkit ${}^{1}$ . The choice to use a fixed set of ${10}\mathrm{k}$ word pieces is made for the simplicity of the comparative study, not the result of a limitation. Similarly, all AMs take 80-channel log-mel filterbanks as input, with STFTs computed on Hamming windows strided by ${10}\mathrm{\;{ms}}$ . This window size is ${25}\mathrm{\;{ms}}$ for Transformer models and ${30}\mathrm{\;{ms}}$ for TDS and ResNet models. All models are trained end-to-end with either CTC or Seq2Seq loss. Given the huge difference between the amounts of data, we prepare two sets of architectures: one for training only on labeled LIBRISPEECH and one for unlabeled LIBRIVOX.
|
| 32 |
+
|
| 33 |
+
ResNet Acoustic Model ResNets were first introduced in the domain of computer vision (He et al., 2016) and have since been successfully applied to speech recognition (Xiong et al., 2017; Saon et al., 2017; Li et al., 2019b; Wang et al., 2017). ResNets are composed of several blocks of convolutions (in our case only 1-D convolutions), with skip connections. In particular, our ResNet encoder includes 42 convolutional layers each with a kernel size of 3 . The encoder first maps the input to an embedding space of size 1024 using a single convolutional layer with stride 2; 12 blocks of three 1-D convolutions each follow. Each of the convolutional layers is followed by ReLU, dropout and Lay-erNorm (Ba et al., 2016). Both the dropout and the number of hidden units increases with the depth of the network. Specific convolution layers are inserted between ResNet blocks in order to upsample when the hidden representation size increases. Our architecture performs significant pooling with respect to the input ( 16 frames in total, equating to 160 milliseconds) - in addition to the first strided convolutional layer, 3 max pooling layers (each with stride 2) are distributed across the depth of the network (after blocks 3 , 7 and 10). Nearly identical encoder architectures are used in front of CTC and Seq2Seq loss functions; the Seq2Seq encoder has its last bottleneck layer removed and lower dropout in deeper layers. The Seq2Seq self-attention decoder for the ResNet architecture is the same as that used with the TDS convolutional AM described below. To better fit the unlabeled data, we increase the model size by increasing the number of channels in each convolution layer.
|
| 34 |
+
|
| 35 |
+
Time-Depth Separable (TDS) Convolution Acoustic Model We extend the TDS block (Hannun et al., 2019) (which is composed of one 2-D convolution layer and two fully-connected layers with ReLU, LayerNorm and residual connections in between) by increasing the number of channels in the feature maps spanning the two internal fully-connected layers by a factor $F > 1$, so as to increase model capacity. Following (Hannun et al., 2019), 3 sub-sampling layers, i.e. 1-D convolution layers with stride 2, are adopted to ensure an optimal context size for the encoder. For training with only labeled data, we have three groups of TDS blocks with $F = 3$ after each sub-sampling layer; the groups contain 5, 6, and 10 blocks, with 10, 14, and 18 channels, respectively. To increase model capacity for the unlabeled data, the three groups of TDS blocks, with fewer blocks (4, 5, and 6) and $F = 2$ in each, are equipped with many more channels (16, 32, and 48). All convolutions in both TDS and sub-sampling layers have a kernel size of ${21} \times 1$. Identical encoders are used for CTC and Seq2Seq.
|
| 36 |
+
|
| 37 |
+
Our Seq2Seq self-attention decoder performs $R$ rounds of attention through the same $N$ -layers of RNN-GRU each with a hidden unit size of 512 in conjunction with the same efficient key-value attention as in (Hannun et al., 2019; Vaswani et al., 2017):
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
{\mathbf{S}}_{t}^{r} = \operatorname{SOFTMAX}\left( {\frac{1}{\sqrt{d}}{\mathbf{K}}^{\top }{\mathbf{Q}}_{t}^{r - 1}}\right) \mathbf{V}, \tag{1}
|
| 41 |
+
$$
|
| 42 |
+
|
| 43 |
+
where $\left\lbrack {\mathbf{K},\mathbf{V}}\right\rbrack$ is 512-dimensional encoder activation and ${\mathbf{Q}}_{t}^{r} = g\left( {{\mathbf{Q}}_{t - 1}^{r},{\mathbf{Q}}_{t}^{r - 1}}\right) + {\mathbf{S}}_{t}^{r}$ is the query vector at time $t$ in round $r$ , generated by the GRU $g\left( \cdot \right)$ . The initial ${\mathbf{Q}}_{t}^{0}$ is a 512-dimensional token embedding, and the final ${\mathbf{Q}}_{t}^{R}$ is linearly projected to output classes for token classification. In our experiments, $N$ and $R$ are both set to either 2 or 3 based on validation performance. We use dropout in all TDS blocks and GRUs to prevent overfitting.
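
A minimal numpy sketch of the attention summary in Eq. (1) is shown below for a single decoder time step; the GRU query update and the multiple rounds $r$ are omitted, and the shapes are illustrative rather than taken from the implementation.

```python
# Key-value attention summary of Eq. (1) for one decoder step.
import numpy as np

def attention_summary(K, V, q, d=512):
    """K, V: (d, T) encoder keys/values; q: (d,) decoder query at one time step."""
    scores = K.T @ q / np.sqrt(d)          # (T,) attention logits over encoder frames
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()               # softmax over the T frames
    return V @ weights                     # (d,) context vector S_t^r
```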
|
| 44 |
+
|
| 45 |
+
${}^{1}$ https://github.com/google/sentencepiece
|
| 46 |
+
|
| 47 |
+
Transformer-Based Acoustic Model Our transformer-based acoustic models have a small front-end: 3 (LIBRISPEECH AMs) or 6 (LIBRIVOX AM) layers of 1-D convolutions, each of kernel width 3, with respective input and output sizes $\left( 80, D_c \right)$, $\left( D_c/2, D_c \right)$, $\left[ \left( D_c/2, D_c \right), \left( D_c/2, D_c \right), \left( D_c/2, D_c \right) \right]$ (the bracketed layers are used in the 6-layer front-end), $\left( D_c/2, D_{tr} \times 2 \right)$, with $D_c = 1024$ or 2048. Each convolution is followed by a GLU activation function (Dauphin et al., 2017), and the convolutions stride by 2 either in each layer (for the 3 consecutive layers) or every other layer (for 6 layers). The output of the front-end for all models is thus strided by 8 frames (${80}\mathrm{\;{ms}}$). After the front-end, each Transformer block has 4 attention heads followed by a feedforward network (FFN) with one hidden layer and a ReLU non-linearity. There are two configurations of Transformer blocks: a 24-layer configuration (only for the LIBRISPEECH CTC AM) with dimension $D_{tr} = 1024$ for the self-attention and 4096 for the FFN, and a 36-layer configuration with dimension $D_{tr} = 768$ for the self-attention and 3072 for the FFN. Specifically, given a sequence of $T$ vectors of dimension $d$, the input is represented by the matrix ${\mathbf{H}}^{\mathbf{0}} \in {\mathbb{R}}^{d \times T}$, following exactly (Vaswani et al., 2017):
|
| 48 |
+
|
| 49 |
+
$$
|
| 50 |
+
{\mathbf{Z}}^{i} = \operatorname{NORM}\left( {\operatorname{SELFATTENTION}\left( {\mathbf{H}}^{i - 1}\right) + {\mathbf{H}}^{i - 1}}\right) ,
|
| 51 |
+
$$
|
| 52 |
+
|
| 53 |
+
$$
|
| 54 |
+
{\mathbf{H}}^{i} = \operatorname{NORM}\left( {\operatorname{FFN}\left( {\mathbf{Z}}^{i}\right) + {\mathbf{Z}}^{i}}\right) ,
|
| 55 |
+
$$
|
| 56 |
+
|
| 57 |
+
where $\mathbf{Z}$ is the output of the self-attention layer, with a skip connection, and $\mathbf{H}$ is the output of the FFN layer, with a skip connection. As is standard: our NORM is LayerNorm, and self-attention is defined as in Eq. 1, but with $\mathbf{K} = {\mathbf{W}}_{K}\mathbf{H}$ , $\mathbf{Q} = {\mathbf{W}}_{Q}\mathbf{H}$ , and $\mathbf{V} = {\mathbf{W}}_{V}\mathbf{H}$ . For CTC-trained models, the output of the encoder ${\mathbf{H}}^{{L}_{e}}$ is followed by a linear layer to the output classes. For Seq2Seq models, we have an additional decoder, which is a stack of 6 Transformers with encoding dimension 256 and 4 attention heads. The probability distribution of the transcription is factorized as:
|
| 58 |
+
|
| 59 |
+
$$
|
| 60 |
+
p\left( {{y}_{1},\ldots ,{y}_{n}}\right) = \mathop{\prod }\limits_{{i = 1}}^{n}p\left( {{y}_{i} \mid {y}_{0},\ldots ,{y}_{i - 1},{\mathbf{H}}^{{L}_{e}}}\right) , \tag{2}
|
| 61 |
+
$$
|
| 62 |
+
|
| 63 |
+
where ${y}_{0}$ is a special symbol indicating the beginning of the transcription. For all layers (encoder and decoder - when present), we use dropout on the self-attention and layer drop (Fan et al., 2019), dropping entire layers at the FFN level.
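
To make the block structure explicit, here is a minimal numpy sketch of one post-norm encoder block following the two equations above; dropout, layer drop, and multi-head attention are omitted, and the stand-in attention/FFN callables in the usage lines are purely illustrative.

```python
# One post-norm Transformer encoder block: self-attention and FFN sub-layers,
# each followed by a residual connection and LayerNorm.
import numpy as np

def layer_norm(x, eps=1e-5):
    mu, var = x.mean(-1, keepdims=True), x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def transformer_block(H, self_attention, ffn):
    """H: (T, d) activations; self_attention and ffn are callables on (T, d)."""
    Z = layer_norm(self_attention(H) + H)   # Z^i = Norm(SelfAttention(H^{i-1}) + H^{i-1})
    return layer_norm(ffn(Z) + Z)           # H^i = Norm(FFN(Z^i) + Z^i)

# Illustrative usage with stand-in sub-layers (identity attention, ReLU FFN).
T, d = 4, 8
H0 = np.random.randn(T, d)
H1 = transformer_block(H0, self_attention=lambda x: x, ffn=lambda x: np.maximum(x, 0))
```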
|
| 64 |
+
|
| 65 |
+
§ 2.2. LANGUAGE MODELS
|
| 66 |
+
|
| 67 |
+
In this section, we present external language models (LMs) used in beam-search decoding. We consider $n$ -gram LMs as well as convolutional (Dauphin et al., 2017) (GCNN) and Transformer-based LMs. For $n$ -gram and GCNN LMs, we train both word-level and word-piece models, and only a word-level Transformer LM. All word-piece LMs are trained on the set of ${10}\mathrm{k}$ word pieces as outlined in Section 2.1. This ensures that the set of word pieces is consistent across both of the output distributions of the AMs and the candidates the LM scores during beam-search decoding.
|
| 68 |
+
|
| 69 |
+
For the word-piece and word-level GCNN models, we use the GCNN-14B architecture from (Dauphin et al., 2017) with embedding size 1024 and dropout 0.1 . The word-level Transformer LM has the same architecture as (Baevski & Auli, 2019)'s Google Billion Words model; we use 16 attention heads and 20 decoder layers with embedding, input and output dimensions of 1280 and 6144 for the FFN with dropout of 0.1 .
|
| 70 |
+
|
| 71 |
+
§ 3. UNLABELED AUDIO DATASET PREPARATION
|
| 72 |
+
|
| 73 |
+
LIBRIVOX ${}^{2}$ is a large collection of freely-available audio-books. Using tools provided with the LIBRILIGHT dataset (Kahn et al.,2019b), we select 72K hours of read speech from English book listings and run several preprocessing steps. After filtering samples to remove readings of duplicate text and corrupted audio, we remove all audio for which the speaker has overlap with a sample in LIBRISPEECH. We run voice activity detection (VAD) using the wav2letter++ framework (Pratap et al., 2018) on the resulting collection of audio with a CTC model trained on LIBRISPEECH, and segment the result into chunks no greater than 36 s; the resulting audio corpus contains ${53.8}\mathrm{\;K}$ hours of read speech.
|
| 74 |
+
|
| 75 |
+
We then generate pseudo-labels for this audio using the recipe described in (Kahn et al., 2019a). To generate the pseudo-labels, we use a Transformer AM trained on LIB-RISPEECH with CTC loss that achieves a 6.20% WER on dev-other when decoded with a 4-gram word LM - the same model as is listed in Table 3 in the Appendix. We pseudo-label all audio using this AM and run beam-search decoding with a 4-gram word LM described in Appendix A.
|
| 76 |
+
|
| 77 |
+
§ 4. DECODING
|
| 78 |
+
|
| 79 |
+
Decoding is designed to select the best transcription by leveraging both the posteriors of an acoustic model (AM) and the perplexity of a language model (LM). We perform one-pass beam-search decoding with a single external LM. Optionally, to further improve performance, we use stronger NN-based LMs to rescore the beam. Details on our beam-search decoder algorithm and rescoring are given in Appendix B.
|
| 80 |
+
|
| 81 |
+
§ 5. EXPERIMENTS
|
| 82 |
+
|
| 83 |
+
§ 5.1. TECHNICAL DETAILS
|
| 84 |
+
|
| 85 |
+
We use the standard splits for LIBRISPEECH (all the available training data was used for training, and two configurations, clean and other, for validation and test) and the standard LIBRISPEECH LM corpus for LM training. Models are trained using the wav2letter++ toolkit (Pratap et al., 2018); reproduction steps and pre-trained models are open-sourced ${}^{3}$ .
|
| 86 |
+
|
| 87 |
+
2https://librivox.org
|
| 88 |
+
|
| 89 |
+
Acoustic Model Training All hyper-parameters including model architecture are cross-validated on dev-clean and dev-other. Given that we have a large family of models, for simplicity and clarity, we only report hyper-parameters ranges in which we search their best values.
|
| 90 |
+
|
| 91 |
+
Plain SGD with momentum is used to train ResNet and TDS models, and Adagrad (Duchi et al., 2011) to train Transformers. Models are trained on 64 GPUs each with an overall batch size of 256 for ResNet and TDS and 320 for Transformer. With only LIBRISPEECH, all models converged in under a week; with pseudo-labels from LIBRIVOX, training required 2-3 weeks. The initial learning rate for ResNet models is chosen from $\left\lbrack {{0.05},{0.5}}\right\rbrack$ , while for TDS and Transformer models, the range decreases to [0.01, 0.03]. Specifically, for Transformers, we apply a linear learning rate warm-up schedule for either ${32}\mathrm{k}$ or ${64}\mathrm{k}$ updates. For fully-supervised training with LIBRISPEECH, the learning rate is halved every 90 epochs for Transformer models, and 150 epochs for ResNet and TDS models. With LIB-RIVOX, however, we only halve the learning rate once in the middle of the training. For TDS and ResNet models, we use momentum in the range $\left\lbrack {{0.1},{0.6}}\right\rbrack$ . With respect to regularization, we use 0.2 dropout everywhere (front-end, encoder, decoder), and layer drop for all Transformer blocks. Dropout in TDS blocks and ResNet convolutions is in the range $\left\lbrack {{0.05},{0.2}}\right\rbrack$ and increases with depth. For Seq2Seq training, we run 3 epochs of attention-window pretraining, and use ${99}\%$ of teacher forcing ( $1\%$ of uniform output sampling). We also use ${10}\%$ dropout in the decoder for TDS (and 0.1 dropout and 0.1 layer drop in the decoder for Transformers), together with $5\%$ label smoothing, $1\%$ random sampling and 1% word piece sampling. All models use SpecAugment (Park et al., 2019) with an LD policy.
|
| 92 |
+
|
| 93 |
+
Language Model Training All LMs in this section are trained on the standard LIBRISPEECH LM corpus. All word-level LMs use the same vocabulary for training. $n$ -gram LMs are trained with the KenLM toolkit (Heafield, 2011), while the GCNN and Transformer LMs are trained with fairseq ${}^{4}$ toolkit (Ott et al.,2019). The word-level 4-gram and GCNN are trained in the same way as (Likhomanenko et al., 2019). We also train a 6-gram word-piece LM, which has a similar context size to a word-level 4-gram LM, and prunes 5-grams appearing once and 6-gram appearing twice or fewer. The word-piece and word-level GCNN models are trained with Nesterov accelerated gradient descent (Nes-terov, 1983) on 8 GPUs for 22 epochs with a step-wise learning rate schedule starting from 1 and decreasing by a factor of 5 when the loss is on the plateau. Gradient clipping and weight normalization are used following (Dauphin et al., 2017). The word-level Transformer LM is trained with Nesterov accelerated gradient descent on 128 GPUs for 100 epochs with an inverse square root learning rate schedule. During the first ${16}\mathrm{k}$ iterations, a warm-up schedule that linearly increases the learning rate from 1e-7 to 1 is used. Word-level perplexities of all LM variants are listed in Table 1.
|
| 94 |
+
|
| 95 |
+
Table 1. Word-level perplexities of LMs on LIBRISPEECH. Perplexity is computed without unknown words.
|
| 96 |
+
|
| 97 |
+
| LANGUAGE MODEL | DEV-CLEAN | DEV-OTHER |
|---|---|---|
| WORD 4-GRAM | 148.0 | 136.6 |
| WORD 4-GRAM, NO LIBRIVOX OVERLAP | 152.8 | 140.0 |
| WP 6-GRAM | 145.4 | 133.7 |
| WP GCNN (188M) | 61.7 | 61.9 |
| WORD GCNN (319M) | 57.0 | 57.9 |
| WORD TRANSF. (562M) | 48.2 | 50.2 |
|
| 120 |
+
|
| 121 |
+
§ 5.2. RESULTS
|
| 122 |
+
|
| 123 |
+
LIBRISPEECH Results All our results for LIB-RISPEECH are listed in the top of Table 3 in Appendix. We present results under three scenarios: without any decoding nor external LM (greedy decoding), with one-pass decoding only, and with decoding followed by beam rescoring. The decoding beam size is usually 50 and 500 for Seq2Seq and CTC respectively. We use a beam size of 250 for CTC decoding with a GCNN LM. We train strong baselines on simple ResNet architectures and improve the TDS models significantly compared to past results (Hannun et al., 2019). These convolutional models outperform end-to-end biLSTM models from (Lüscher et al., 2019). Our best acoustic models are Transformers-based and reach 6.98% without any decoding on test-other and 5.17% with decoding and rescoring, demonstrating that end-to-end training can perform as well as traditional bootstrapped systems.
|
| 124 |
+
|
| 125 |
+
LIBRIVOX Results Treating all pseudo-labels as ground truth, we train acoustic models on a combination of the 960 hours of labeled audio from LIBRISPEECH and the pseudo-labeled audio from LIBRIVOX, where batches are uniformly sampled (without weighting) from the two datasets. Transformer AMs with both CTC and Seq2Seq losses were trained for 5 days on this combined dataset, achieving WERs of ${4.88}\%$ on test-other and ${2.28}\%$ on test-clean without decoding or use of an LM, which is state-of-the-art even amongst pipelines that use an LM. Results with decoding/rescoring are shown in Table 2, where we reach 2.09% and 4.11% on test-clean and test-other, respectively, further improving the state-of-the-art. From the ablation studies in Appendices C and D, we draw several conclusions: i) increasing the amount of pseudo-labels strictly improves performance; ii) models trained on LIBRIVOX pseudo-labels alone outperform models trained on LIBRISPEECH; and iii) a large collection of pseudo-labeled audio helps the AM learn better acoustic representations and absorb LM knowledge, so that it no longer benefits much from decoding with an external LM.
|
| 126 |
+
|
| 127 |
+
${}^{3}$ https://github.com/facebookresearch/ wav2letter
|
| 128 |
+
|
| 129 |
+
${}^{4}$ https://github.com/pytorch/fairseq
|
| 130 |
+
|
| 131 |
+
Table 2. WERs on LIBRISPEECH development and test sets. Our best results are shown in the bottom section (with the number of parameters), and are both trained with Seq2Seq loss. Full results can be found in Appendix Table 3.
|
| 132 |
+
|
| 133 |
+
max width=
|
| 134 |
+
|
| 135 |
+
2|c|AM 2|c|LM 2|c|DEV 2|c|TEST
|
| 136 |
+
|
| 137 |
+
1-8
|
| 138 |
+
TYPE LEXICON TYPE LEXICON CLEAN OTHER CLEAN OTHER
|
| 139 |
+
|
| 140 |
+
1-8
|
| 141 |
+
LAS (Park et al., 2019) 16k WP - - X X 2.8 6.8
|
| 142 |
+
|
| 143 |
+
1-8
|
| 144 |
+
Decoding 16k WP RNN 16k WP X X 2.5 5.8
|
| 145 |
+
|
| 146 |
+
1-8
|
| 147 |
+
HMM/BILSTM ${12}\mathrm{K}\;\mathrm{{CDP}}$ 4GRAM+LSTM WORD 2.2 5.1 2.6 5.5
|
| 148 |
+
|
| 149 |
+
1-8
|
| 150 |
+
+ TRANSF. RESCORING (Lüscher et al., 2019) ${12}\mathrm{K}\;\mathrm{{CDP}}$ +TRANSF. WORD 1.9 4.5 2.3 5.0
|
| 151 |
+
|
| 152 |
+
1-8
|
| 153 |
+
TRANSFORMERS (Karita et al., 2019) BPE RNN WORD 2.2 5.6 2.6 5.7
|
| 154 |
+
|
| 155 |
+
1-8
|
| 156 |
+
CONV. TRANSF. (HAN ET AL., 2019) 6K TRIPHONES 3GRAM, RESCORED +TDNN + LSTM WORD 1.8 5.8 2.2 5.7
|
| 157 |
+
|
| 158 |
+
1-8
|
| 159 |
+
CONV. TRANSF. CHENONES 4GRAM WORD X X 2.60 5.59
|
| 160 |
+
|
| 161 |
+
1-8
|
| 162 |
+
+ TRANSF. RESCORING (WANG ET AL., 2019) CHENONES TRANSF. WORD X X 2.26 4.85
|
| 163 |
+
|
| 164 |
+
1-8
|
| 165 |
+
TRANSF. (270M) – LIBRISPEECH 10K WP - - 2.54 6.67 2.89 6.98
|
| 166 |
+
|
| 167 |
+
1-8
|
| 168 |
+
+DECODING/RESCORING 10K WP GCNN + TRANSF. WORD 2.07 4.79 2.37 5.17
|
| 169 |
+
|
| 170 |
+
1-8
|
| 171 |
+
Transf. (296M) – LibriVox 10K WP - - 2.12 4.59 2.28 4.88
|
| 172 |
+
|
| 173 |
+
1-8
|
| 174 |
+
+DECODING/RESCORING 10K WP GCNN + TRANSF. WORD 2.00 3.65 2.09 4.11
|
| 175 |
+
|
| 176 |
+
1-8
|
| 177 |
+
|
| 178 |
+
§ 6. RELATED WORK
|
| 179 |
+
|
| 180 |
+
Deep neural networks were reintroduced in ASR with HMMs (Hinton et al., 2012), and many state-of-the-art models still rely on force alignment (Han et al., 2017; Lüscher et al., 2019; Karita et al., 2019). Nonetheless, there have been increasingly competitive end-to-end results trained with CTC (Graves & Jaitly, 2014; Amodei et al., 2016), ASG (Collobert et al., 2016; Zeghidour et al., 2018), LF-MMI (Hadian et al., 2018), sequence-to-sequence (Chan et al., 2016; Chiu et al., 2018a), transduction (Prabhavalkar et al., 2017; He et al., 2019), and differentiable decoding (Collobert et al., 2019a). Listen, Attend and Spell (Chan et al., 2016) is a family of end-to-end models based on biLSTMs which achieved state-of-the-art results with improved regularization through data augmentation (Park et al., 2019); we consequently use SpecAugment in all of our experiments. Seq2Seq models are not limited to RNNs; time-depth separable convolutions also give strong results (Hannun et al., 2019). Our best models are Transformer-based, as in (Lüscher et al., 2019; Karita et al., 2019), which give good results in Seq2Seq settings even without external LMs (Mohamed et al., 2019). In ASR, semi-supervised pseudo-label-style self-training has been explored in end-to-end settings in (Soltau et al., 2016; Li et al., 2019a; Kahn et al., 2019a) for both low-resource (Vesely et al., 2017; Cui et al., 2017) and large-scale (Parthasarathi & Strom, 2019) setups.
|
| 181 |
+
|
| 182 |
+
§ 7. DISCUSSION
|
| 183 |
+
|
| 184 |
+
We presented state-of-the-art results on LIBRISPEECH with end-to-end methods. While allowing for lexicon-free decoding, the ${10}\mathrm{k}$ word-piece tokens used during training limit the amount of striding we can use in our model architectures and can be replaced by AMs outputting words with an arbitrary lexicon (Collobert et al., 2019b). As relative WER gains due to language models shrink (from $\approx {20}\%$ relative-WER without LIBRIVOX to $\approx {10}\%$ with, for GCNN decoding), and as we showed that AMs learn LM-level information, differentiable decoding (Collobert et al., 2019a) is a possible avenue for single-stage AM + LM joint training.
|
| 185 |
+
|
| 186 |
+
We show the effectiveness of a simple pipeline that does not require many training steps. In light of our semi-supervised results without decoding or an LM, we think Seq2Seq/CTC losses, transducers, and differentiable decoding are viable methods to achieve end-to-end state-of-the-art results, without external LMs, through semi-supervised learning.
|
| 187 |
+
|
| 188 |
+
§ 8. ACKNOWLEDGEMENTS
|
| 189 |
+
|
| 190 |
+
We would like to thank Steven Garan for audio recordings of shuffled sentences from LIBRISPEECH dev-other.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/RlVTYWhsky7/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,371 @@
| 1 |
+
# Self-supervised Pitch Detection by Inverse Audio Synthesis
|
| 2 |
+
|
| 3 |
+
Jesse Engel ${}^{1}$ Rigel Swavely ${}^{1}$ Adam Roberts ${}^{1}$ Lamtharn (Hanoi) Hantrakul ${}^{1}$ Curtis Hawthorne ${}^{1}$
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
Audio scene understanding, parsing sound into a hierarchy of meaningful parts, is an open problem in representation learning. Sound is a particularly challenging domain due to its high dimensionality, sequential dependencies and hierarchical structure. Differentiable Digital Signal Processing (DDSP) greatly simplifies the forward problem of generating audio by introducing differentiable synthesizer and effects modules that combine strong signal priors with end-to-end learning. Here, we focus on the inverse problem, inferring synthesis parameters to approximate an audio scene. We demonstrate that DDSP modules can enable a new approach to self-supervision, generating synthetic audio with differentiable synthesizers and training feature extractor networks to infer the synthesis parameters. By building a hierarchy from sinusoidal to harmonic representations, we show that it is possible to use such an inverse modeling approach to disentangle pitch from timbre, an important task in audio scene understanding.
|
| 8 |
+
|
| 9 |
+
## 1. Introduction
|
| 10 |
+
|
| 11 |
+
While audio scene analysis is typically associated with source separation (Brown & Cooke, 1994), it also encompasses many sound analysis tasks including pitch detection (Kim et al., 2018; Gfeller et al., 2020), phoneme recognition (Koutras et al., 1999), automatic speech recognition (Coy & Barker, 2007), sound localization (Lyon, 1983), and polyphonic instrument transcription (Hawthorne et al., 2018). Since many sources exhibit harmonic resonance, such as voices and vibrating objects (Smith, 2010), disentangling pitch and timbre is an important step in parsing an audio scene (Moerel et al., 2012; Theunissen & Elie, 2014).
|
| 12 |
+
|
| 13 |
+
Inverse graphics, where the parameters of a rendering engine are inferred from an image, is an appealing approach to parsing visual scenes. Unlike black-box classifiers, the approach is object-oriented, interpretable by design, and can generate high-quality images with modern renderers (Wu et al., 2017a; Yao et al., 2018). In audio, these inverse approaches have been limited to the domain of individual sounds from unrealistic commercial synthesizers due to the lack of a realistic, interpretable and differentiable audio rendering engine (Huang et al., 2014; Hoffman & Cook, 2006; Esling et al., 2019).
|
| 14 |
+
|
| 15 |
+
[Figure 1 graphic: hierarchical scene decomposition of audio into sinusoidal and then harmonic components via differentiable audio synthesis, with reconstructions at each level.]
|
| 16 |
+
|
| 17 |
+
Figure 1. Diagram of inverse audio synthesis. A feature extraction pipeline $\left( {{F}_{\text{sin }}^{\theta },{F}_{\text{harm }}^{\theta }}\right)$ hierarchically decomposes audio into low-level sinusoidal components (frequency, amplitude), which are combined to extract harmonic components ( ${f}_{0}$ , amplitude, harmonic distribution). These are the only modules that have learnable parameters $\theta$ . An additional filtered noise component is not shown. These parameters are fed to differentiable audio synthesizers $\left( {{S}_{\text{sin }},{S}_{\text{harm }}}\right)$ and then to reconstruction losses. An additional consistency loss is enforced on the predicted and resynthesized sinusoidal components. See Section 3 for details.
|
| 18 |
+
|
| 19 |
+
Most realistic generative models of audio require large autoregressive models that are slow, non-differentiable and cannot generate samples mid-training. (Dieleman et al., 2018; Dhariwal et al., 2020; Hawthorne et al., 2019; Engel et al., 2017; Wang et al., 2017). Differentiable Digital Signal Processing (DDSP) (Engel et al., 2020) overcomes these challenges by combining neural networks with differentiable synthesizers to efficiently render realistic audio during training.
|
| 20 |
+
|
| 21 |
+
---
|
| 22 |
+
|
| 23 |
+
${}^{1}$ Google Research, Brain Team. Correspondence to: Jesse Engel <jesseengel@google.com>.
|
| 24 |
+
|
| 25 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 26 |
+
|
| 27 |
+
---
|
| 28 |
+
|
| 29 |
+
Finally, self-supervised techniques typically rely on intrinsic properties of data, such as causality (Oord et al., 2018) or identity-invariance to augmentation (Zhai et al., 2019), to automatically generate supervised labels from datasets. Since our DDSP audio renderer is fully interpretable, we can explore a different form of self-supervision where a fairly generic random process generates both synthetic audio and supervised labels for training. We combine this self-supervision with unsupervised reconstruction losses to adapt to new datasets.
|
| 30 |
+
|
| 31 |
+
The key contributions of this paper include:
|
| 32 |
+
|
| 33 |
+
- DDSP-inv: An inverse model of sound using DDSP, capable of factorizing pitch and timbre, with comparable pitch detection to SOTA supervised and self-supervised discriminative methods.
|
| 34 |
+
|
| 35 |
+
- Self-supervised training procedure to train feature extractor networks to infer synthesis parameters from differentiably-rendered synthetic audio.
|
| 36 |
+
|
| 37 |
+
- Sinusoidal Synthesizer: A new DDSP module capable of generating a wide range of audio including inharmonic and polyphonic signals.
|
| 38 |
+
|
| 39 |
+
- Sinusoidal Consistency Loss: A loss function to evaluate the similarity of two arbitrarily-ordered sets of sinusoids and also perform heuristic pitch extraction.
|
| 40 |
+
|
| 41 |
+
Audio samples are provided in the online supplement at https://goo.gl/magenta/ddsp-inv and code will be available after publication at https://github.com/magenta/ddsp.
|
| 42 |
+
|
| 43 |
+
## 2. Related Work
|
| 44 |
+
|
| 45 |
+
Differentiable Rendering: Differentiable rendering is a valuable component of inverse graphics models (Loubet et al., 2019; Li et al., 2018b). A natural scene can be "derendered" into a structured object-wise representation via a differentiable shape renderer (Yao et al., 2018) or an explicit scene description that can be recomposed with a graphics engine (Wu et al., 2017b). This literature motivates this work, in which we use DDSP as a differentiable audio renderer.
|
| 46 |
+
|
| 47 |
+
Sinusoidal Modeling Synthesis: The techniques developed by Serra & Smith (1990) model sound as a combination of additive sinusoids and a subtractive filtered noise source. Despite being parametric and using heuristics to infer synthesis parameters, it is a highly expressive model of sound with diverse applications and is even used as a general purpose audio codec in MPEG-4 (Tellman et al., 1995; Klapuri et al., 2000; Purnhagen & Meine, 2000). In this work, we train neural networks to do this task with end-to-end learning.
|
| 48 |
+
|
| 49 |
+
Pitch Detection: Estimating the fundamental frequency $\left( {f}_{0}\right)$ of a monophonic audio signal, or pitch detection, is a key task to audio scene understanding. We compare against several state-of-the-art baselines in this work. SWIPE (Camacho & Harris, 2008) performs spectrum template matching between the signal and a sawtooth waveform. CREPE (Kim et al., 2018) is a deep convolutional model classifying pitch labels directly from the waveform. SPICE (Gfeller et al., 2020) removes the need for labels by employing self-supervision to predict the frequency shifts applied to training data. While these discriminative methods are trained specifically to detect pitch, DDSP-inv learns to detect ${f}_{0}$ as a side-effect of disentangling timbre and pitch in a signal.
|
| 50 |
+
|
| 51 |
+
## 3. Model Architecture
|
| 52 |
+
|
| 53 |
+
A diagram and description of our model hierarchy is shown in Figure 1 (DDSP-inv, for inverse modeling with DDSP). We describe each component below.
|
| 54 |
+
|
| 55 |
+
### 3.1. Differentiable Audio Synthesizers
|
| 56 |
+
|
| 57 |
+
Inspired by the work of Serra & Smith (1990), we model sound as a flexible combination of time-dependent sinusoidal oscillators and filtered noise. From the sinusoids we can infer a corresponding harmonic oscillator with a fundamental frequency. Except for the new sinusoidal synthesizer module, all other modules are identical to the DDSP library introduced in Engel et al. (2020). While other available DDSP modules cover aspects such as room reverberation, we do not consider them here since they are not significant factors in the benchmark datasets.
|
| 58 |
+
|
| 59 |
+
Sinusoidal Synthesizer $\left( {S}_{\text{sin }}\right)$ : We start by creating a new DDSP module that consists of a bank of $K$ sinusoids with individually varying amplitudes ${A}_{k}$ and frequencies ${f}_{k}$ . These are flexibly specified by the output of a neural network ${F}_{\text{sin }}^{\theta }$ with parameters $\theta$ over $n$ discrete time steps:
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
x\left( n\right) = \mathop{\sum }\limits_{{k = 0}}^{{K - 1}}{A}_{k}\left( n\right) \sin \left( {{\phi }_{k}\left( n\right) }\right) , \tag{1}
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
where ${\phi }_{k}\left( n\right)$ is its instantaneous phase obtained by cumulative summation of the instantaneous frequency ${f}_{k}\left( n\right)$ :
|
| 66 |
+
|
| 67 |
+
$$
|
| 68 |
+
{\phi }_{k}\left( n\right) = {2\pi }\mathop{\sum }\limits_{{m = 0}}^{n}{f}_{k}\left( m\right) , \tag{2}
|
| 69 |
+
$$
|
| 70 |
+
|
| 71 |
+
The sinusoidal encoder ${F}_{sin}^{\theta }$ outputs amplitudes ${A}_{k}$ and frequencies ${f}_{k}$ every ${32}\mathrm{\;{ms}}$ , which are upsampled to audio rate ( ${16}\mathrm{{kHz}}$ ) using overlapping Hann windows and linear interpolation, respectively. We highlight a key difference between this module and a Short-Time Fourier Transform (STFT): frequencies of each sinusoidal component are freely predicted by the network each frame, instead of being locked to a fixed linear spacing determined by the FFT window size. This avoids distortion in periodic signals due to phase mismatch between adjacent frames, and spectral leakage between neighboring frequency bins (Engel et al., 2020).
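The NumPy sketch below illustrates Equations 1-2 together with the frame-rate-to-audio-rate upsampling just described. It is a simplified stand-in for the DDSP sinusoidal synthesizer: for brevity both amplitudes and frequencies are linearly interpolated, whereas the paper uses overlapping Hann windows for amplitudes, and the 31.25 Hz frame rate corresponds to the 32 ms hop.

```python
# Sketch of additive sinusoidal synthesis (Eqs. 1-2); simplified, not the DDSP code.
import numpy as np

def sinusoidal_synth(amps, freqs_hz, frame_rate=31.25, sample_rate=16000):
    """amps, freqs_hz: [n_frames, n_sinusoids] frame-rate controls."""
    n_frames, n_sin = amps.shape
    n_samples = int(n_frames * sample_rate / frame_rate)
    frame_times = np.arange(n_frames) / frame_rate
    sample_times = np.arange(n_samples) / sample_rate
    # Upsample frame-rate controls to audio rate (linear interpolation here).
    A = np.stack([np.interp(sample_times, frame_times, amps[:, k])
                  for k in range(n_sin)], axis=1)
    f = np.stack([np.interp(sample_times, frame_times, freqs_hz[:, k])
                  for k in range(n_sin)], axis=1)
    # Eq. 2: instantaneous phase is 2*pi times the cumulative sum of
    # frequency expressed in cycles per sample.
    phase = 2 * np.pi * np.cumsum(f / sample_rate, axis=0)
    # Eq. 1: sum of amplitude-modulated sinusoids.
    return np.sum(A * np.sin(phase), axis=1)

# Example: a single 440 Hz partial at constant amplitude for about one second.
audio = sinusoidal_synth(np.full((32, 1), 0.5), np.full((32, 1), 440.0))
```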
|
| 72 |
+
|
| 73 |
+
[Figure 2 graphic: panels showing ground-truth vs. predicted pitch, harmonic amplitude, harmonic distribution, decoded sinusoids, and the original and reconstructed spectrograms.]
|
| 74 |
+
|
| 75 |
+
Figure 2. Hierarchical decomposition of a sample from the URMP dataset. Left: spectrogram of audio and sinusoidal traces from the sinusoidal encoder ${F}_{\text{sin }}^{\theta }$ . Center: harmonic components including fundamental frequency, amplitude, and distribution of the harmonics from the harmonic encoder ${F}_{harm}^{\theta }$ . Right: sinusoids decoded from harmonic components with the harmonic synthesizer ${S}_{harm}$ and spectrogram of the final reconstructed audio using the sinusoidal synthesizer ${S}_{\text{sin }}$ .
|
| 76 |
+
|
| 77 |
+
Harmonic Synthesizer $\left( {S}_{\text{harm }}\right)$ : For a harmonic oscillator, the harmonic encoder ${F}_{\text{harm }}^{\theta }$ , predicts a single fundamental frequency ${f}_{0}$ , amplitude $A$ , and harmonic distribution ${c}_{k}$ , from the incoming sinusoids. On generation, all the output frequencies are constrained to be harmonic (integer) multiples of a fundamental frequency (pitch),
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
{f}_{k}\left( n\right) = k{f}_{0}\left( n\right) \tag{3}
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
Individual amplitudes are deterministically retrieved by multiplying the total amplitude, $A\left( n\right)$ , with the normalized distribution over harmonic amplitudes, ${c}_{k}\left( n\right)$ :
|
| 84 |
+
|
| 85 |
+
$$
|
| 86 |
+
{A}_{k}\left( n\right) = A\left( n\right) {c}_{k}\left( n\right) . \tag{4}
|
| 87 |
+
$$
|
| 88 |
+
|
| 89 |
+
where $\mathop{\sum }\limits_{{k = 0}}^{{K - 1}}{c}_{k}\left( n\right) = 1$ and ${c}_{k}\left( n\right) \geq 0$ .
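A minimal sketch of Equations 3-4, assuming frame-rate inputs; the resulting per-harmonic amplitudes and frequencies can be passed to the sinusoidal synthesizer sketch above.

```python
# Sketch of the harmonic synthesizer controls (Eqs. 3-4); shapes are assumptions.
import numpy as np

def harmonic_controls(f0_hz, amplitude, harm_dist):
    """f0_hz, amplitude: [n_frames]; harm_dist: [n_frames, n_harmonics]."""
    harm_dist = harm_dist / np.sum(harm_dist, axis=1, keepdims=True)  # sum_k c_k = 1
    k = np.arange(1, harm_dist.shape[1] + 1)
    freqs = f0_hz[:, None] * k[None, :]      # Eq. 3: f_k = k * f0
    amps = amplitude[:, None] * harm_dist    # Eq. 4: A_k = A * c_k
    return amps, freqs

amps, freqs = harmonic_controls(np.full(32, 220.0), np.full(32, 0.5),
                                np.ones((32, 16)))
```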
|
| 90 |
+
|
| 91 |
+
Filtered Noise $\left( {S}_{\text{noise }}\right)$ : As introduced in (Engel et al., 2020), we can model the non-periodic audio components with a linear time-varying filtered noise source. Noise is generated from a uniform distribution. We linearly tile frequency space with 65 bands whose amplitude is modulated each frame by the outputs of the sinusoidal encoder. To ease optimization, we reuse the same filtered noise distribution for both the sinusoidal reconstructions and the harmonic reconstructions.
|
| 92 |
+
|
| 93 |
+
Nonlinearities: For all amplitudes and harmonic distribution components, we constrain network outputs to be positive with an exponentiated sigmoid nonlinearity, ${2\sigma }{\left( x\right) }^{\log {10}} + {10}^{-7}$ , that scales the output to lie between 1e-7 and 2. We constrain sinusoidal frequency predictions between ${20}\mathrm{{Hz}}$ and ${8000}\mathrm{\;{Hz}}$ , and harmonic fundamental frequency predictions between ${20}\mathrm{{Hz}}$ and ${1200}\mathrm{{Hz}}$ . We logarithmically tile 64 bins across this range, then pass network outputs for each frequency component through a softmax nonlinearity across these bins, and take a frequency-bin-weighted sum over the resulting distribution.
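The sketch below restates these output nonlinearities in NumPy; the exponent and bounds follow the text, while names and shapes are assumptions rather than the exact DDSP implementation.

```python
# Sketch of the output nonlinearities described above.
import numpy as np

def exp_sigmoid(x, exponent=np.log(10.0), max_value=2.0, threshold=1e-7):
    # 2 * sigmoid(x)^log(10) + 1e-7, so outputs lie in (1e-7, 2).
    return max_value * (1.0 / (1.0 + np.exp(-x))) ** exponent + threshold

def frequency_from_logits(logits, f_min=20.0, f_max=8000.0):
    """logits: [..., 64] network outputs -> one frequency value per element."""
    bin_freqs = np.geomspace(f_min, f_max, logits.shape[-1])  # log-spaced bins
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)                # softmax over bins
    return np.sum(probs * bin_freqs, axis=-1)                 # bin-weighted sum
```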
|
| 94 |
+
|
| 95 |
+
### 3.2. Feature Extractors
|
| 96 |
+
|
| 97 |
+
Sinusoidal Encoder $\left( {F}_{\text{sin }}^{\theta }\right)$ : The network converts audio $x\left( n\right)$ to sinusoidal amplitudes ${A}_{k}$ , sinusoidal frequencies ${f}_{k}$ , and filtered noise magnitudes. Audio is first transformed to a logmel spectrogram (FFT size=2048, hop size=512, mel bins=229), and then passed through a standard implementation of a ResNet-38 with layer normalization, bottleneck layers, and ReLU nonlinearities (He et al., 2016a;b; Ba et al., 2016). Through four stages, the number of filters increases from 64 to 1024, with the frequency dimension downsampling by a factor of two after each stage. A final linear layer feeds the module specific nonlinearities described in Section 3.1.
|
| 98 |
+
|
| 99 |
+
Harmonic Encoder $\left( {F}_{\text{harm }}^{\theta }\right)$ : This network converts the sinusoidal synthesizer components from ${F}_{\text{sin }}^{\theta }$ (amplitudes ${A}_{k}\left( n\right)$ and frequencies ${f}_{k}\left( n\right)$ for each sinusoid) into the harmonic synthesizer components of fundamental frequency ${f}_{0}\left( n\right)$ , amplitude $A\left( n\right)$ , and harmonic distribution ${c}_{k}\left( n\right)$ . Sinusoidal amplitudes and frequencies are first converted to a log scale and fed into a simple network of two fully-connected layers (256 dims), a single gated-recurrent unit layer (512 dims), and two more fully-connected layers (256 dims). Layer normalization and leaky ReLU nonlinearities are used throughout. A final linear layer feeds the module specific nonlinearities described in Section 3.1.
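For concreteness, a rough tf.keras rendering of the harmonic encoder stack described above; layer sizes follow the text, but the output head and other details are assumptions and may differ from the released DDSP code.

```python
# Hypothetical sketch of the harmonic encoder: 2x FC(256) -> GRU(512) -> 2x FC(256),
# with layer normalization and leaky ReLU throughout, then a final linear layer.
import tensorflow as tf

def dense_block(units=256):
    return [tf.keras.layers.Dense(units),
            tf.keras.layers.LayerNormalization(),
            tf.keras.layers.LeakyReLU()]

def build_harmonic_encoder(n_outputs):
    layers = dense_block() + dense_block()
    layers += [tf.keras.layers.GRU(512, return_sequences=True)]
    layers += dense_block() + dense_block()
    layers += [tf.keras.layers.Dense(n_outputs)]  # feeds the Section 3.1 nonlinearities
    return tf.keras.Sequential(layers)

# Input: log-scaled sinusoidal amplitudes and frequencies per frame,
# e.g. shape [batch, n_frames, 2 * n_sinusoids]; output sizes are illustrative.
encoder = build_harmonic_encoder(n_outputs=64 + 1 + 60)  # f0 bins, amplitude, harmonics
```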
|
| 100 |
+
|
| 101 |
+
### 3.3. Loss Functions
|
| 102 |
+
|
| 103 |
+
We train our network with a combination of an audio reconstruction loss, a sinusoidal consistency loss, and a self-supervision loss. We only add the self-supervision loss for synthetic data:
|
| 104 |
+
|
| 105 |
+
$$
|
| 106 |
+
\mathcal{L} = {\mathcal{L}}_{\text{recon }} + {\alpha }_{\text{sin }}{\mathcal{L}}_{\text{sin }} + {\mathcal{L}}_{\text{ss }} \tag{5}
|
| 107 |
+
$$
|
| 108 |
+
|
| 109 |
+
where ${\alpha }_{sin}$ is a weight of 0.1 to empirically match the order of magnitude of the other losses.
|
| 110 |
+
|
| 111 |
+
Reconstruction Loss: Since each level of the hierarchical autoencoder can synthesize audio, we can tie the learned representations back to the ground truth audio at each stage with an audio reconstruction loss. Direct waveform comparisons focus too much on absolute phase differences that are less perceptually relevant (Engel et al., 2019). We instead compare spectrograms and utilize the fact that sinusoidal synthesis maintains phase coherence by design. We balance temporal and frequency resolution by imposing a spectrogram loss at several different FFT sizes $\left( {i \in \{ {64},{128},{256},{512},{1024},{2048}\} }\right)$ (Wang et al.,2020; Engel et al., 2020):
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
{\mathcal{L}}_{\text{recon }} = \mathop{\sum }\limits_{i}{\begin{Vmatrix}{s}_{i} - {\widehat{s}}_{i}\end{Vmatrix}}_{1} + {\begin{Vmatrix}\log {s}_{i} - \log {\widehat{s}}_{i}\end{Vmatrix}}_{1}. \tag{6}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
where ${s}_{i}$ is the magnitude spectrogram of the target audio at a given FFT size, and $\widehat{{s}_{i}}$ is the spectrogram of the reconstructed audio. The total reconstruction loss is a sum of the sinusoidal and harmonic reconstruction losses $\left( {{\mathcal{L}}_{\text{recon }} = {\mathcal{L}}_{\text{recon }}^{\text{sin }} + {\mathcal{L}}_{\text{recon }}^{\text{harm }}}\right) .$
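A self-contained NumPy sketch of the multi-scale spectrogram loss (Eq. 6). It uses a per-element mean rather than a raw L1 sum (which only rescales the loss) and a plain Hann-windowed FFT in place of the library STFT used in practice.

```python
# Sketch of the multi-scale spectrogram reconstruction loss (Eq. 6).
import numpy as np

def magnitude_spectrogram(x, fft_size):
    hop = fft_size // 4
    window = np.hanning(fft_size)
    frames = [x[i:i + fft_size] * window
              for i in range(0, len(x) - fft_size + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=-1))

def multiscale_spectral_loss(target, recon,
                             fft_sizes=(64, 128, 256, 512, 1024, 2048), eps=1e-6):
    loss = 0.0
    for n in fft_sizes:
        s, s_hat = magnitude_spectrogram(target, n), magnitude_spectrogram(recon, n)
        loss += np.mean(np.abs(s - s_hat))                              # linear term
        loss += np.mean(np.abs(np.log(s + eps) - np.log(s_hat + eps)))  # log term
    return loss
```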
|
| 118 |
+
|
| 119 |
+
Sinusoidal Consistency Loss: To compare the sets of sinusoids on encoding $\left( {F}_{\text{sin }}^{\theta }\right)$ and decoding $\left( {S}_{\text{harm }}\right)$ , we need a permutation-invariant loss that can compare sets of different sizes. We took inspiration from the pitch detection literature, implementing a differentiable version of the Two-Way Mismatch (TWM) algorithm (Maher & Beauchamp, 1994).
|
| 120 |
+
|
| 121 |
+
The TWM algorithm estimates the distance between two sets of sinusoid frequencies $\left( {{f}^{a},{f}^{b}}\right)$ by the frequency distance from each element of one set to its closest neighbor in the other set. To prevent a trivial solution in which one set densely tiles frequency space, the distance is calculated in both directions. This is also called the Chamfer Distance in the image recognition literature (Barrow et al., 1977).
|
| 122 |
+
|
| 123 |
+
$$
|
| 124 |
+
{D}_{twm} = \mathop{\sum }\limits_{k}\mathop{\min }\limits_{j}\left( \left| {{f}_{k}^{a} - {f}_{j}^{b}}\right| \right) + \mathop{\sum }\limits_{j}\mathop{\min }\limits_{k}\left( \left| {{f}_{k}^{a} - {f}_{j}^{b}}\right| \right) \tag{7}
|
| 125 |
+
$$
|
| 126 |
+
|
| 127 |
+
We approximate this procedure as a differentiable loss between two arbitrary sets of sinusoids $\left( {{A}_{k}^{a},{f}_{k}^{a},{A}_{j}^{b},{f}_{j}^{b}}\right)$ , with $K$ and $J$ sinusoids respectively, by creating a Gaussian kernel density estimate (KDE) of $P\left( {{f}_{k}^{a} \mid {A}^{b},{f}^{b}}\right)$ and $P\left( {{f}_{j}^{b} \mid {A}^{a},{f}^{a}}\right)$ :
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
p\left( {{f}_{k}^{a} \mid {A}^{b},{f}^{b}}\right) = \mathop{\sum }\limits_{j}\frac{{A}_{j}^{b}}{\sigma \sqrt{2\pi }}\exp \frac{-{\left( {f}_{k}^{a} - {f}_{j}^{b}\right) }^{2}}{2{\sigma }^{2}} \tag{8}
|
| 131 |
+
$$
|
| 132 |
+
|
| 133 |
+
where ${A}_{j}$ are the frame-wise normalized amplitudes, and ${f}_{j}$ are the component frequencies in units of semitones (logarithmically spaced). The standard deviation of the KDE gaussians, $\sigma$ , is a hyperparameter we set to 0.1 semitones.
|
| 134 |
+
|
| 135 |
+
We then get the loss as a weighted average of the two-way negative log-likelihood:
|
| 136 |
+
|
| 137 |
+
$$
\begin{aligned}
{\mathcal{L}}_{\text{sin }} = & - \mathop{\sum }\limits_{k}{A}_{k}^{a}\log p\left( {{f}_{k}^{a} \mid {A}^{b},{f}^{b}}\right) - \mathop{\sum }\limits_{j}{A}_{j}^{b}\log p\left( {{f}_{j}^{b} \mid {A}^{a},{f}^{a}}\right) \\
& + {\begin{Vmatrix}\overline{{A}^{a}} - \overline{{A}^{b}}\end{Vmatrix}}_{1} \tag{9}
\end{aligned}
$$
|
| 148 |
+
|
| 149 |
+
where we use a third term to keep amplitudes bounded by matching their average value $\bar{A}$ in each frame. An example pair of distributions is shown in Figure 3.
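A per-frame NumPy sketch of Equations 8-9, assuming amplitudes are already normalized within the frame and frequencies are expressed in semitones (so sigma = 0.1 is in semitones, as in the text).

```python
# Sketch of the sinusoidal consistency loss for one frame (Eqs. 8-9).
import numpy as np

def kde_log_prob(f_query, f_ref, a_ref, sigma=0.1):
    """log p(f_query | a_ref, f_ref): amplitude-weighted Gaussian KDE (Eq. 8)."""
    diff = f_query[:, None] - f_ref[None, :]
    kernel = a_ref[None, :] / (sigma * np.sqrt(2 * np.pi)) * np.exp(
        -diff ** 2 / (2 * sigma ** 2))
    return np.log(np.sum(kernel, axis=1) + 1e-12)

def sinusoidal_consistency_loss(a_a, f_a, a_b, f_b, sigma=0.1):
    # Two-way, amplitude-weighted negative log-likelihood (Eq. 9) ...
    loss = -np.sum(a_a * kde_log_prob(f_a, f_b, a_b, sigma))
    loss += -np.sum(a_b * kde_log_prob(f_b, f_a, a_a, sigma))
    # ... plus an L1 term matching the average amplitudes of the two sets.
    return loss + np.abs(np.mean(a_a) - np.mean(a_b))
```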
|
| 150 |
+
|
| 151 |
+
[Figure 3 graphic: two sets of sinusoids (amplitude vs. frequency) with their Gaussian kernel density estimates.]
|
| 152 |
+
|
| 153 |
+
Figure 3. Sinusoidal Consistency Loss. Similarity between two sets of sinusoids by a two-way Gaussian kernel density estimate. Stems represent amplitudes of the sinusoids and curves represent the conditional probability distributions of the sinusoids in one plot given the sinusoids in the other plot. TWM loss is minimized when the stems are at the peaks of the Gaussians in both plots.
|
| 154 |
+
|
| 155 |
+
TWM Heuristic: We can also use the sinusoidal consistency loss like the TWM algorithm, as a baseline heuristic for pitch extraction from sinusoids. In this modification, in each time frame we consider all the sinusoids, ${f}_{k}$ , to be potential candidates as the fundamental frequency, ${f}_{0}$ , and build a series of harmonics off of each candidate. We then calculate ${\mathcal{L}}_{\text{sin }}$ for each series of harmonics against the original set of sinusoids and take the candidate with the minimum loss.
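A sketch of this heuristic for a single frame, reusing sinusoidal_consistency_loss from the sketch above; the flat amplitude pattern assigned to each candidate harmonic series is an assumption made purely for illustration.

```python
# Sketch of TWM-style pitch extraction: score each sinusoid as an f0 candidate.
import numpy as np

def hz_to_semitones(f_hz):
    return 12.0 * np.log2(f_hz / 440.0) + 69.0

def twm_pitch(amps, freqs_hz, n_harmonics=20, sigma=0.1):
    """amps, freqs_hz: sinusoid amplitudes and frequencies (Hz) for one frame."""
    amps = amps / (np.sum(amps) + 1e-12)
    best_f0, best_loss = None, np.inf
    for f0 in freqs_hz:
        harm_freqs = f0 * np.arange(1, n_harmonics + 1)
        harm_amps = np.full(n_harmonics, 1.0 / n_harmonics)  # illustrative flat series
        loss = sinusoidal_consistency_loss(amps, hz_to_semitones(freqs_hz),
                                           harm_amps, hz_to_semitones(harm_freqs),
                                           sigma)
        if loss < best_loss:
            best_f0, best_loss = f0, loss
    return best_f0
```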
|
| 156 |
+
|
| 157 |
+
Self Supervised Loss on Synthetic Data: To learn the correct scene decompositions, we found self-supervision with synthetic data to be an essential addition to reconstruction losses on real data. A bad minimum exists, where fairly good reconstructions are possible by predicting an extremely low fundamental frequency and selectively activating only a few harmonics. This is equivalent to the network learning an STFT representation of the audio, where it chooses a tight linear spacing between frequency bins.
|
| 158 |
+
|
| 159 |
+
Self-supervised training overcomes this by imposing an implicit prior on the synthesizer parameters. Similar to domain randomization (Tobin et al., 2017), we find diversity of the synthetic data is more important than realism. In our case, we generate notes with variable length, fundamental frequency ( ${f}_{0}$ ), amplitude ( $A$ ), harmonic distribution ( $c$ ), and noise magnitudes ( $N$ ). We also add random pitch modulation and noise to all parameters to increase data diversity. Examples are shown in Supplement Figure 4 alongside further details. The self-supervised loss is given between the true parameters and those estimated from the synthetic audio (denoted by a hat):
|
| 160 |
+
|
| 161 |
+
$$
\begin{aligned}
{\mathcal{L}}_{ss} = & \;{\begin{Vmatrix}{f}_{0} - {\widehat{f}}_{0}\end{Vmatrix}}_{1} + {\alpha }_{A}{\begin{Vmatrix}A - \widehat{A}\end{Vmatrix}}_{1} + {\alpha }_{c}{\begin{Vmatrix}c - \widehat{c}\end{Vmatrix}}_{1} + {\alpha }_{N}{\begin{Vmatrix}N - \widehat{N}\end{Vmatrix}}_{1} \\
& + {\alpha }_{\text{sin }}{\mathcal{L}}_{\text{sin }}\left( {{A}_{k},{f}_{k},{\widehat{A}}_{k},{\widehat{f}}_{k}}\right) \tag{10}
\end{aligned}
$$
|
| 180 |
+
|
| 181 |
+
where ${\alpha }_{A},{\alpha }_{c},{\alpha }_{N}$ , and ${\alpha }_{sin}$ are loss weights set to 10, 100, 100, and 0.1 to empirically match the order of magnitude of the other losses.
|
| 182 |
+
|
| 183 |
+
## 4. Experiments
|
| 184 |
+
|
| 185 |
+
### 4.1. Datasets
|
| 186 |
+
|
| 187 |
+
We use the following common pitch detection benchmarks in our experiments. We resample all audio to ${16}\mathrm{{kHz}}$ , create 4 second long training examples, and randomly partition an 80-20 train-test split.
|
| 188 |
+
|
| 189 |
+
MIR-1K: Hsu & Jang (2009) contains 1,000 clips of people singing Chinese pop songs. Accompaniment music was recorded on the left channel and singing on the right. For our experiments, we used only the singing audio. The dataset includes manual annotations for pitch contours.
|
| 190 |
+
|
| 191 |
+
MDB-stem-synth: Salamon et al. (2017) contains solo recordings of a variety of instruments that were analyzed with pitch tracking techniques and then resynthesized to ensure fully accurate pitch annotations.
|
| 192 |
+
|
| 193 |
+
URMP: Li et al. (2018a) contains recordings of pieces played by small orchestral ensembles. Each instrument for a given piece was recorded in isolation and then later mixed together with the other instruments for the final track. We used only the isolated recordings.
|
| 194 |
+
|
| 195 |
+
<table><tr><td>Raw Pitch Accuracy</td><td>MIR-1K</td><td>MDB-stem</td><td>URMP</td></tr><tr><td colspan="4">Supervised</td></tr><tr><td>SWIPE</td><td>86.6</td><td>90.7</td><td>-</td></tr><tr><td>CREPE</td><td>90.1</td><td>92.7</td><td>92.2</td></tr><tr><td colspan="4">Self-Supervised</td></tr><tr><td>SPICE</td><td>90.6</td><td>89.1</td><td>-</td></tr><tr><td>DDSP-inv (this work)</td><td>91.8</td><td>88.5</td><td>91.0</td></tr></table>
|
| 196 |
+
|
| 197 |
+
Table 1. Raw pitch detection accuracy. Across a range of instrumental and vocal datasets, DDSP-inv is competitive with SOTA supervised and self-supervised discriminative methods, while also parsing the audio into an interpretable hierarchy of features.
|
| 198 |
+
|
| 199 |
+
### 4.2. Training Procedure
|
| 200 |
+
|
| 201 |
+
We find training is more stable with a curriculum of first pretraining on synthetic data ( $\sim 1\mathrm{M}$ steps) and then fine-tuning on batches of mixed synthetic and real data ( $\sim {100}\mathrm{k}$ steps). We use the ADAM optimizer with a batch size of 64, a learning rate of $3\mathrm{e}{-4}$ , and an exponential learning rate decay of 0.98 every 10,000 steps (Kingma & Ba, 2015). We also find it helpful to stop direct gradient flow from the harmonic encoder back to the sinusoidal encoder. Note that the two levels are still implicitly connected during training via the sinusoidal consistency loss.
|
| 202 |
+
|
| 203 |
+
### 4.3. Metrics
|
| 204 |
+
|
| 205 |
+
We evaluate all models with the standard metrics of Raw Pitch Accuracy (RPA) and Raw Chroma Accuracy (RCA) (Poliner et al., 2007). RPA measures the percentage of voiced frames in which the estimated pitch is within half a semitone of the ground truth pitch. Voiced regions are taken to be frames where the ground truth pitch frequency is greater than 0. RCA is similar to RPA but does not penalize octave errors: a frame is accurate if the predicted pitch is within half a semitone of the ground truth multiplied by any power of 2. Both metrics are computed using the mir_eval python library (Raffel et al., 2014).
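The paper computes both metrics with mir_eval; the small NumPy re-implementation below is only meant to make the definitions concrete.

```python
# Illustrative RPA / RCA computation (not the mir_eval implementation).
import numpy as np

def pitch_accuracy(ref_hz, est_hz, chroma=False, tol_semitones=0.5):
    ref_hz, est_hz = np.asarray(ref_hz, float), np.asarray(est_hz, float)
    voiced = ref_hz > 0                                    # only voiced frames count
    diff = 12 * np.abs(np.log2(est_hz[voiced] / ref_hz[voiced]))  # semitone error
    if chroma:
        diff = np.minimum(diff % 12, 12 - diff % 12)       # fold out octave errors
    return np.mean(diff <= tol_semitones)

ref = [220.0, 0.0, 440.0, 330.0]
est = [221.0, 100.0, 880.0, 300.0]
print(pitch_accuracy(ref, est))               # RPA: an octave error counts as a miss
print(pitch_accuracy(ref, est, chroma=True))  # RCA: an octave error is forgiven
```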
|
| 206 |
+
|
| 207 |
+
### 4.4. Results
|
| 208 |
+
|
| 209 |
+
Table 1 shows a comparison of SOTA pitch detection methods, both supervised and self-supervised. DDSP-inv outperforms even the supervised models on the singing data of MIR-1K and is comparable to other self-supervised methods on the other datasets. Note that while the other models are specifically trained to detect pitch, DDSP-inv implicitly learns to detect pitch in a hierarchy of interpretable features.
|
| 210 |
+
|
| 211 |
+
Table 2 shows the contributions of the harmonic model and real data to model performance. Using the predicted pitch of the harmonic model significantly improves accuracy over the baseline of the Two-way Mismatch (TWM) heuristic on predicted sinusoids. It also dramatically reduces the amount
|
| 212 |
+
|
| 213 |
+
<table><tr><td>RPA (RCA)</td><td>MIR-1K</td><td>MDB-stem</td><td>URMP</td></tr><tr><td colspan="4">Synthetic Data</td></tr><tr><td>TWM</td><td>65.0 (78.6)</td><td>45.6 (75.4)</td><td>50.1 (78.8)</td></tr><tr><td>DDSP-inv</td><td>77.3 (78.7)</td><td>86.9 (87.1)</td><td>65.3 (69.0)</td></tr><tr><td colspan="4">Synthetic & Real</td></tr><tr><td>TWM</td><td>67.2 (86.8)</td><td>60.5 (80.5)</td><td>77.0 (89.7)</td></tr><tr><td>DDSP-inv</td><td>91.8 (92.0)</td><td>88.5 (89.6)</td><td>91.0 (91.8)</td></tr></table>
|
| 214 |
+
|
| 215 |
+
Table 2. Comparison of pitch detection using ${f}_{0}$ from the harmonic encoder ( ${F}_{\text{harm }}^{\theta }$ , DDSP-inv) versus ${f}_{0}$ from the sinusoidal encoder ( ${F}_{\text{sin }}^{\theta }$ ) with the TWM heuristic. The harmonic model improves accuracy and reduces octave errors, as shown by the reduced gap between RPA and RCA. Real data improves performance, but synthetic data alone is surprisingly effective for some datasets.
|
| 216 |
+
|
| 217 |
+
of octave errors, as shown by the reduced gap between RPA and RCA. While adding real data makes performance competitive with SOTA, the model achieves fairly good accuracy with synthetic data alone, especially on the MDB-stem-synth dataset.
|
| 218 |
+
|
| 219 |
+
## 5. Conclusion and Future Work
|
| 220 |
+
|
| 221 |
+
We have presented an interpretable hierarchical model of audio that disentangles timbre and pitch through self-supervised inversion of audio synthesis. We believe this forms a promising foundation for learning higher levels of structure, such as discrete tokens, and extensions to more complicated audio scenes, including polyphonic audio with multiple sources.
|
| 222 |
+
|
| 223 |
+
## References
|
| 224 |
+
|
| 225 |
+
Ba, J. L., Kiros, J. R., and Hinton, G. E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
|
| 226 |
+
|
| 227 |
+
Barrow, H., Tenenbaum, J., Bolles, R., and Wolf, H. Parametric correspondence and chamfer matching: Two new techniques for image matching. In Proceedings: Image Understanding Workshop, pp. 21-27. Science Applications, Inc Arlington, VA, 1977.
|
| 228 |
+
|
| 229 |
+
Brown, G. J. and Cooke, M. Computational auditory scene analysis. Computer Speech Language, 8(4):297 - 336, 1994. ISSN 0885-2308. doi: https://doi.org/10.1006/csla.1994.1016.URL http://www.sciencedirect.com/science/ article/pii/S0885230884710163.
|
| 230 |
+
|
| 231 |
+
Camacho, A. and Harris, J. G. A sawtooth waveform inspired pitch estimator for speech and music. The Journal of the Acoustical Society of America, 124(3):1638-1652, 2008.
|
| 232 |
+
|
| 233 |
+
Coy, A. and Barker, J. An automatic speech recognition sys-
|
| 234 |
+
|
| 235 |
+
tem based on the scene analysis account of auditory perception. Speech Communication, 49:384-401, 05 2007. doi: 10.1016/j.specom.2006.11.002.
|
| 236 |
+
|
| 237 |
+
Dhariwal, P., Jun, H., Payne, C., Kim, J. W., Radford, A., and Sutskever, I. Jukebox: A generative model for music, 2020.
|
| 238 |
+
|
| 239 |
+
Dieleman, S., van den Oord, A., and Simonyan, K. The challenge of realistic music generation: modelling raw audio at scale. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 7989-7999. Curran Associates, Inc., 2018.
|
| 240 |
+
|
| 241 |
+
Engel, J., Resnick, C., Roberts, A., Dieleman, S., Eck, D., Simonyan, K., and Norouzi, M. Neural audio synthesis of musical notes with WaveNet autoencoders. In ICML, 2017.
|
| 242 |
+
|
| 243 |
+
Engel, J., Agrawal, K. K., Chen, S., Gulrajani, I., Donahue, C., and Roberts, A. GANSynth: Adversarial neural audio synthesis. In International Conference on Learning Representations, 2019.
|
| 244 |
+
|
| 245 |
+
Engel, J., Hantrakul, L. H., Gu, C., and Roberts, A. DDSP: Differentiable digital signal processing. In International Conference on Learning Representations, 2020.
|
| 246 |
+
|
| 247 |
+
Esling, P., Masuda, N., Bardet, A., Despres, R., and Chemla-Romeu-Santos, A. Flow synthesizer: Universal audio synthesizer control with normalizing flows. Applied Sciences, 10(1):302, Dec 2019. ISSN 2076-3417. doi: 10.3390/app10010302.
|
| 248 |
+
|
| 249 |
+
Gfeller, B., Frank, C., Roblek, D., Sharifi, M., Tagliasacchi, M., and Velimirović, M. Spice: Self-supervised pitch estimation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:1118-1128, 2020.
|
| 250 |
+
|
| 251 |
+
Hawthorne, C., Elsen, E., Song, J., Roberts, A., Simon, I., Raffel, C., Engel, J., Oore, S., and Eck, D. Onsets and frames: Dual-objective piano transcription. In Proceedings of the 19th International Society for Music Information Retrieval Conference, ISMIR 2018, Paris, France, 2018, 2018.
|
| 252 |
+
|
| 253 |
+
Hawthorne, C., Stasyuk, A., Roberts, A., Simon, I., Huang, C.-Z. A., Dieleman, S., Elsen, E., Engel, J., and Eck, D. Enabling factorized piano music modeling and generation with the MAESTRO dataset. In International Conference on Learning Representations, 2019.
|
| 254 |
+
|
| 255 |
+
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016a.
|
| 256 |
+
|
| 257 |
+
He, K., Zhang, X., Ren, S., and Sun, J. Identity mappings in deep residual networks. In European conference on computer vision, pp. 630-645. Springer, 2016b.
|
| 258 |
+
|
| 259 |
+
Hoffman, M. D. and Cook, P. R. Feature-based synthesis: Mapping acoustic and perceptual features onto synthesis parameters. In ICMC. Citeseer, 2006.
|
| 260 |
+
|
| 261 |
+
Hsu, C.-L. and Jang, J.-S. R. On the improvement of singing voice separation for monaural recordings using the mir- 1k dataset. IEEE Transactions on Audio, Speech, and Language Processing, 18(2):310-319, 2009.
|
| 262 |
+
|
| 263 |
+
Huang, C.-Z. A., Duvenaud, D., Arnold, K. C., Partridge, B., Oberholtzer, J. W., and Gajos, K. Z. Active learning of intuitive control knobs for synthesizers using gaussian processes. In Proceedings of the 19th international conference on Intelligent User Interfaces, pp. 115-124. ACM, 2014.
|
| 264 |
+
|
| 265 |
+
Jeffreys, H. An invariant form for the prior probability in estimation problems. Proceedings of the Royal Society of London. Series A. Mathematical and Physical Sciences, 186(1007):453-461, 1946.
|
| 266 |
+
|
| 267 |
+
Kim, J. W., Salamon, J., Li, P., and Bello, J. P. Crepe: A convolutional representation for pitch estimation. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 161-165. IEEE, 2018.
|
| 268 |
+
|
| 269 |
+
Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization. In ICLR (Poster), 2015.
|
| 270 |
+
|
| 271 |
+
Klapuri, A., Virtanen, T., and Holm, J.-M. Robust mul-tipitch estimation for the analysis and manipulation of polyphonic musical signals. In Proc. COST-G6 Conference on Digital Audio Effects, pp. 233-236, 2000.
|
| 272 |
+
|
| 273 |
+
Koutras, A., Dermatas, E., and George, K. Blind signal separation and speech recognition in the frequency domain. volume 1, pp. 427-430 vol.1, 02 1999. ISBN 0-7803-5682-9. doi: 10.1109/ICECS.1999.812314.
|
| 274 |
+
|
| 275 |
+
Li, B., Liu, X., Dinesh, K., Duan, Z., and Sharma, G. Creating a multitrack classical music performance dataset for multimodal music analysis: Challenges, insights, and applications. IEEE Transactions on Multimedia, 21(2): 522-535, 2018a.
|
| 276 |
+
|
| 277 |
+
Li, T.-M., Aittala, M., Durand, F., and Lehtinen, J. Differentiable monte carlo ray tracing through edge sampling. ACM Trans. Graph. (Proc. SIGGRAPH Asia), 37 (6):222:1-222:11, 2018b.
|
| 278 |
+
|
| 279 |
+
Loubet, G., Holzschuch, N., and Jakob, W. Reparameteriz-ing discontinuous integrands for differentiable rendering. ACM Transactions on Graphics (TOG), 38(6):1-14, 2019.
|
| 280 |
+
|
| 281 |
+
Lyon, R. A computational model of binaural localization
|
| 282 |
+
|
| 283 |
+
and separation. In ICASSP'83. IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 8, pp. 1148-1151. IEEE, 1983.
|
| 284 |
+
|
| 285 |
+
Maher, R. C. and Beauchamp, J. W. Fundamental frequency estimation of musical signals using a two-way mismatch procedure. The Journal of the Acoustical Society of America, 95(4):2254-2263, 1994.
|
| 286 |
+
|
| 287 |
+
Moerel, M., De Martino, F., and Formisano, E. Processing of natural sounds in human auditory cortex: tonotopy, spectral tuning, and relation to voice sensitivity. Journal of Neuroscience, 32(41):14205-14216, 2012.
|
| 288 |
+
|
| 289 |
+
Oord, A. v. d., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
|
| 290 |
+
|
| 291 |
+
Poliner, G. E., Ellis, D. P. W., Ehmann, A. F., Gomez, E., Streich, S., and Ong, B. Melody transcription from music audio: Approaches and evaluation. IEEE Transactions on Audio, Speech, and Language Processing, 15(4):1247- 1256, 2007.
|
| 292 |
+
|
| 293 |
+
Purnhagen, H. and Meine, N. Hiln-the mpeg-4 parametric audio coding tools. In 2000 IEEE International Symposium on Circuits and Systems. Emerging Technologies for the 21st Century. Proceedings (IEEE Cat No. 00CH36353), volume 3, pp. 201-204. IEEE, 2000.
|
| 294 |
+
|
| 295 |
+
Raffel, C., Mcfee, B., Humphrey, E. J., Salamon, J., Nieto, O., Liang, D., Ellis, D. P. W., Raffel, C. C., Mcfee, B., and Humphrey, E. J. mir_eval: a transparent implementation of common mir metrics. In In Proceedings of the 15th International Society for Music Information Retrieval Conference, ISMIR, 2014.
|
| 296 |
+
|
| 297 |
+
Salamon, J., Bittner, R. M., Bonada, J., Bosch, J. J., Gómez, E., and Bello, J. P. An analysis/synthesis framework for automatic f0 annotation of multitrack datasets. In ISMIR, pp. 71-78, 2017.
|
| 298 |
+
|
| 299 |
+
Serra, X. and Smith, J. Spectral modeling synthesis: A sound analysis/synthesis system based on a deterministic plus stochastic decomposition. Computer Music Journal, 14(4):12-24, 1990.
|
| 300 |
+
|
| 301 |
+
Smith, J. O. Physical audio signal processing: For virtual musical instruments and audio effects. W3K publishing, 2010.
|
| 302 |
+
|
| 303 |
+
Tellman, E., Haken, L., and Holloway, B. Timbre morphing of sounds with unequal numbers of features. Journal of the Audio Engineering Society, 43(9):678-689, 1995.
|
| 304 |
+
|
| 305 |
+
Theunissen, F. E. and Elie, J. E. Neural processing of natural sounds. Nature Reviews Neuroscience, 15(6):355, 2014.
|
| 306 |
+
|
| 307 |
+
Tobin, J., Fong, R., Ray, A., Schneider, J., Zaremba, W., and Abbeel, P. Domain randomization for transferring deep neural networks from simulation to the real world. In 2017 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp. 23-30. IEEE, 2017.
|
| 308 |
+
|
| 309 |
+
Wang, X., Takaki, S., and Yamagishi, J. Neural source-filter waveform models for statistical parametric speech synthesis. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 28:402-415, 2020.
|
| 310 |
+
|
| 311 |
+
Wang, Y., Skerry-Ryan, R., Stanton, D., Wu, Y., Weiss, R. J., Jaitly, N., Yang, Z., Xiao, Y., Chen, Z., Bengio, S., et al. Tacotron: Towards end-to-end speech synthesis. In INTERSPEECH, 2017.
|
| 312 |
+
|
| 313 |
+
Wu, J., Tenenbaum, J. B., and Kohli, P. Neural scene de-rendering. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017a.
|
| 314 |
+
|
| 315 |
+
Wu, J., Tenenbaum, J. B., and Kohli, P. Neural scene de-rendering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 699-707, 2017b.
|
| 316 |
+
|
| 317 |
+
Yao, S., Hsu, T. M., Zhu, J.-Y., Wu, J., Torralba, A., Freeman, B., and Tenenbaum, J. 3d-aware scene manipulation via inverse graphics. In Advances in neural information processing systems, pp. 1887-1898, 2018.
|
| 318 |
+
|
| 319 |
+
Zhai, X., Oliver, A., Kolesnikov, A., and Beyer, L. S4L: Self-supervised semi-supervised learning. In Proceedings of the IEEE international conference on computer vision, pp. 1476-1485, 2019.
|
| 320 |
+
|
| 321 |
+
## Supplement
|
| 322 |
+
|
| 323 |
+
### 5.1. Synthetic Data
|
| 324 |
+
|
| 325 |
+

|
| 326 |
+
|
| 327 |
+
Figure 4. Example spectrograms of synthetic data. Notes are first given random lengths and fundamental frequency, with a possibility of being silent. Notes are then given a random amplitude, harmonic distribution, and noise distribution at their start and end, and interpolated between. Additional vibrato and parameter noise is then added. Parameters were tuned until the authors subjectively felt that it produced a cool diversity of sounds, even if not particularly realistic. Exact details can be found in the code at https://github.com/magenta/ddsp.
|
| 328 |
+
|
| 329 |
+
## Algorithm 1 Generate Synthetic Example
|
| 330 |
+
|
| 331 |
+
---
|
| 332 |
+
|
| 333 |
+
t <- random note length
|
| 334 |
+
|
| 335 |
+
With probability p :
|
| 336 |
+
|
| 337 |
+
Return silence for length t
|
| 338 |
+
|
| 339 |
+
Else:
|
| 340 |
+
|
| 341 |
+
A_start, A_end <- random harmonic amplitudes
|
| 342 |
+
|
| 343 |
+
A <- interpolate(A_start, A_end, t) + noise
|
| 344 |
+
|
| 345 |
+
c_start, c_end <- random harmonic distributions
|
| 346 |
+
|
| 347 |
+
c <- interpolate(c_start, c_end, t) + noise
|
| 348 |
+
|
| 349 |
+
f_0 <- random frequency + random vibrato + noise
|
| 350 |
+
|
| 351 |
+
n_start, n_end <- random noise distributions
|
| 352 |
+
|
| 353 |
+
n <- interpolate(n_start, n_end, t) + noise
|
| 354 |
+
|
| 355 |
+
Return A, c, f_0, n
|
| 356 |
+
|
| 357 |
+
---
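A loose Python rendering of Algorithm 1; parameter ranges, modulation depths, and array shapes are illustrative guesses rather than the tuned values from the released code.

```python
# Hypothetical synthetic-example generator mirroring Algorithm 1.
import numpy as np

def random_synthetic_example(n_frames=250, n_harmonics=60, n_noise_bands=65,
                             p_silence=0.1, rng=None):
    rng = rng or np.random.default_rng()
    if rng.random() < p_silence:                 # "With probability p: return silence"
        return dict(amplitude=np.zeros(n_frames), f0_hz=np.zeros(n_frames),
                    harm_dist=np.zeros((n_frames, n_harmonics)),
                    noise=np.zeros((n_frames, n_noise_bands)))
    t = np.linspace(0.0, 1.0, n_frames)[:, None]
    interp = lambda start, end: (1 - t) * start + t * end  # start -> end over the note
    amplitude = interp(rng.uniform(0.1, 1.0), rng.uniform(0.1, 1.0)).squeeze()
    amplitude += 0.01 * rng.standard_normal(n_frames)
    harm_dist = interp(rng.uniform(size=n_harmonics), rng.uniform(size=n_harmonics))
    harm_dist /= harm_dist.sum(axis=1, keepdims=True)
    base_f0 = rng.uniform(80.0, 800.0)
    vibrato = 0.02 * base_f0 * np.sin(2 * np.pi * rng.uniform(4, 7) * t[:, 0])
    f0_hz = base_f0 + vibrato + 0.5 * rng.standard_normal(n_frames)
    noise = interp(rng.uniform(size=n_noise_bands), rng.uniform(size=n_noise_bands))
    return dict(amplitude=amplitude, f0_hz=f0_hz, harm_dist=harm_dist, noise=noise)
```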
|
| 358 |
+
|
| 359 |
+
### 5.2. Connection of TWM to Jeffreys Divergence
|
| 360 |
+
|
| 361 |
+
It's interesting to note that the sinusoidal consistency loss corresponds to a Jeffreys Divergence (Jeffreys, 1946) between two Gaussian KDE distributions $(p, q)$ :
|
| 362 |
+
|
| 363 |
+
$$
\begin{aligned}
{D}_{J} & = \frac{1}{2}{D}_{KL}\left( {p\parallel q}\right) + \frac{1}{2}{D}_{KL}\left( {q\parallel p}\right) \\
& = -\frac{1}{2}\left\lbrack {\underset{{f}_{k}^{a} \sim p\left( {{f}_{k}^{a} \mid {A}^{a}}\right) }{\mathbb{E}}\log p\left( {{f}_{k}^{a} \mid {A}^{b},{f}^{b}}\right) + \underset{{f}_{j}^{b} \sim p\left( {{f}_{j}^{b} \mid {A}^{b}}\right) }{\mathbb{E}}\log p\left( {{f}_{j}^{b} \mid {A}^{a},{f}^{a}}\right) }\right\rbrack \tag{11}
\end{aligned}
$$
|
| 370 |
+
|
| 371 |
+
which is equivalent to Equation 9 (except a factor of $1/2$ ) in the limit that frequencies ${f}_{k}$ are sampled proportionally to their normalized amplitudes ${A}_{k}$ .
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/RlVTYWhsky7/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,261 @@
|
| 1 |
+
§ SELF-SUPERVISED PITCH DETECTION BY INVERSE AUDIO SYNTHESIS
|
| 2 |
+
|
| 3 |
+
Jesse Engel ${}^{1}$ Rigel Swavely ${}^{1}$ Adam Roberts ${}^{1}$ Lamtharn (Hanoi) Hantrakul ${}^{1}$ Curtis Hawthorne ${}^{1}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
Audio scene understanding, parsing sound into a hierarchy of meaningful parts, is an open problem in representation learning. Sound is a particularly challenging domain due to its high dimensionality, sequential dependencies and hierarchical structure. Differentiable Digital Signal Processing (DDSP) greatly simplifies the forward problem of generating audio by introducing differentiable synthesizer and effects modules that combine strong signal priors with end-to-end learning. Here, we focus on the inverse problem, inferring synthesis parameters to approximate an audio scene. We demonstrate that DDSP modules can enable a new approach to self-supervision, generating synthetic audio with differentiable synthesizers and training feature extractor networks to infer the synthesis parameters. By building a hierarchy from sinusoidal to harmonic representations, we show that it possible to use such an inverse modeling approach to disentangle pitch from timbre, an important task in audio scene understanding.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
While audio scene analysis is typically associated with source separation (Brown & Cooke, 1994), it also encompasses many sound analysis tasks including pitch detection (Kim et al., 2018; Gfeller et al., 2020), phoneme recognition (Koutras et al., 1999), automatic speech recognition (Coy & Barker, 2007), sound localization (Lyon, 1983), and polyphonic instrument transcription (Hawthorne et al., 2018). Since many sources exhibit harmonic resonance, such as voices and vibrating objects (Smith, 2010), disentangling pitch and timbre is an important step in parsing an audio scene (Moerel et al., 2012; Theunissen & Elie, 2014).
|
| 12 |
+
|
| 13 |
+
Inverse graphics, where the parameters of a rendering engine are inferred from an image, is an appealing approach to parsing visual scenes. Unlike black-box classifiers, the approach is object-oriented, interpretable by design, and can generate high-quality images with modern renderers (Wu et al., 2017a; Yao et al., 2018). In audio, these inverse approaches have been limited to the domain of individual sounds from unrealistic commercial synthesizers due to the lack of a realistic, interpretable and differentiable audio rendering engine (Huang et al., 2014; Hoffman & Cook, 2006; Esling et al., 2019).
|
| 14 |
+
|
| 15 |
+
< g r a p h i c s >
|
| 16 |
+
|
| 17 |
+
Figure 1. Diagram of inverse audio synthesis. A feature extraction pipeline $\left( {{F}_{\text{ sin }}^{\theta },{F}_{\text{ harm }}^{\theta }}\right)$ hierarchically decomposes audio into low-level sinusoidal components (frequency, amplitude), which are combined to extract harmonic components $\left( {f}_{0}\right.$ , amplitude, harmonic distribution). These are the only modules that have learnable parameters $\theta$ . An additional filtered noise component is not shown. These parameters are fed to differentiable audio synthesiz- $\operatorname{ers}\left( {{S}_{\text{ sin }},{S}_{\text{ harm }}}\right)$ and then to reconstruction losses. An additional consistency loss is enforced on the predicted and resynthesized sinusoidal components. See Section 3 for details.
|
| 18 |
+
|
| 19 |
+
Most realistic generative models of audio require large autoregressive models that are slow, non-differentiable and cannot generate samples mid-training. (Dieleman et al., 2018; Dhariwal et al., 2020; Hawthorne et al., 2019; Engel et al., 2017; Wang et al., 2017). Differentiable Digital Signal Processing (DDSP) (Engel et al., 2020) overcomes these challenges by combining neural networks with differentiable synthesizers to efficiently render realistic audio during training.
|
| 20 |
+
|
| 21 |
+
${}^{1}$ Google Research, Brain Team. Correspondence to: Jesse Engel <jesseengel@google.com>.
|
| 22 |
+
|
| 23 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 24 |
+
|
| 25 |
+
Finally, self-supervised techniques typically rely on intrinsic properties of data, such as causality (Oord et al., 2018) or identity-invariance to augmentation (Zhai et al., 2019), to automatically generate supervised labels from datasets. Since our DDSP audio renderer is fully interpretable, we can explore a different form of self-supervision where a fairly generic random process generates both synthetic audio and supervised labels for training. We combine this self-supervision with unsupervised reconstruction losses to adapt to new datasets.
|
| 26 |
+
|
| 27 |
+
The key contributions of this paper include:
|
| 28 |
+
|
| 29 |
+
* DDSP-inv: An inverse model of sound using DDSP, capable of factorizing pitch and timbre, with comparable pitch detection to SOTA supervised and self-supervised discriminative methods.
|
| 30 |
+
|
| 31 |
+
* Self-supervised training procedure to train feature extractor networks to infer synthesis parameters from differentiably-rendered synthetic audio.
|
| 32 |
+
|
| 33 |
+
* Sinusoidal Synthesizer: A new DDSP module capable of generating a wide range of audio including inharmonic and polyphonic signals.
|
| 34 |
+
|
| 35 |
+
* Sinusoidal Consistency Loss: A loss function to evaluate the similarity of two arbitrarily-ordered sets of sinusoids and also perform heuristic pitch extraction.
|
| 36 |
+
|
| 37 |
+
Audio samples are provided in the online supplement at https://goo.gl/magenta/ddsp-inv and code will be available after publication at https://github.com/magenta/ddsp.
|
| 38 |
+
|
| 39 |
+
§ 2. RELATED WORK
|
| 40 |
+
|
| 41 |
+
Differentiable Rendering: Differentiable rendering is a valuable component of inverse graphics models (Loubet et al., 2019; Li et al., 2018b). A natural scene can be "deren-dered" into a structured object-wise representation via a differentiable shape renderer (Yao et al., 2018) or an explicit scene description that can be recomposed with a graphics engine (Wu et al., 2017b). This literature motivates this work, in which we use DDSP as a differentiable audio renderer.
|
| 42 |
+
|
| 43 |
+
Sinusoidal Modeling Synthesis: The techniques developed by Serra & Smith (1990) model sound as a combination of additive sinusoids and a subtractive filtered noise source. Despite being parametric and using heuristics to infer synthesis parameters, it is a highly expressive model of sound with diverse applications and is even used as a general purpose audio codec in MPEG-4 (Tellman et al., 1995; Klapuri et al., 2000; Purnhagen & Meine, 2000). In this work, we train neural networks to do this task with end-to-end learning.
|
| 44 |
+
|
| 45 |
+
Pitch Detection: Estimating the fundamental frequency $\left( {f}_{0}\right)$ of a monophonic audio signal, or pitch detection, is a key task to audio scene understanding. We compare against several state-of-the-art baselines in this work. SWIPE (Camacho & Harris, 2008) performs spectrum template matching between the signal and a sawtooth waveform. CREPE (Kim et al., 2018) is a deep convolutional model classifying pitch labels directly from the waveform. SPICE (Gfeller et al., 2020) removes the need for labels by employing self-supervision to predict the frequency shifts applied to training data. While these discriminative methods are trained specifically to detect pitch, DDSP-inv learns to detect ${f}_{0}$ as a side-effect of disentangling timbre and pitch in a signal.
|
| 46 |
+
|
| 47 |
+
§ 3. MODEL ARCHITECTURE
|
| 48 |
+
|
| 49 |
+
A diagram and description of our model hierarchy is shown in Figure 1 (DDSP-inv, for inverse modeling with DDSP). We describe each component below.
|
| 50 |
+
|
| 51 |
+
§ 3.1. DIFFERENTIABLE AUDIO SYNTHESIZERS
|
| 52 |
+
|
| 53 |
+
Inspired by the work of Serra & Smith (1990), we model sound as a flexible combination of time-dependent sinusoidal oscillators and filtered noise. From the sinusoids we can infer a corresponding harmonic oscillator with a fundamental frequency. Except for the new sinusoidal synthesizer module, all other modules are identical to the DDSP library introduced in Engel et al. (2020). While other available DDSP modules cover aspects such as room reverberation, we do not consider them here since they are not significant factors in the benchmark datasets.
|
| 54 |
+
|
| 55 |
+
Sinusoidal Synthesizer $\left( {S}_{\text{ sin }}\right)$ : We start by creating a new DDSP module that consists of a bank of $K$ sinusoids with individually varying amplitudes ${A}_{k}$ and frequencies ${f}_{k}$ . These are flexibly specified by the output of a neural network ${F}_{\text{ sin }}^{\theta }$ with parameters $\theta$ over $n$ discrete time steps:
|
| 56 |
+
|
| 57 |
+
$$
|
| 58 |
+
x\left( n\right) = \mathop{\sum }\limits_{{k = 0}}^{{K - 1}}{A}_{k}\left( n\right) \sin \left( {{\phi }_{k}\left( n\right) }\right) , \tag{1}
|
| 59 |
+
$$
|
| 60 |
+
|
| 61 |
+
where ${\phi }_{k}\left( n\right)$ is its instantaneous phase obtained by cumulative summation of the instantaneous frequency ${f}_{k}\left( n\right)$ :
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
{\phi }_{k}\left( n\right) = {2\pi }\mathop{\sum }\limits_{{m = 0}}^{n}{f}_{k}\left( m\right) , \tag{2}
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
The sinusoidal encoder ${F}_{sin}^{\theta }$ outputs amplitudes ${A}_{k}$ and frequencies ${f}_{k}$ every ${32}\mathrm{\;{ms}}$ , which are upsampled to audio rate ( ${16}\mathrm{{kHz}}$ ) using overlapping Hann windows and linear interpolation respectively. We highlight a key difference of this module with a Short Time Fourier Transform (STFT). Frequencies of each sinusoidal component are freely predicted by the network each frame, instead of being locked to a fixed linear spacing determined by the FFT window size. This avoids distortion in periodic signals due to phase mismatch between adjacent frames, and spectral leakage between neighboring frequency bins (Engel et al., 2020).
Figure 2. Hierarchical decomposition of a sample from the URMP dataset. Left: spectrogram of the audio and sinusoidal traces from the sinusoidal encoder $F_{\text{sin}}^{\theta}$. Center: harmonic components, including fundamental frequency, amplitude, and distribution of the harmonics, from the harmonic encoder $F_{\text{harm}}^{\theta}$. Right: sinusoids decoded from harmonic components with the harmonic synthesizer $S_{\text{harm}}$ and spectrogram of the final reconstructed audio using the sinusoidal synthesizer $S_{\text{sin}}$.
Harmonic Synthesizer $(S_{\text{harm}})$: For a harmonic oscillator, the harmonic encoder $F_{\text{harm}}^{\theta}$ predicts a single fundamental frequency $f_0$, amplitude $A$, and harmonic distribution $c_k$ from the incoming sinusoids. On generation, all output frequencies are constrained to be harmonic (integer) multiples of the fundamental frequency (pitch),

$$
f_k(n) = k f_0(n). \tag{3}
$$

Individual amplitudes are deterministically retrieved by multiplying the total amplitude, $A(n)$, with the normalized distribution over harmonic amplitudes, $c_k(n)$:

$$
A_k(n) = A(n)\, c_k(n), \tag{4}
$$

where $\sum_{k=0}^{K-1} c_k(n) = 1$ and $c_k(n) \geq 0$.
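For concreteness, a minimal sketch of expanding the harmonic controls of Eqs. (3)-(4) back into a sinusoid bank follows; the function name and the choice to index harmonics from 1 and zero out components above 8 kHz are our assumptions.

```python
import numpy as np

def harmonic_to_sinusoids(f0, amp, harm_dist):
    """Expand harmonic synthesizer controls into a sinusoid bank (Eqs. 3-4).

    f0: [n] fundamental frequency in Hz
    amp: [n] overall amplitude A(n)
    harm_dist: [n, K] normalized harmonic distribution c_k(n), rows sum to 1
    """
    k = np.arange(1, harm_dist.shape[-1] + 1)   # harmonic numbers
    freqs = f0[:, None] * k[None, :]            # f_k = k * f0 (Eq. 3)
    amps = amp[:, None] * harm_dist             # A_k = A * c_k (Eq. 4)
    # Silence harmonics above the sinusoidal frequency ceiling (assumed 8 kHz).
    amps = np.where(freqs < 8000.0, amps, 0.0)
    return amps, freqs
```

The resulting `(amps, freqs)` pair can be rendered with the same oscillator-bank sketch given above.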
Filtered Noise $(S_{\text{noise}})$: As introduced in Engel et al. (2020), we model the non-periodic audio components with a linear time-varying filtered noise source. Noise is generated from a uniform distribution. We linearly tile frequency space with 65 bands whose amplitudes are modulated each frame by the outputs of the sinusoidal encoder. To ease optimization, we reuse the same filtered noise distribution for both the sinusoidal and the harmonic reconstructions.

Nonlinearities: For all amplitudes and harmonic distribution components, we constrain network outputs to be positive with an exponentiated sigmoid nonlinearity, $2\sigma(x)^{\log 10} + 10^{-7}$, that scales the output to lie between $10^{-7}$ and 2. We constrain sinusoidal frequency predictions between 20 Hz and 8000 Hz, and harmonic fundamental frequency predictions between 20 Hz and 1200 Hz. We logarithmically tile 64 bins across each range, pass the network outputs for each frequency component through a softmax nonlinearity across these bins, and take a frequency-bin-weighted sum over the resulting distribution.
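A small NumPy sketch of these two output nonlinearities is given below; the function names are ours, and the differentiable versions live in the model's output heads.

```python
import numpy as np

def exp_sigmoid(x, exponent=np.log(10.0), max_value=2.0, threshold=1e-7):
    """Exponentiated sigmoid for amplitudes: 2 * sigmoid(x)^log(10) + 1e-7,
    bounding outputs to (1e-7, 2)."""
    return max_value * (1.0 / (1.0 + np.exp(-x))) ** exponent + threshold

def freqs_from_logits(logits, f_min=20.0, f_max=8000.0):
    """Softmax over log-spaced frequency bins, then a bin-weighted sum."""
    bins = np.geomspace(f_min, f_max, num=logits.shape[-1])  # 64 log-spaced centers
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)               # softmax
    return (probs * bins).sum(axis=-1)                       # expected frequency
```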
§ 3.2. FEATURE EXTRACTORS

Sinusoidal Encoder $(F_{\text{sin}}^{\theta})$: This network converts audio $x(n)$ to sinusoidal amplitudes $A_k$, sinusoidal frequencies $f_k$, and filtered noise magnitudes. Audio is first transformed to a log-mel spectrogram (FFT size 2048, hop size 512, 229 mel bins), and then passed through a standard implementation of a ResNet-38 with layer normalization, bottleneck layers, and ReLU nonlinearities (He et al., 2016a;b; Ba et al., 2016). Through four stages, the number of filters increases from 64 to 1024, with the frequency dimension downsampling by a factor of two after each stage. A final linear layer feeds the module-specific nonlinearities described in Section 3.1.

Harmonic Encoder $(F_{\text{harm}}^{\theta})$: This network converts the sinusoidal synthesizer components from $F_{\text{sin}}^{\theta}$ (amplitudes $A_k(n)$ and frequencies $f_k(n)$ for each sinusoid) into the harmonic synthesizer components: fundamental frequency $f_0(n)$, amplitude $A(n)$, and harmonic distribution $c_k(n)$. Sinusoidal amplitudes and frequencies are first converted to a log scale and fed into a simple network of two fully-connected layers (256 dims), a single gated recurrent unit layer (512 dims), and two more fully-connected layers (256 dims). Layer normalization and leaky ReLU nonlinearities are used throughout. A final linear layer feeds the module-specific nonlinearities described in Section 3.1.
§ 3.3. LOSS FUNCTIONS

We train our network with a combination of an audio reconstruction loss, a sinusoidal consistency loss, and a self-supervision loss. We only add the self-supervision loss for synthetic data:

$$
\mathcal{L} = \mathcal{L}_{\text{recon}} + \alpha_{\text{sin}} \mathcal{L}_{\text{sin}} + \mathcal{L}_{\text{ss}}, \tag{5}
$$

where $\alpha_{\text{sin}}$ is a weight set to 0.1 to empirically match the order of magnitude of the other losses.
Reconstruction Loss: Since each level of the hierarchical autoencoder can synthesize audio, we can tie the learned representations back to the ground truth audio at each stage with an audio reconstruction loss. Direct waveform comparisons focus too heavily on absolute phase differences that are less perceptually relevant (Engel et al., 2019). We instead compare spectrograms, exploiting the fact that sinusoidal synthesis maintains phase coherence by design. We balance temporal and frequency resolution by imposing a spectrogram loss at several FFT sizes, $i \in \{64, 128, 256, 512, 1024, 2048\}$ (Wang et al., 2020; Engel et al., 2020):

$$
\mathcal{L}_{\text{recon}} = \sum_i \left\| s_i - \widehat{s}_i \right\|_1 + \left\| \log s_i - \log \widehat{s}_i \right\|_1, \tag{6}
$$

where $s_i$ is the magnitude spectrogram of the target audio at a given FFT size, and $\widehat{s}_i$ is the spectrogram of the reconstructed audio. The total reconstruction loss is the sum of the sinusoidal and harmonic reconstruction losses ($\mathcal{L}_{\text{recon}} = \mathcal{L}_{\text{recon}}^{\text{sin}} + \mathcal{L}_{\text{recon}}^{\text{harm}}$).
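A minimal sketch of the multi-scale spectral loss of Eq. (6) using SciPy STFTs is shown below; the function name, the use of SciPy, and the mean-reduction of the L1 norms are our assumptions (the original implementation is differentiable TensorFlow).

```python
import numpy as np
from scipy.signal import stft

def multi_scale_spectral_loss(target, pred, sample_rate=16000, eps=1e-6):
    """Multi-scale magnitude + log-magnitude spectrogram loss (Eq. 6)."""
    loss = 0.0
    for n_fft in (64, 128, 256, 512, 1024, 2048):
        _, _, s = stft(target, fs=sample_rate, nperseg=n_fft)
        _, _, s_hat = stft(pred, fs=sample_rate, nperseg=n_fft)
        mag, mag_hat = np.abs(s), np.abs(s_hat)
        loss += np.mean(np.abs(mag - mag_hat))                       # linear term
        loss += np.mean(np.abs(np.log(mag + eps) - np.log(mag_hat + eps)))  # log term
    return loss
```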
Sinusoidal Consistency Loss: To compare the sets of sinusoids on encoding $(F_{\text{sin}}^{\theta})$ and decoding $(S_{\text{harm}})$, we need a permutation-invariant loss that can compare sets of different sizes. We take inspiration from the pitch detection literature, implementing a differentiable version of the Two-Way Mismatch (TWM) algorithm (Maher & Beauchamp, 1994).

The TWM algorithm estimates the distance between two sets of sinusoid frequencies $(f^a, f^b)$ as the frequency distance from each element of one set to its closest neighbor in the other set. To prevent local minima where one set densely tiles frequency space, the distance is calculated in both directions. This is also called the Chamfer distance in the image recognition literature (Barrow et al., 1977):

$$
D_{\text{twm}} = \sum_k \min_j \left| f_k^a - f_j^b \right| + \sum_j \min_k \left| f_k^a - f_j^b \right|. \tag{7}
$$

We approximate this procedure as a differentiable loss between two arbitrary sets of sinusoids $(A_k^a, f_k^a, A_j^b, f_j^b)$, with $K$ and $J$ sinusoids respectively, by creating a Gaussian kernel density estimate (KDE) of $p(f_k^a \mid A^b, f^b)$ and $p(f_j^b \mid A^a, f^a)$:
$$
p(f_k^a \mid A^b, f^b) = \sum_j \frac{A_j^b}{\sigma \sqrt{2\pi}} \exp\!\left( \frac{-(f_k^a - f_j^b)^2}{2\sigma^2} \right), \tag{8}
$$

where $A_j$ are the frame-wise normalized amplitudes, and $f_j$ are the component frequencies in units of semitones (logarithmically spaced). The standard deviation of the KDE Gaussians, $\sigma$, is a hyperparameter we set to 0.1 semitones.

We then obtain the loss as a weighted average of the two-way negative log-likelihood:
$$
\mathcal{L}_{\text{sin}} = -\sum_k A_k^a \log p(f_k^a \mid A^b, f^b) - \sum_j A_j^b \log p(f_j^b \mid A^a, f^a) + \left\| \overline{A^a} - \overline{A^b} \right\|_1, \tag{9}
$$

where the third term keeps amplitudes bounded by matching their average value $\bar{A}$ in each frame. An example pair of distributions is shown in Figure 3.
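A single-frame NumPy sketch of the two-way KDE loss of Eqs. (8)-(9) follows; the function name is ours, and we assume frequencies are already in semitones and amplitudes already frame-normalized.

```python
import numpy as np

def sinusoidal_consistency_loss(amps_a, freqs_a, amps_b, freqs_b, sigma=0.1):
    """Two-way KDE consistency loss for one frame (a sketch of Eqs. 8-9)."""
    def neg_log_likelihood(f_query, a_query, f_kde, a_kde):
        # p(f_query | KDE built from the other set), Eq. (8).
        diff = f_query[:, None] - f_kde[None, :]
        kernel = np.exp(-diff**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        p = (a_kde[None, :] * kernel).sum(axis=1)
        return -(a_query * np.log(p + 1e-8)).sum()

    loss = neg_log_likelihood(freqs_a, amps_a, freqs_b, amps_b)
    loss += neg_log_likelihood(freqs_b, amps_b, freqs_a, amps_a)
    loss += np.abs(amps_a.mean() - amps_b.mean())   # amplitude-matching term
    return loss
```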
Figure 3. Sinusoidal consistency loss. Similarity between two sets of sinusoids is measured by a two-way Gaussian kernel density estimate. Stems represent amplitudes of the sinusoids, and curves represent the conditional probability distributions of the sinusoids in one plot given the sinusoids in the other plot. The TWM loss is minimized when the stems are at the peaks of the Gaussians in both plots.
TWM Heuristic: We can also use the sinusoidal consistency loss, like the original TWM algorithm, as a baseline heuristic for pitch extraction from sinusoids. In this modification, in each time frame we consider every sinusoid $f_k$ as a potential candidate for the fundamental frequency $f_0$ and build a series of harmonics off each candidate. We then calculate $\mathcal{L}_{\text{sin}}$ for each series of harmonics against the original set of sinusoids and take the candidate with the minimum loss.
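A sketch of this candidate search, reusing the `sinusoidal_consistency_loss` sketch above, is shown below; the flat harmonic template, the number of harmonics, and the semitone conversion reference are our assumptions.

```python
import numpy as np

def hz_to_semitones(f_hz, ref_hz=20.0):
    return 12.0 * np.log2(f_hz / ref_hz)

def twm_pitch_candidate(amps, freqs_hz, n_harmonics=20):
    """Treat each sinusoid as an f0 candidate, build its harmonic series,
    and keep the candidate whose series best matches the observed
    sinusoids under the consistency loss (a sketch of the TWM heuristic)."""
    losses = []
    for f0 in freqs_hz:
        harm_hz = f0 * np.arange(1, n_harmonics + 1)
        harm_amps = np.full(n_harmonics, 1.0 / n_harmonics)  # flat template
        losses.append(sinusoidal_consistency_loss(
            amps, hz_to_semitones(freqs_hz),
            harm_amps, hz_to_semitones(harm_hz)))
    return freqs_hz[int(np.argmin(losses))]
```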
Self-Supervised Loss on Synthetic Data: To learn the correct scene decompositions, we found self-supervision with synthetic data to be an essential addition to reconstruction losses on real data. A bad minimum exists where fairly good reconstructions are possible by predicting an extremely low fundamental frequency and selectively activating only a few harmonics. This is equivalent to the network learning an STFT representation of the audio, where it chooses the tight linear spacing between frequency bins.

Self-supervised training overcomes this by imposing an implicit prior on the synthesizer parameters. Similar to domain randomization (Tobin et al., 2017), we find that the diversity of the synthetic data is more important than its realism. In our case, we generate notes with variable length, fundamental frequency ($f_0$), amplitude ($A$), harmonic distribution ($c$), and noise magnitudes ($N$). We also add random pitch modulation and noise to all parameters to increase data diversity. Examples are shown in Supplement Figure 4 alongside further details. The self-supervised loss is computed between the true parameters and those estimated from the synthetic audio (denoted by a hat):
$$
\mathcal{L}_{\text{ss}} = \left\| f_0 - \widehat{f}_0 \right\|_1 + \alpha_A \left\| A - \widehat{A} \right\|_1 + \alpha_c \left\| c - \widehat{c} \right\|_1 + \alpha_N \left\| N - \widehat{N} \right\|_1 + \alpha_{\text{sin}} \mathcal{L}_{\text{sin}}\left( A_k, f_k, \widehat{A}_k, \widehat{f}_k \right), \tag{10}
$$

where $\alpha_A$, $\alpha_c$, $\alpha_N$, and $\alpha_{\text{sin}}$ are loss weights set to 10, 100, 100, and 0.1 respectively, to empirically match the order of magnitude of the other losses.
§ 4. EXPERIMENTS

§ 4.1. DATASETS

We use the following common pitch detection benchmarks in our experiments. We resample all audio to 16 kHz, create 4-second training examples, and randomly partition an 80-20 train-test split.
MIR-1K: Hsu & Jang (2009) contains 1,000 clips of people singing Chinese pop songs. Accompaniment music was recorded on the left channel and singing on the right. For our experiments, we used only the singing audio. The dataset includes manual annotations for pitch contours.

MDB-stem-synth: Salamon et al. (2017) contains solo recordings of a variety of instruments that were analyzed with pitch tracking techniques and then resynthesized to ensure fully accurate pitch annotations.

URMP: Li et al. (2018a) contains recordings of pieces played by small orchestral ensembles. Each instrument for a given piece was recorded in isolation and then later mixed together with the other instruments for the final track. We used only the isolated recordings.
| Raw Pitch Accuracy | MIR-1K | MDB-stem | URMP |
|---|---|---|---|
| **Supervised** | | | |
| SWIPE | 86.6 | 90.7 | - |
| CREPE | 90.1 | 92.7 | 92.2 |
| **Self-Supervised** | | | |
| SPICE | 90.6 | 89.1 | - |
| DDSP-inv (this work) | 91.8 | 88.5 | 91.0 |
Table 1. Raw pitch detection accuracy. Across a range of instrumental and vocal datasets, DDSP-inv is competitive with SOTA supervised and self-supervised discriminative methods, while also parsing the audio into an interpretable hierarchy of features.
§ 4.2. TRAINING PROCEDURE

We find training is more stable with a curriculum of first pretraining on synthetic data (~1M steps) and then fine-tuning on batches of mixed synthetic and real data (~100k steps). We use the ADAM optimizer with a batch size of 64, a learning rate of 3e-4, and an exponential learning rate decay of 0.98 every 10,000 steps (Kingma & Ba, 2015). We also find it helpful to stop direct gradient flow from the harmonic encoder back to the sinusoidal encoder. Note that the two levels are still implicitly connected during training via the sinusoidal consistency loss.
§ 4.3. METRICS

We evaluate all models with the standard metrics of Raw Pitch Accuracy (RPA) and Raw Chroma Accuracy (RCA) (Poliner et al., 2007). RPA measures the percentage of voiced frames in which the estimated pitch is within half a semitone of the ground truth pitch. Voiced regions are taken to be frames where the ground truth pitch frequency is greater than 0. RCA is similar to RPA but does not penalize octave errors: a frame is accurate if the predicted pitch is within half a semitone of the ground truth multiplied by any power of 2. Both metrics are computed using the mir_eval Python library (Raffel et al., 2014).
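For illustration, RPA and RCA can be computed with mir_eval as sketched below; the pitch tracks here are hypothetical stand-ins for real model outputs and annotations.

```python
import numpy as np
import mir_eval

# Hypothetical per-frame pitch tracks in Hz; unvoiced frames carry 0 Hz.
times = np.arange(100) * 0.032          # 32 ms frames
ref_freq = np.full(100, 440.0)          # ground truth annotation
est_freq = np.full(100, 442.0)          # model prediction

ref_v, ref_c, est_v, est_c = mir_eval.melody.to_cent_voicing(
    times, ref_freq, times, est_freq)
rpa = mir_eval.melody.raw_pitch_accuracy(ref_v, ref_c, est_v, est_c)   # 50-cent tolerance
rca = mir_eval.melody.raw_chroma_accuracy(ref_v, ref_c, est_v, est_c)  # octave-invariant
```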
§ 4.4. RESULTS

Table 1 compares SOTA pitch detection methods, both supervised and self-supervised. DDSP-inv outperforms even the supervised models on the singing data of MIR-1K and is comparable to other self-supervised methods on the other datasets. Note that while the other models are specifically trained to detect pitch, DDSP-inv implicitly learns to detect pitch within a hierarchy of interpretable features.

Table 2 shows the contributions of the harmonic model and real data to model performance. Using the predicted pitch of the harmonic model significantly improves accuracy over the baseline of the Two-Way Mismatch (TWM) heuristic on predicted sinusoids. It also dramatically reduces the amount
| RPA (RCA) | MIR-1K | MDB-stem | URMP |
|---|---|---|---|
| **Synthetic Data** | | | |
| TWM | 65.0 (78.6) | 45.6 (75.4) | 50.1 (78.8) |
| DDSP-inv | 77.3 (78.7) | 86.9 (87.1) | 65.3 (69.0) |
| **Synthetic & Real** | | | |
| TWM | 67.2 (86.8) | 60.5 (80.5) | 77.0 (89.7) |
| DDSP-inv | 91.8 (92.0) | 88.5 (89.6) | 91.0 (91.8) |
Table 2. Comparison of pitch detection using $f_0$ from the harmonic encoder ($F_{\text{harm}}^{\theta}$, DDSP-inv) versus $f_0$ from the sinusoidal encoder ($F_{\text{sin}}^{\theta}$) with the TWM heuristic. The harmonic model improves accuracy and reduces octave errors, as shown by the reduced gap between RPA and RCA. Real data improves performance, but synthetic data alone is surprisingly effective for some datasets.
of octave errors, as shown by the reduced gap between RPA and RCA. While adding real data makes performance competitive with SOTA, the model achieves fairly good accuracy with synthetic data alone, especially on the MDB-stem-synth dataset.
§ 5. CONCLUSION AND FUTURE WORK

We have presented an interpretable hierarchical model of audio that disentangles timbre and pitch through self-supervised inversion of audio synthesis. We believe this forms a promising foundation for learning higher levels of structure, such as discrete tokens, and extensions to more complicated audio scenes, including polyphonic audio with multiple sources.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/SR2L__h9q9p/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Investigating Self-supervised Pre-training for End-to-end Speech Translation

Ha Nguyen ${}^{12}$ Fethi Bougares ${}^{3}$ Natalia Tomashenko ${}^{2}$ Yannick Estève ${}^{2}$ Laurent Besacier ${}^{1}$
## Abstract

Self-supervised learning from raw speech has been proven beneficial for improving automatic speech recognition (ASR). We investigate here its impact on end-to-end automatic speech translation (AST) performance. We use a contrastive predictive coding (CPC) model pre-trained from unlabeled speech as a feature extractor for a downstream AST task. We show that self-supervised pre-training is particularly efficient in low resource settings and that fine-tuning CPC models on the AST training data further improves performance. Even in higher resource settings, ensembling AST models trained with filter-bank and CPC representations leads to near state-of-the-art models without using any ASR pre-training. This might be particularly beneficial when one needs to develop a system that translates from speech in a language with poorly standardized orthography or even from speech in an unwritten language.
## 1. Introduction

Self-supervised learning using huge amounts of unlabeled data has been explored with very promising results for image processing (Chen et al., 2020) and natural language processing (Devlin et al., 2018). Recent works have investigated self-supervised representation learning from speech (Baevski et al., 2019; Kawakami et al., 2020; Chung & Glass, 2019) and succeeded in improving performance on downstream tasks such as speech recognition. These works suggest that it is possible to reduce dependence on labeled data for building speech systems through acoustic representation learning. We investigate the possibility of leveraging unlabeled speech for end-to-end automatic speech translation (AST). We focus on scenarios where (a) recordings in the source language are not transcribed${}^{1}$ (no ASR pre-training is possible), (b) only a small-to-medium amount of training data (speech aligned to translations) is available, and (c) a larger amount of unlabeled speech can be used. This scenario is typical of situations where one builds a system that translates from speech in a language with poorly standardized orthography or even from an unwritten language.

In summary, our contributions are: (1) we propose an in-depth study of the impact of self-supervised pre-training for AST; (2) we show that fine-tuning pre-trained representations on the AST training data is beneficial and that self-supervised pre-training is particularly efficient in low resource settings; (3) even in high resource settings, ensembling models trained with filter-bank and self-supervised representations leads to near state-of-the-art models without using ASR pre-training; (4) we analyze the learnt representations and show that they better discriminate phones, better align source and target sequences, and are more robust to speaker variability.
## 2. Related Works

### 2.1. Self-supervised learning from speech

Self-supervised learning from speech consists in resolving pseudo-tasks that do not require human annotations as a pre-training for the real tasks to solve. These pseudo-tasks target predicting future samples or solving ordering problems. Autoregressive predictive coding (APC) (Chung et al., 2019; Chung & Glass, 2020) considers the sequential structure of speech and predicts information about a future frame. An easier learning objective is introduced in contrastive predictive coding (CPC), which consists in distinguishing a true future audio frame from negatives (Baevski et al., 2019; Schneider et al., 2019; Kahn et al., 2019). Chung & Glass (2019) show that such representations are useful for improving several speech tasks, while Kawakami et al. (2020) extend those works by examining the representations' robustness to domain and language shifts. In the same vein, Rivière et al. (2020) compare self-supervised and supervised pre-training for ASR and show that CPC pre-training extracts features that transfer well to other languages, being on par with or even outperforming supervised pre-training. Another promising direction is to use speech enhancement as a task for feature representation learning (Ravanelli et al., 2020; Engel et al., 2020). Finally, several self-supervised tasks can be tackled jointly to discover better speech representations (Pascual et al., 2019).
---

${}^{1}$ LIG - Université Grenoble Alpes, France ${}^{2}$ LIA - Avignon Université, France ${}^{3}$ LIUM - Le Mans Université, France. Correspondence to: Ha Nguyen <manh-ha.nguyen@univ-grenoble-alpes.fr>.

Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{th}}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).

${}^{1}$ Transcription not available or language poorly written.

---
### 2.2. End-to-end Automatic Speech Translation

Traditional automatic speech-to-text translation (AST) systems operate in two steps: source language automatic speech recognition (ASR) followed by source-to-target text machine translation (MT). However, recent works have attempted to build end-to-end AST without using source language transcription during learning or decoding (Bérard et al., 2016; Weiss et al., 2017), or using it at training time only (Bérard et al., 2018). Several extensions of these pioneering works were introduced recently: low resource AST (Bansal et al., 2018), unsupervised AST (Chung et al., 2018), end-to-end speech-to-speech translation (Translatotron) (Jia et al., 2019), and multilingual AST (Di Gangi et al., 2019). Improvements to end-to-end AST were also proposed using weakly supervised data (Jia et al., 2018) or by adding a second attention mechanism (Sperber et al., 2019). While supervised pre-training for AST has been investigated (see for instance Bérard et al. (2018)), we are aware of a single research group (Chung & Glass, 2019; 2020) that investigated self-supervised pre-training for AST. However, their experiments were done in a high resource setting, and AST (for which only marginal gains were displayed) was investigated only among other tasks, without an in-depth analysis of the learnt representations.
## 3. Self-supervised Pre-training from Speech

### 3.1. Contrastive predictive coding model

We use the self-supervised pre-training model introduced in (Schneider et al., 2019) (wav2vec), which is based on contrastive predictive coding. The model uses (1) an encoder network that converts the audio signal into a latent representation (from raw speech samples $x$ into a feature representation $z$), and (2) a context network that aggregates multiple time steps to build contextualized representations (from a sequence ${z}_{i-v},\ldots,{z}_{i}$ into a context vector ${c}_{i}$).${}^{2}$ The full model (encoder + context) is trained end-to-end to distinguish a sample ${z}_{i+k}$ that is $k$ steps in the future from negative samples $\widetilde{z}$ uniformly drawn from the same audio sequence. A contrastive loss is minimized for each step $k = 1,\ldots,K$ and the overall loss is summed over the different step sizes (more details in (Schneider et al., 2019)).
Table 1. Statistics of different How2 data partitions

<table><tr><td>Partition</td><td>#segments</td><td>#hours</td><td>#src_w</td><td>#tgt_w</td></tr><tr><td>10%</td><td>17,751</td><td>28</td><td>313K</td><td>295K</td></tr><tr><td>20%</td><td>35,858</td><td>56</td><td>626K</td><td>591K</td></tr><tr><td>30%</td><td>53,698</td><td>84</td><td>887K</td><td>940K</td></tr><tr><td>60%</td><td>107,676</td><td>169</td><td>1778K</td><td>1883K</td></tr><tr><td>full</td><td>179,438</td><td>281</td><td>2963K</td><td>3139K</td></tr></table>
### 3.2. Pre-trained models for English

We use an off-the-shelf model provided for English.${}^{3}$ It is trained on the Librispeech corpus (Panayotov et al., 2015). We also investigate whether fine-tuning the model on our task-specific data is beneficial. For this, we fine-tune wav2vec on the full speech corpora used for our AST experiments (see next section). It is important to note that neither transcripts nor translations are needed for this step, which requires only raw speech. After fine-tuning wav2vec, we input the representations produced by the context network ${c}_{i}$ to the AST encoder instead of filter-bank features (see Figure 1).
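For concreteness, the snippet below sketches how such context features can be extracted with the fairseq toolkit, following the usage example distributed with the wav2vec checkpoints of that period; the checkpoint filename is an assumption and the exact API may differ across fairseq versions.

```python
import torch
from fairseq.models.wav2vec import Wav2VecModel

# Load an off-the-shelf checkpoint (filename assumed).
cp = torch.load('wav2vec_large.pt', map_location='cpu')
model = Wav2VecModel.build_model(cp['args'], task=None)
model.load_state_dict(cp['model'])
model.eval()

wav = torch.randn(1, 16000)            # one second of 16 kHz audio
with torch.no_grad():
    z = model.feature_extractor(wav)   # encoder network output
    c = model.feature_aggregator(z)    # 512-dim context vectors c_i
```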
## 4. End-to-end Speech Translation Experiments

### 4.1. Experimental setup

#### 4.1.1. DATA

The How2 corpus (Sanabria et al., 2018) is used for our main experiments. This corpus contains about 297.6 hours of speech, transcribed and translated into 3.3 million English words and 3.1 million Portuguese words respectively.${}^{4}$ From this version of the data, we first filter out overly long sentences (longer than 30 seconds or 400 characters). Then, in order to simulate lower resource scenarios, we randomly split the corpus into four sub-corpora of roughly 10%, 20%, 30%, and 60% of the filtered full corpus. Our splits guarantee that smaller partitions are fully included in the bigger ones. The statistics of all partitions and of the filtered full corpus can be found in Table 1.
#### 4.1.2. SPEECH FEATURES AND DATA AUGMENTATION

As shown in Figure 1, we extract either wav2vec features or filter-bank+pitch features (later denoted as fbanks) from the speech input.${}^{5}$ Depending on the experiment, mean and variance normalization (MVN) is optionally applied to the generated features. For wav2vec feature extraction, we either use an off-the-shelf model trained on LibriSpeech (Panayotov et al., 2015) or a model fine-tuned on the How2 training set. MVN parameters are estimated on the speech translation training set and then applied to all train/dev/test sets. Overall, we have 4 different self-supervised representations named wav2vec, wav2vec + norm, wav2vec + FT (fine-tuned wav2vec) and wav2vec + FT + norm. All these wav2vec features are of dimension 512. We compare the above representations to conventional filter-bank features. Similar to (Nguyen et al., 2019), we extract 80-dimensional Mel filter-bank features, concatenated with 3-dimensional pitch features, from windows of 25 ms with a frame shift of 10 ms. MVN is used in the same manner as for the wav2vec features. This gives us 2 additional speech representations named fbanks and fbanks + norm respectively (their dimension is 83).${}^{6}$ Data augmentation through speed perturbation is also applied to the training data with factors of 0.9, 1.0, and 1.1. Our development set is made of 1,984 sentences randomly excluded from the training set. The How2 val set is used as our test data.
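As a small illustration, MVN as described here can be sketched as follows; the function names are ours, and the key point is that statistics come from the training set only.

```python
import numpy as np

def estimate_mvn(train_features):
    """Estimate mean/variance normalization statistics on the AST training
    set; the same statistics are then applied to dev and test."""
    stacked = np.concatenate(train_features, axis=0)   # [total_frames, dim]
    return stacked.mean(axis=0), stacked.std(axis=0)

def apply_mvn(features, mean, std, eps=1e-8):
    return (features - mean) / (std + eps)
```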
---

${}^{3}$ https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/

${}^{4}$ As shown by (Nguyen et al., 2019), How2 is sensitive to the downloading moment. Our version was downloaded in July, 2019.

${}^{5}$ Our preliminary experiments on How2 10% with MFCC features, which lead to similar performance as filter-bank, are not

${}^{2}$ Practically, each ${z}_{i}$ encodes 30 ms of speech every 10 ms. As for ${c}_{i}$, the total receptive field of the context network is 210 ms.

---
### 4.2. Speech-to-text translation model

#### 4.2.1. ARCHITECTURE

We use an attention-based encoder-decoder architecture, whose encoder is illustrated in Figure 1. The encoder is a stack of two VGG-like (Simonyan & Zisserman, 2015) CNN blocks followed by five 1024-dimensional BLSTM layers. Each VGG block contains two 2D-convolution layers followed by a 2D-maxpooling layer, which aims to reduce both the time (T) and frequency (D) dimensions of the input speech features by a factor of 2. These two VGG blocks transform the input speech features' shape from $(T \times D)$ to $(T/4 \times D/4)$. Bahdanau's attention mechanism (Bahdanau et al., 2015) is used in all our experiments. The decoder is a stack of two 1024-dimensional LSTM layers. As proven effective in (Nguyen et al., 2019), this model is used consistently for all the experiments with fbanks features presented throughout this paper. However, wav2vec features have a higher dimension (512) than fbanks (83). In order to compare both input representations with a similar parameter budget in the architecture (and also because training an architecture with input features of dimension 512 would be substantially more computationally expensive), we add a projection block at the bottom of the encoder.${}^{7}$ This block (a linear layer followed by a ReLU) reduces the wav2vec feature size from 512 to 83 (see Figure 1).
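A minimal PyTorch sketch of this projection block is shown below; the class name is ours, and the actual encoder integrates it within an ESPnet recipe.

```python
import torch.nn as nn

class Wav2VecProjection(nn.Module):
    """Linear + ReLU block reducing 512-dim wav2vec features to 83 dims,
    so the encoder sees the same input size as fbanks+pitch (a sketch)."""
    def __init__(self, in_dim=512, out_dim=83):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, x):          # x: [batch, time, 512]
        return self.proj(x)        # -> [batch, time, 83]
```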
Figure 1. Architecture of the speech encoder: a stack of two VGG blocks followed by 5 BLSTM layers. We use as input (1) wav2vec features (that pass through an additional projection layer to reduce their dimension from 512 to 83), or (2) filter-bank+pitch features. The input features are optionally normalized (MVN).
#### 4.2.2. HYPERPARAMETER DETAILS

Models are trained for a maximum of 20 epochs, with early stopping after 3 epochs if the accuracy on the dev set does not improve. Adadelta is chosen as the optimizer and dropout is set to 0.3 on the encoder side. We decode all our models with a beam size of 10.
### 4.3. Experimental results on How2

On each partition of the How2 corpus, we train 6 models which take as input the different speech representations presented in Section 4.1.2, for a total of 30 models, shown in Table 2. We evaluate on the How2 val set, which contains 2,022 segments (about 3.2 hours of speech), in the same conditions as (Nguyen et al., 2019). It is clear from the table that in low resource settings (28 and 56 hours), self-supervised representations (wav2vec) significantly outperform fbanks. Figure 2a confirms this and shows that models trained with wav2vec representations converge better and faster. The impact of normalization and fine-tuning is also notable from both Table 2 and Figure 2a. In very low resource settings (like 28 hours), fine-tuning wav2vec can help greatly, and with normalization the performance improves further. In higher resource settings (169 and 281 hours of translated speech), the differences between wav2vec and fbanks fade away (and so does the impact of fine-tuning and normalization). However, our ensembling experiments of lines 7 and 8 on 100% of How2 show that it is beneficial to ensemble the best system (fbanks + norm, line 6) with a system trained with wav2vec (wav2vec + FT + norm, line 4) rather than with a better model (fbanks, line 5) that is also based on filter-bank features, even though wav2vec + FT + norm underperforms fbanks on this partition. Ensembling all our models (line 9) leads to BLEU > 30 even in very low resource training conditions (56 hours). Finally, in order to compare ourselves with the state of the art (Inaguma et al., 2020), we decode How2 dev5 (a.k.a. How2 test), which consists of 2,305 segments (about 3.7 hours of speech), using the ensemble of all our models trained on the full corpus (line 9). This gives us near state-of-the-art BLEU: we obtain 46.16 on How2 val and 47.17 on How2 dev5. This latter score on dev5 is to be compared with the 48.04 reported for an ensemble model in (Inaguma et al., 2020), where ASR and MT pre-training were used, as well as data augmentation with SpecAugment.
---

presented here.

${}^{6}$ For the rest of the paper, fbanks will actually mean filter-bank+pitch.

${}^{7}$ Our implementation of the wav2vec speech encoder, as well as the detailed recipes for our experiments, can be found online: https://github.com/mhn226/espnet/tree/interspeech2020.

---
Table 2. Detokenized case-sensitive BLEU scores measured on the How2 val set for models trained on different partitions of the How2 corpus (EN-PT) with different speech features. FT means fine-tuned and norm stands for MVN normalization.

<table><tr><td>No.</td><td>Feature</td><td>10% (28h)</td><td>20% (56h)</td><td>30% (84h)</td><td>60% (169h)</td><td>100% (281h)</td></tr><tr><td>1</td><td>wav2vec</td><td>11.33</td><td>26.75</td><td>30.83</td><td>36.33</td><td>41.02</td></tr><tr><td>2</td><td>wav2vec + FT</td><td>12.52</td><td>27.30</td><td>32.11</td><td>37.78</td><td>42.32</td></tr><tr><td>3</td><td>wav2vec + norm</td><td>16.52</td><td>27.33</td><td>31.27</td><td>37.62</td><td>41.08</td></tr><tr><td>4</td><td>wav2vec + FT + norm</td><td>18.50</td><td>27.68</td><td>32.17</td><td>37.75</td><td>41.30</td></tr><tr><td>5</td><td>fbanks</td><td>1.03</td><td>18.61</td><td>27.32</td><td>37.23</td><td>41.63</td></tr><tr><td>6</td><td>fbanks + norm</td><td>2.11</td><td>24.58</td><td>30.21</td><td>37.56</td><td>42.51</td></tr><tr><td>7</td><td>Ensemble [5, 6]</td><td></td><td>25.28</td><td>31.90</td><td>40.39</td><td>44.35</td></tr><tr><td>8</td><td>Ensemble [4, 6]</td><td></td><td>29.87</td><td>34.67</td><td>41.22</td><td>45.02</td></tr><tr><td>9</td><td>Ensemble [1,2,3,4,5,6]</td><td></td><td>31.88</td><td>36.80</td><td>42.62</td><td>46.16</td></tr></table>
Figure 2. Learning curves (accuracy) of models trained on different partitions of How2.
### 4.4. Validation on two other language pairs

To validate our results in low resource settings (56 hours), we train our models on two subsets of the MuST-C (Di Gangi et al., 2019) English-to-German and English-to-French training data (56 hours each, a training size similar to How2 20%). As illustrated by Table 3, MuST-C is more challenging than How2 (as confirmed by the official IWSLT 2019 evaluation results (Niehues et al., 2019)), but for both language pairs wav2vec significantly outperforms fbanks. This confirms that self-supervised pre-training is useful in low resource scenarios.
## 5. Analysis of Learnt Representations

This section tries to answer the question of why wav2vec representations perform better than filter-bank features in low resource settings. The following subsections present experiments which show that wav2vec might be (1) better at discriminating phones, (2) better at aligning source and target sequences, and (3) more robust to speaker variability.

Table 3. AST BLEU on MuST-C 56h for EN-DE and EN-FR.

<table><tr><td>Lang</td><td>Features</td><td>tst-COMMON</td><td>tst-HE</td></tr><tr><td rowspan="4">EN-DE</td><td>wav2vec</td><td>7.56</td><td>7.21</td></tr><tr><td>wav2vec+norm</td><td>7.83</td><td>8.12</td></tr><tr><td>fbanks</td><td>1.50</td><td>1.09</td></tr><tr><td>fbanks+norm</td><td>4.89</td><td>4.87</td></tr><tr><td rowspan="4">EN-FR</td><td>wav2vec</td><td>12.08</td><td>12.41</td></tr><tr><td>wav2vec+norm</td><td>12.58</td><td>12.58</td></tr><tr><td>fbanks</td><td>0.54</td><td>0.00</td></tr><tr><td>fbanks+norm</td><td>7.10</td><td>6.37</td></tr></table>
Table 4. Phone error rate (PER %) on the TIMIT dev and test sets.

<table><tr><td>No.</td><td>Feature</td><td>TIMIT dev</td><td>TIMIT test</td></tr><tr><td>1</td><td>wav2vec</td><td>13.0</td><td>15.0</td></tr><tr><td>2</td><td>wav2vec + norm</td><td>13.9</td><td>15.8</td></tr><tr><td>3</td><td>fbanks</td><td>22.2</td><td>24.9</td></tr><tr><td>4</td><td>fbanks + norm</td><td>20.7</td><td>23.5</td></tr></table>
### 5.1. Better phone discrimination

We first replicate an experiment from (Schneider et al., 2019) on phoneme recognition with TIMIT (Garofolo et al., 1993). Speech representations are extracted from the train, dev and test splits of TIMIT. A simple attentional encoder-decoder model is used: an encoder with 4 BLSTM layers of hidden size 320, and a decoder with 1 LSTM layer and location-based attention (Luong et al., 2015). The results in Table 4 confirm that wav2vec representations (normalized or not) are much better at recognizing phones than fbanks.
### 5.2. Better source-target alignments

We evaluate the entropies of the soft alignments obtained with different speech representations in teacher forcing mode. Letting ${\alpha }_{tj}$ be the alignment score between target token ${y}_{t}$ and source speech frame ${x}_{j}$, we evaluate the entropy of the probability distribution ${\alpha }_{t}$, given by ${H}_{t} = -\sum_{j=1}^{\left| x\right|} {\alpha }_{tj}\log {\alpha }_{tj}$, for every target token. This measure is then averaged over all tokens at the corpus level (How2 10%). A low entropy means the attention mechanism is confident in its source-target alignments (see the example in Figure 3). Table 5 clearly shows that, in our low resource setting, wav2vec leads to better alignments (lower entropy) than fbanks. Fine-tuning and normalization of the self-supervised representations also improve the soft alignments.
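The metric itself is straightforward; a NumPy sketch (function name ours) follows.

```python
import numpy as np

def alignment_entropy(alpha):
    """Average entropy of soft alignments. alpha has shape
    [target_len, source_len] and each row sums to 1."""
    h = -(alpha * np.log(alpha + 1e-8)).sum(axis=1)   # H_t per target token
    return h.mean()                                   # averaged over tokens
```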
Table 5. Averaged entropies of soft alignments on the How2 dev and val sets. AST models trained on the 10% partition of How2.

<table><tr><td>No.</td><td>Feature</td><td>How2 dev</td><td>How2 val</td></tr><tr><td>1</td><td>wav2vec</td><td>0.66</td><td>0.66</td></tr><tr><td>2</td><td>wav2vec + FT</td><td>0.65</td><td>0.65</td></tr><tr><td>3</td><td>wav2vec + norm</td><td>0.57</td><td>0.57</td></tr><tr><td>4</td><td>wav2vec + FT + norm</td><td>0.51</td><td>0.51</td></tr><tr><td>5</td><td>fbanks</td><td>0.89</td><td>0.90</td></tr><tr><td>6</td><td>fbanks + norm</td><td>0.93</td><td>0.93</td></tr></table>
Figure 3. Soft alignments between source speech features and target text for the sentence "A outra pessoa perde."
### 5.3. Better robustness to speaker variability

Table 6. Equal error rate (EER %) on the VoxCeleb1 test and LibriSpeech test sets for female (f) and male (m) speakers.

<table><tr><td>No.</td><td>Feature</td><td>VoxCeleb</td><td>Libri (f)</td><td>Libri (m)</td></tr><tr><td>1</td><td>wav2vec</td><td>22.75</td><td>11.22</td><td>2.23</td></tr><tr><td>2</td><td>wav2vec + norm</td><td>20.93</td><td>10.54</td><td>1.79</td></tr><tr><td>3</td><td>fbanks</td><td>15.78</td><td>5.47</td><td>0.89</td></tr><tr><td>4</td><td>fbanks + norm</td><td>16.25</td><td>3.47</td><td>0.67</td></tr></table>

To investigate robustness to speaker variability, we trained several automatic speaker verification (ASV) systems using wav2vec or fbanks features. Models are trained on the LibriSpeech train-clean-360 dataset (Panayotov et al., 2015) using Kaldi (Povey et al., 2011). The ASV systems are based on x-vectors and probabilistic linear discriminant analysis (PLDA) (Snyder et al., 2018). To extract x-vectors, we used a time delay neural network (TDNN) model topology similar to the one described in (Snyder et al., 2018). Input features are fbanks or wav2vec (optionally normalized), while the output corresponds to the 921 speakers of the training corpus. ASV experiments are conducted on the VoxCeleb1 test (Nagrani et al., 2017) and LibriSpeech test-clean (Panayotov et al., 2015) sets.${}^{8}$ ASV results (equal error rate, EER) are presented in Table 6. We observe that in all experiments, models trained on wav2vec features yield significantly higher EER than those trained on fbanks. This supports our hypothesis that wav2vec representations remove speaker information from the speech signal.${}^{9}$
## 6. Conclusion

We investigated the impact of self-supervised learning on end-to-end AST. We showed that representations based on contrastive predictive coding (CPC) improve results significantly compared to baseline filter-banks in low-to-medium resource conditions (train < 100h). Our explanation is that self-supervised representations exhibit better phone discrimination, better source-target alignments, and better robustness to speaker variability.
## References

Baevski, A., Auli, M., and Mohamed, A. Effectiveness of self-supervised pre-training for speech recognition, 2019.

Bahdanau, D., Cho, K., and Bengio, Y. Neural Machine Translation by Jointly Learning to Align and Translate. In Proc. of ICLR, 2015.

Bansal, S., Kamper, H., Livescu, K., Lopez, A., and Goldwater, S. Pre-training on high-resource speech recognition improves low-resource speech-to-text translation. CoRR, abs/1809.01431, 2018. URL http://arxiv.org/abs/1809.01431.

Bérard, A., Pietquin, O., Servan, C., and Besacier, L. Listen and translate: A proof of concept for end-to-end speech-to-text translation. In NIPS Workshop on End-to-end Learning for Speech and Audio Processing, 2016.

Bérard, A., Besacier, L., Kocabiyikoglu, A. C., and Pietquin, O. End-to-end automatic speech translation of audiobooks. CoRR, abs/1802.04200, 2018. URL http://arxiv.org/abs/1802.04200.

Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations, 2020.

Chung, Y., Weng, W., Tong, S., and Glass, J. Towards unsupervised speech-to-text translation. CoRR, abs/1811.01307, 2018. URL http://arxiv.org/abs/1811.01307.

Chung, Y., Hsu, W., Tang, H., and Glass, J. R. An unsupervised autoregressive model for speech representation learning. CoRR, abs/1904.03240, 2019. URL http://arxiv.org/abs/1904.03240.

Chung, Y.-A. and Glass, J. Generative pre-training for speech with autoregressive predictive coding, 2019.

Chung, Y.-A. and Glass, J. Improved speech representations with multi-target autoregressive predictive coding, 2020.

Devlin, J., Chang, M., Lee, K., and Toutanova, K. BERT: pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.

Di Gangi, M. A., Cattoni, R., Bentivogli, L., Negri, M., and Turchi, M. MuST-C: a Multilingual Speech Translation Corpus. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2012-2017, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1202. URL https://www.aclweb.org/anthology/N19-1202.

Engel, J., Hantrakul, L., Gu, C., and Roberts, A. DDSP: Differentiable digital signal processing, 2020.

Garofolo, J. S., Lamel, L. F., Fisher, W. M., Fiscus, J. G., Pallett, D. S., and Dahlgren, N. L. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM, 1993.

Inaguma, H., Kiyono, S., Duh, K., Karita, S., Soplin, N. E. Y., Hayashi, T., and Watanabe, S. ESPnet-ST: All-in-one speech translation toolkit. arXiv preprint arXiv:2004.10234, 2020.

Jia, Y., Johnson, M., Macherey, W., Weiss, R. J., Cao, Y., Chiu, C., Ari, N., Laurenzo, S., and Wu, Y. Leveraging weakly supervised data to improve end-to-end speech-to-text translation. CoRR, abs/1811.02050, 2018. URL http://arxiv.org/abs/1811.02050.
---

${}^{8}$ The trial and enrollment subsets of LibriSpeech test-clean for the ASV task are described in more detail in (Tomashenko et al., 2020).

${}^{9}$ We would also expect mean and variance normalization to increase EER, but this is not the case. One explanation might be that normalization also removes channel variability and thus improves ASV.

---
Jia, Y., Weiss, R. J., Biadsy, F., Macherey, W., Johnson, M., Chen, Z., and Wu, Y. Direct speech-to-speech translation with a sequence-to-sequence model. CoRR, abs/1904.06037, 2019. URL http://arxiv.org/abs/1904.06037.

Kahn, J., Rivière, M., Zheng, W., Kharitonov, E., Xu, Q., Mazaré, P.-E., Karadayi, J., Liptchinsky, V., Collobert, R., Fuegen, C., Likhomanenko, T., Synnaeve, G., Joulin, A., Mohamed, A., and Dupoux, E. Libri-light: A benchmark for ASR with limited or no supervision, 2019.

Kawakami, K., Wang, L., Dyer, C., Blunsom, P., and van den Oord, A. Learning robust and multilingual speech representations, 2020.

Luong, N.-Q., Besacier, L., and Lecouteux, B. Towards accurate predictors of word quality for machine translation: Lessons learned on French - English and English - Spanish systems. Data and Knowledge Engineering, 2015.

Nagrani, A., Chung, J. S., and Zisserman, A. VoxCeleb: a large-scale speaker identification dataset. In Interspeech, pp. 2616-2620, 2017.

Nguyen, H., Tomashenko, N., Boito, M. Z., Caubriere, A., Bougares, F., Rouvier, M., Besacier, L., and Esteve, Y. ON-TRAC consortium end-to-end speech translation systems for the IWSLT 2019 shared task. In Proc. of IWSLT, 2019.

Niehues, J., Cattoni, R., Stüker, S., Negri, M., Turchi, M., Salesky, E., Sanabria, R., Barrault, L., Specia, L., and Federico, M. The IWSLT 2019 evaluation campaign. In Proceedings of the 16th International Workshop on Spoken Language Translation (IWSLT 2019), 2019.

Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. LibriSpeech: an ASR corpus based on public domain audio books. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206-5210, 2015.

Pascual, S., Ravanelli, M., Serrà, J., Bonafonte, A., and Bengio, Y. Learning problem-agnostic speech representations from multiple self-supervised tasks. 2019.

Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., Hannemann, M., et al. The Kaldi speech recognition toolkit. Technical report, 2011.

Ravanelli, M., Zhong, J., Pascual, S., Swietojanski, P., Monteiro, J., Trmal, J., and Bengio, Y. Multi-task self-supervised learning for robust speech recognition, 2020.

Rivière, M., Joulin, A., Mazaré, P.-E., and Dupoux, E. Unsupervised pretraining transfers well across languages, 2020.

Sanabria, R., Caglayan, O., Palaskar, S., Elliott, D., Barrault, L., Specia, L., and Metze, F. How2: a large-scale dataset for multimodal language understanding. In ViGIL Workshop, NeurIPS, 2018.

Schneider, S., Baevski, A., Collobert, R., and Auli, M. wav2vec: Unsupervised Pre-Training for Speech Recognition. In Proc. Interspeech 2019, pp. 3465-3469, 2019. doi: 10.21437/Interspeech.2019-1873. URL http://dx.doi.org/10.21437/Interspeech.2019-1873.

Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proc. of ICLR, 2015.

Snyder, D., Garcia-Romero, D., Sell, G., Povey, D., and Khudanpur, S. X-vectors: Robust DNN embeddings for speaker recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5329-5333, 2018.

Sperber, M., Neubig, G., Niehues, J., and Waibel, A. Attention-passing models for robust and data-efficient end-to-end speech translation. CoRR, abs/1904.07209, 2019. URL http://arxiv.org/abs/1904.07209.

Tomashenko, N., Srivastava, B. M. L., Wang, X., Vincent, E., Nautsch, A., Yamagishi, J., Evans, N., et al. The VoicePrivacy 2020 Challenge evaluation plan. 2020.

Weiss, R. J., Chorowski, J., Jaitly, N., Wu, Y., and Chen, Z. Sequence-to-sequence models can directly transcribe foreign speech. In Proc. of INTERSPEECH, 2017.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/SR2L__h9q9p/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ INVESTIGATING SELF-SUPERVISED PRE-TRAINING FOR END-TO-END SPEECH TRANSLATION

Ha Nguyen ${}^{12}$ Fethi Bougares ${}^{3}$ Natalia Tomashenko ${}^{2}$ Yannick Estève ${}^{2}$ Laurent Besacier ${}^{1}$
§ ABSTRACT

Self-supervised learning from raw speech has been proven beneficial for improving automatic speech recognition (ASR). We investigate here its impact on end-to-end automatic speech translation (AST) performance. We use a contrastive predictive coding (CPC) model pre-trained from unlabeled speech as a feature extractor for a downstream AST task. We show that self-supervised pre-training is particularly efficient in low resource settings and that fine-tuning CPC models on the AST training data further improves performance. Even in higher resource settings, ensembling AST models trained with filter-bank and CPC representations leads to near state-of-the-art models without using any ASR pre-training. This might be particularly beneficial when one needs to develop a system that translates from speech in a language with poorly standardized orthography or even from speech in an unwritten language.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Self-supervised learning using huge unlabeled data has been explored with very promising results for image processing (Chen et al., 2020) and natural language processing (Devlin et al., 2018). Recent works investigated self-supervised representation learning from speech (Baevski et al., 2019; Kawakami et al., 2020; Chung & Glass, 2019). They were successful to improve performance on downstream tasks such as speech recognition. These recent works suggest that it is possible to reduce dependence on labeled data for building speech systems through acoustic representation learning. We investigate the possibility to leverage unlabeled speech for end-to-end automatic speech translation (AST). We focus on scenarios where (a) recordings in source language are not transcribed ${}^{1}$ (no ASR pre-training is possible),(b) only a small-medium amount of training data (speech aligned to translations) is available, (c) a larger amount of unlabeled speech can be used. This scenario is typical of situations when one builds a system that translates from speech in a language with poorly standardized orthography or even from an unwritten language.
|
| 12 |
+
|
| 13 |
+
In summary, our contributions are: (1) we propose an in-depth study on the impact of self-supervised pre-training for AST, (2) we show that fine-tuning pre-trained representations on the AST training data is beneficial and that self-supervised pre-training is particularly efficient in low resource settings, (3) even in high resource settings, ensembling models trained with filter-bank and self-supervised representations leads to near state-of-the-art models without using ASR pre-training, (4) we analyze the learnt representations and show that they better discriminate phones, better align source and target sequences, and are more robust to speaker variability.
|
| 14 |
+
|
| 15 |
+
§ 2. RELATED WORKS
|
| 16 |
+
|
| 17 |
+
§ 2.1. SELF-SUPERVISED LEARNING FROM SPEECH
|
| 18 |
+
|
| 19 |
+
Self-supervised learning from speech consists of solving pseudo-tasks that do not require human annotations, as pre-training for the real tasks of interest. These pseudo-tasks target predicting future samples or solving ordering problems. Autoregressive predictive coding (APC) (Chung et al., 2019; Chung & Glass, 2020) considers the sequential structure of speech and predicts information about a future frame. An easier learning objective is introduced in Contrastive Predictive Coding (CPC), which consists of distinguishing a true future audio frame from negatives (Baevski et al., 2019; Schneider et al., 2019; Kahn et al., 2019). (Chung & Glass, 2019) shows that such representations are useful for improving several speech tasks, while (Kawakami et al., 2020) extends those works by looking at the representations' robustness to domain and language shifts. In the same vein, (Rivière et al., 2020) compares self-supervised and supervised pre-training for ASR and shows that CPC pre-training extracts features that transfer well to other languages, being on par with or even outperforming supervised pre-training. Another promising direction is to use speech enhancement as a task for feature representation learning (Ravanelli et al., 2020; Engel et al., 2020). Finally, several self-supervised tasks can be jointly tackled to discover better speech representations (Pascual et al., 2019).
|
| 20 |
+
|
| 21 |
+
${}^{1}$ LIG - Université Grenoble Alpes, France ${}^{2}$ LIA - Avignon Université, France ${}^{3}$ LIUM - Le Mans Université, France. Correspondence to: Ha Nguyen <manh-ha.nguyen@univ-grenoble-alpes.fr>.
|
| 22 |
+
|
| 23 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 24 |
+
|
| 25 |
+
${}^{1}$ Transcription not available or language poorly written
|
| 26 |
+
|
| 27 |
+
§ 2.2. END-TO-END AUTOMATIC SPEECH TRANSLATION
|
| 28 |
+
|
| 29 |
+
Previous automatic speech-to-text translation (AST) systems operate in two steps: source language automatic speech recognition (ASR) and source-to-target text machine translation (MT). However, recent works have attempted to build end-to-end AST without using source language transcription during learning or decoding (Bérard et al., 2016; Weiss et al., 2017) or using it at training time only (Bérard et al., 2018). Recently, several extensions of these pioneering works were introduced: low resource AST (Bansal et al., 2018), unsupervised AST (Chung et al., 2018), end-to-end speech-to-speech translation (Translatotron) (Jia et al., 2019), and multilingual AST (Di Gangi et al., 2019). Improvements of end-to-end AST were also proposed using weakly supervised data (Jia et al., 2018) or adding a second attention mechanism (Sperber et al., 2019). While supervised pre-training for AST has been investigated (see for instance (Bérard et al., 2018)), we are aware of a single research group (Chung & Glass, 2019; 2020) that investigated self-supervised pre-training for AST. However, their experiments were done in a high resource setting, and AST (for which only marginal gains were reported) was only one task among several investigated, without an in-depth analysis of the representations learnt.
|
| 30 |
+
|
| 31 |
+
§ 3. SELF-SUPERVISED PRE-TRAINING FROM SPEECH
|
| 32 |
+
|
| 33 |
+
§ 3.1. CONTRASTIVE PREDICTIVE CODING MODEL
|
| 34 |
+
|
| 35 |
+
We use the self-supervised pre-training model introduced in (Schneider et al., 2019) (wav2vec), which is based on contrastive predictive coding. The model uses (1) an encoder network that converts the audio signal into a latent representation (from raw speech samples $x$ into a feature representation $z$), and (2) a context network that aggregates multiple time steps to build contextualized representations (from a sequence ${z}_{i - v},\ldots ,{z}_{i}$ into a context vector ${c}_{i}$).${}^{2}$ The full model (encoder+context) is trained end-to-end to distinguish a sample ${z}_{i + k}$ that is $k$ steps in the future from negative samples $\widetilde{z}$ uniformly chosen from the same audio sequence. A contrastive loss is minimized for each step $k = 1,\ldots ,K$ and the overall loss is summed over the different step sizes (more details in (Schneider et al., 2019)).
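To make the objective concrete, below is a minimal PyTorch sketch of a single-step contrastive loss in this spirit (one negative per position; the `step_proj` layer, tensor shapes and all names are our own illustration, not the wav2vec implementation). The actual objective sums such losses over $k = 1,\ldots,K$ and uses several negatives per position.

```python
import torch
import torch.nn.functional as F

def cpc_step_loss(c, z, k, step_proj):
    """Contrastive loss for one prediction step k (illustrative sketch).

    c: context vectors (B, T, D) from the context network
    z: latent vectors  (B, T, D) from the encoder network
    step_proj: a step-specific nn.Linear(D, D)
    """
    B, T, D = z.shape
    pred = step_proj(c[:, : T - k])                     # predictions for z_{i+k}, (B, T-k, D)
    pos = z[:, k:]                                      # true future latents, (B, T-k, D)
    # draw one negative per position, uniformly from the same sequence
    neg_idx = torch.randint(0, T, (B, T - k))
    neg = z[torch.arange(B).unsqueeze(1), neg_idx]      # (B, T-k, D)
    pos_logit = (pred * pos).sum(-1)                    # dot-product scores
    neg_logit = (pred * neg).sum(-1)
    # binary objective: true future vs. negative sample
    return -(F.logsigmoid(pos_logit) + F.logsigmoid(-neg_logit)).mean()
```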
|
| 36 |
+
|
| 37 |
+
Table 1. Statistics of different How2 data partitions
|
| 38 |
+
|
| 39 |
+
|
| 40 |
+
|
| 41 |
+
Partition #segments #hours #src_w #tgt_w
|
| 42 |
+
|
| 43 |
+
1-5
|
| 44 |
+
10% 17,751 28 313K 295K
|
| 45 |
+
|
| 46 |
+
1-5
|
| 47 |
+
20% 35,858 56 626K 591K
|
| 48 |
+
|
| 49 |
+
1-5
|
| 50 |
+
30% 53,698 84 887K 940K
|
| 51 |
+
|
| 52 |
+
1-5
|
| 53 |
+
60% 107,676 169 1778K 1883K
|
| 54 |
+
|
| 55 |
+
1-5
|
| 56 |
+
full 179,438 281 2963K 3139K
|
| 57 |
+
|
| 58 |
+
1-5
|
| 59 |
+
|
| 60 |
+
§ 3.2. PRE-TRAINED MODELS FOR ENGLISH
|
| 61 |
+
|
| 62 |
+
We use an off-the-shelf model provided for English.${}^{3}$ It is trained on the Librispeech corpus (Panayotov et al., 2015). We also investigate whether fine-tuning the model on our task-specific data is beneficial. For this, we fine-tune wav2vec on the full speech corpora used for our AST experiments (see next section). It is important to note that neither transcripts nor translations are needed for this step, which requires only raw speech. After fine-tuning wav2vec, we input the representations produced by the context network ${c}_{i}$ to the AST encoder instead of filter-bank features (see Figure 1).
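For illustration, feature extraction with the released checkpoint followed roughly the pattern below (a sketch based on the fairseq wav2vec example of that period; exact APIs differ across fairseq versions, and the checkpoint filename and dummy input are placeholders):

```python
import torch
from fairseq.models.wav2vec import Wav2VecModel

# load the pre-trained (or fine-tuned) wav2vec checkpoint
cp = torch.load("wav2vec_large.pt", map_location="cpu")
model = Wav2VecModel.build_model(cp["args"], task=None)
model.load_state_dict(cp["model"])
model.eval()

wav = torch.randn(1, 16000)                 # one second of 16 kHz audio (placeholder)
with torch.no_grad():
    z = model.feature_extractor(wav)        # latent representations z_i
    c = model.feature_aggregator(z)         # context representations c_i (512-dim)
# the c_i frames are then fed to the AST encoder instead of filter-bank features
```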
|
| 63 |
+
|
| 64 |
+
§ 4. END-TO-END SPEECH TRANSLATION EXPERIMENTS
|
| 65 |
+
|
| 66 |
+
§ 4.1. EXPERIMENTAL SETUP
|
| 67 |
+
|
| 68 |
+
§ 4.1.1. DATA
|
| 69 |
+
|
| 70 |
+
The How2 corpus (Sanabria et al., 2018) is used for our main experiments. This corpus contains about 297.6 hours of speech, which is transcribed and translated into 3.3 million English words and 3.1 million Portuguese words, respectively. ${}^{4}$ From this version of the data, we first filter out overly long sentences (longer than 30 seconds or 400 characters). Then, in order to simulate lower resource scenarios, we randomly split the corpus into four sub-corpora of roughly ${10}\% ,{20}\% ,{30}\%$ , and ${60}\%$ of the filtered full corpus. Our splits guarantee that smaller partitions are fully included in the bigger ones. The statistics of all the partitions and of the filtered full corpus can be found in Table 1.
|
| 71 |
+
|
| 72 |
+
§ 4.1.2. SPEECH FEATURES AND DATA AUGMENTATION
|
| 73 |
+
|
| 74 |
+
As shown in Figure 1, we extract either wav2vec features or filter-bank+pitch features (later denoted as fbanks) from the speech input. ${}^{5}$ Depending on the experiments, mean and variance normalization (MVN) is optionally applied to the generated features. For wav2vec feature extraction, we either use an off-the-shelf model trained on LibriSpeech (Panayotov et al., 2015) or a model fine-tuned on the How2 training set. MVN parameters are estimated on the speech translation training set and then applied to all train/dev/test sets. Overall, we have 4 different self-supervised representations named wav2vec, wav2vec + norm, wav2vec + FT (fine-tuned wav2vec) and wav2vec + FT + norm. All those wav2vec features are of dimension 512. We compare the above representations to conventional filter-bank features. Similar to (Nguyen et al., 2019), we extract 80-dimensional Mel filter-bank features, concatenated with 3-dimensional pitch features, from windows of 25 ms with a frame shift of 10 ms. MVN is used in the same manner as for wav2vec features. This gives us 2 additional speech representations named fbanks and fbanks + norm respectively (their dimension is 83).${}^{6}$ Data augmentation through speed perturbation is also applied with factors of 0.9, 1.0, and 1.1 to the training data. Our development set is made of 1,984 sentences randomly excluded from the training set. The How2 val set is used as our test data.
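As a small illustration, MVN with statistics estimated on the training set only might look as follows (numpy sketch; function and variable names are ours):

```python
import numpy as np

def estimate_mvn(train_feats):
    """train_feats: list of (T_i, D) feature matrices from the AST training set."""
    stacked = np.concatenate(train_feats, axis=0)
    mean = stacked.mean(axis=0)
    std = stacked.std(axis=0) + 1e-8          # avoid division by zero
    return mean, std

def apply_mvn(feats, mean, std):
    """Apply the training-set statistics to any train/dev/test utterance."""
    return (feats - mean) / std
```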
|
| 75 |
+
|
| 76 |
+
${}^{3}$ https://github.com/pytorch/fairseq/blob/master/examples/wav2vec/
|
| 77 |
+
|
| 78 |
+
${}^{4}$ As noted by (Nguyen et al., 2019), the content of How2 depends on the download date. Our version was downloaded in July 2019.
|
| 79 |
+
|
| 80 |
+
${}^{5}$ Our preliminary experiments on How2 10% with MFCC features, which led to performance similar to filter-bank features, are not
|
| 81 |
+
|
| 82 |
+
${}^{2}$ Practically, each ${z}_{i}$ encodes 30 ms of speech every 10 ms. As for ${c}_{i}$, the total receptive field of the context network is 210 ms.
|
| 83 |
+
|
| 84 |
+
§ 4.2. SPEECH-TO-TEXT TRANSLATION MODEL
|
| 85 |
+
|
| 86 |
+
§ 4.2.1. ARCHITECTURE.
|
| 87 |
+
|
| 88 |
+
We use an attention-based encoder-decoder architecture, whose encoder is illustrated in Figure 1. The encoder is a stack of two VGG-like (Simonyan & Zisserman, 2015) CNN blocks followed by five 1024-dimensional BLSTM layers. Each VGG block contains two 2D-convolution layers just before a 2D-maxpooling layer, which aims to reduce both the time (T) and frequency (D) dimensions of the input speech features by a factor of 2. These two VGG blocks transform the input speech features' shape from $\left( {T \times D}\right)$ to $\left( {T/4 \times D/4}\right)$. Bahdanau's attention mechanism (Bahdanau et al., 2015) is used in all our experiments. The decoder is a stack of two 1024-dimensional LSTM layers. As proven effective in (Nguyen et al., 2019), this model is used consistently for all the experiments with fbanks features presented throughout this paper. However, wav2vec features have a higher dimension (512) than fbanks (83). In order to compare both input representations with a similar parameter budget in the architecture (and also because training an architecture with input features of dimension 512 would be substantially more computationally expensive), we add a projection block at the bottom of the encoder.${}^{7}$ This block (containing a linear layer followed by a ReLU) reduces the wav2vec feature size from 512 to 83 (see Figure 1).
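A minimal PyTorch sketch of the projection block and of the time/frequency reduction performed by the VGG blocks (layer sizes, channel counts and kernel settings are illustrative; the released recipes contain the exact configuration):

```python
import torch
import torch.nn as nn

class Projection(nn.Module):
    """Reduce wav2vec features from 512 to 83 dims before the VGG blocks."""
    def __init__(self, in_dim=512, out_dim=83):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())

    def forward(self, x):                 # x: (B, T, 512)
        return self.proj(x)               # (B, T, 83)

def vgg_block(in_ch, out_ch):
    """Two 2D convolutions followed by 2x2 max-pooling (halves T and D)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

x = torch.randn(4, 1, 400, 83)            # (batch, channel, T, D) after projection
h = vgg_block(1, 64)(x)                   # (4, 64, 200, 41)  -> T/2, D/2
h = vgg_block(64, 128)(h)                 # (4, 128, 100, 20) -> T/4, D/4
```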
|
| 89 |
+
|
| 90 |
+
|
| 91 |
+
|
| 92 |
+
Figure 1. Architecture of the speech encoder: a stack of two VGG blocks followed by 5 BLSTM layers. We use as input (1) wav2vec features (that pass through an additional projection layer to reduce their dimension from 512 to 83), or (2) filter-bank+pitch features. The input features are optionally normalized (MVN).
|
| 93 |
+
|
| 94 |
+
§ 4.2.2. HYPERPARAMETERS' DETAILS
|
| 95 |
+
|
| 96 |
+
Models are trained for a maximum of 20 epochs with early stopping after 3 epochs if the accuracy on the dev set does not improve. Adadelta is chosen as the optimizer and dropout is set to 0.3 on the encoder side. We decode all our models with a beam size of 10.
|
| 97 |
+
|
| 98 |
+
§ 4.3. EXPERIMENTAL RESULTS ON HOW2
|
| 99 |
+
|
| 100 |
+
On each partition of the How2 corpus, we train 6 models which take as input the different speech representations presented in section 4.1.2, thus 30 models in total, shown in Table 2. We evaluate on the How2 val set, which contains 2,022 segments (about 3.2 hours of speech), in the same conditions as (Nguyen et al., 2019). It is clear from the table that in low resource settings (28 and 56 hours), self-supervised representations (wav2vec) significantly outperform fbanks. Figure 2a confirms this and shows that models trained with wav2vec representations converge better and faster. The impact of normalization and fine-tuning is also notable from both Table 2 and Figure 2a. In very low resource settings (like 28 hours), fine-tuning wav2vec can greatly help, and with normalization, the performance further improves. In higher resource settings (169 and 281 hours of translated speech), differences between wav2vec and fbanks fade away (and so does the impact of fine-tuning and normalization). However, our ensembling experiments in lines 7 and 8 on 100% of How2 show that it is beneficial to ensemble the best system (fbanks+norm, line 6) with a system trained with wav2vec (wav2vec + FT + norm, line 4) rather than with the other filter-bank model (fbanks, line 5), even though wav2vec + FT + norm underperforms fbanks on this partition. Ensembling all our models (line 9) leads to BLEU > 30 even in very low resource training conditions (56 hours). Finally, in order to compare ourselves with the state-of-the-art (Inaguma et al., 2020), we decode How2 dev5 (a.k.a. How2 test), which consists of 2,305 segments (about 3.7 hours of speech), using the ensemble of all our models trained on the full corpus (line 9). This gives us near state-of-the-art BLEU: we obtain 46.16 on How2 val and 47.17 on How2 dev5. This latter score on dev5 is to be compared with 48.04 reported with an ensemble model in (Inaguma et al., 2020), where ASR and MT pre-training were used, as well as data augmentation with SpecAugment.
|
| 101 |
+
|
| 102 |
+
presented here.
|
| 103 |
+
|
| 104 |
+
${}^{6}$ For the rest of the paper, fbanks actually means filter-bank+pitch features.
|
| 105 |
+
|
| 106 |
+
${}^{7}$ Our implementation of the wav2vec speech encoder, as well as the detailed recipes for our experiments, can be found online: https://github.com/mhn226/espnet/tree/interspeech2020.
|
| 107 |
+
|
| 108 |
+
Table 2. Detokenized case-sensitive BLEU scores measured on How2 val set of different models trained on different partitions of How2 corpus (EN-PT) with different speech features. FT means fine-tuned and norm stands for MVN normalization.
|
| 109 |
+
|
| 110 |
+
|
| 111 |
+
|
| 112 |
+
$\mathbf{{No}.}$ Feature 10% (28h) 20% (56h) 30% (84h) 60% (169h) 100% (281h)
|
| 113 |
+
|
| 114 |
+
1-7
|
| 115 |
+
1 wav2vec 11.33 26.75 30.83 36.33 41.02
|
| 116 |
+
|
| 117 |
+
1-7
|
| 118 |
+
2 wav2vec + FT 12.52 27.30 32.11 37.78 42.32
|
| 119 |
+
|
| 120 |
+
1-7
|
| 121 |
+
3 wav2vec + norm 16.52 27.33 31.27 37.62 41.08
|
| 122 |
+
|
| 123 |
+
1-7
|
| 124 |
+
4 wav2vec + FT + norm 18.50 27.68 32.17 37.75 41.30
|
| 125 |
+
|
| 126 |
+
1-7
|
| 127 |
+
5 fbanks 1.03 18.61 27.32 37.23 41.63
|
| 128 |
+
|
| 129 |
+
1-7
|
| 130 |
+
6 fbanks + norm 2.11 24.58 30.21 37.56 42.51
|
| 131 |
+
|
| 132 |
+
1-7
|
| 133 |
+
7 Ensemble [5, 6] X 25.28 31.90 40.39 44.35
|
| 134 |
+
|
| 135 |
+
1-7
|
| 136 |
+
8 Ensemble [4, 6] X 29.87 34.67 41.22 45.02
|
| 137 |
+
|
| 138 |
+
1-7
|
| 139 |
+
9 Ensemble [1,2,3,4,5,6] X 31.88 36.80 42.62 46.16
|
| 140 |
+
|
| 141 |
+
1-7
|
| 142 |
+
|
| 143 |
+
|
| 144 |
+
|
| 145 |
+
Figure 2. Learning curves (accuracy) of models trained on different partitions of How2
|
| 146 |
+
|
| 147 |
+
§ 4.4. VALIDATION ON TWO OTHER LANGUAGE PAIRS
|
| 148 |
+
|
| 149 |
+
To validate our results in low resource settings (56 hours), we train our models on two subsets of the MuST-C (Di Gangi et al., 2019) English-to-German and English-to-French training data (56 hours each, a training size similar to How2 20%). As illustrated by Table 3, MuST-C is more challenging than How2 (as confirmed by official IWSLT 2019 evaluation results (Niehues et al., 2019)), but for both language pairs, wav2vec significantly outperforms fbanks. This confirms that self-supervised pre-training is useful in low resource scenarios.
|
| 150 |
+
|
| 151 |
+
§ 5. ANALYSIS OF LEARNT REPRESENTATIONS
|
| 152 |
+
|
| 153 |
+
This section tries to answer the question of why wav2vec representations perform better than filter-bank features in low resource settings. The following subsections present experiments which show that wav2vec might be (1) better at discriminating phones, (2) better at aligning source and target sequences, and (3) more robust to speaker variability.
|
| 154 |
+
|
| 155 |
+
Table 3. AST BLEU on MuST-C 56h for EN-DE and EN-FR.
|
| 156 |
+
|
| 157 |
+
|
| 158 |
+
|
| 159 |
+
Lang Features tst-COMMON tst-HE
|
| 160 |
+
|
| 161 |
+
1-4
|
| 162 |
+
EN-DE wav2vec 7.56 7.21
|
| 163 |
+
|
| 164 |
+
2-4
|
| 165 |
+
wav2vec+norm 7.83 8.12
|
| 166 |
+
|
| 167 |
+
2-4
|
| 168 |
+
fbanks 1.50 1.09
|
| 169 |
+
|
| 170 |
+
2-4
|
| 171 |
+
fbanks+norm 4.89 4.87
|
| 172 |
+
|
| 173 |
+
1-4
|
| 174 |
+
EN-FR wav2vec 12.08 12.41
|
| 175 |
+
|
| 176 |
+
2-4
|
| 177 |
+
wav2vec+norm 12.58 12.58
|
| 178 |
+
|
| 179 |
+
2-4
|
| 180 |
+
fbanks 0.54 0.00
|
| 181 |
+
|
| 182 |
+
2-4
|
| 183 |
+
fbanks+norm 7.10 6.37
|
| 184 |
+
|
| 185 |
+
1-4
|
| 186 |
+
|
| 187 |
+
Table 4. Phone error rate (PER %) on TIMIT dev and test set.
|
| 188 |
+
|
| 189 |
+
|
| 190 |
+
|
| 191 |
+
$\mathbf{{No}.}$ Feature TIMIT dev TIMIT test
|
| 192 |
+
|
| 193 |
+
1-4
|
| 194 |
+
1 wav2vec 13.0 15.0
|
| 195 |
+
|
| 196 |
+
1-4
|
| 197 |
+
2 wav2vec + norm 13.9 15.8
|
| 198 |
+
|
| 199 |
+
1-4
|
| 200 |
+
3 fbanks 22.2 24.9
|
| 201 |
+
|
| 202 |
+
1-4
|
| 203 |
+
4 fbanks + norm 20.7 23.5
|
| 204 |
+
|
| 205 |
+
1-4
|
| 206 |
+
|
| 207 |
+
§ 5.1. BETTER PHONE DISCRIMINATION
|
| 208 |
+
|
| 209 |
+
We first replicate an experiment from (Schneider et al., 2019) for phoneme recognition on TIMIT (Garofolo et al., 1993). Speech representations are extracted from train, dev and test split of TIMIT. A simple attentional encoder-decoder model is used: encoder with 4 BLSTM layers of hidden size 320, decoder with 1 LSTM layer and location-based attention (Luong et al., 2015). The results of Table 4 confirm that wav2vec representations (normalized or not) are much better at recognizing phones than fbanks.
|
| 210 |
+
|
| 211 |
+
§ 5.2. BETTER SOURCE-TARGET ALIGNMENTS
|
| 212 |
+
|
| 213 |
+
We evaluate the entropies of the soft alignments obtained with different speech representations in teacher forcing mode. Let ${\alpha }_{tj}$ be the alignment score between target token ${y}_{t}$ and source speech frame ${x}_{j}$ . We evaluate the entropy of the probability distribution ${\alpha }_{t}$ , ${H}_{t} = - \mathop{\sum }\limits_{{j = 1}}^{\left| x\right| }{\alpha }_{tj}\log {\alpha }_{tj}$ , for every target token. This measure is then averaged over all tokens at the corpus level (How2 10%). A low entropy means the attention mechanism is confident in its source-target alignments (see example in Figure 3). Table 5 shows clearly that, in our low resource setting, wav2vec leads to better alignments (lower entropy) than fbanks. Fine-tuning and normalization of self-supervised representations also improve the soft alignments.
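Concretely, the corpus-level score can be computed as in the following numpy sketch (names are illustrative):

```python
import numpy as np

def alignment_entropy(alpha):
    """alpha: (num_target_tokens, num_source_frames) attention weights,
    each row summing to 1 (teacher-forcing soft alignments for one sentence)."""
    eps = 1e-12
    per_token = -np.sum(alpha * np.log(alpha + eps), axis=1)   # H_t for each target token
    return per_token.mean()

# corpus-level score: average H_t over all target tokens of all sentences
```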
|
| 214 |
+
|
| 215 |
+
Table 5. Averaged entropies of soft-alignments on How2 dev and val set. AST models trained on 10% partition of How2.
|
| 216 |
+
|
| 217 |
+
|
| 218 |
+
|
| 219 |
+
$\mathbf{{No}.}$ Feature How2 dev How2 val
|
| 220 |
+
|
| 221 |
+
1-4
|
| 222 |
+
1 wav2vec 0.66 0.66
|
| 223 |
+
|
| 224 |
+
1-4
|
| 225 |
+
2 wav2vec + FT 0.65 0.65
|
| 226 |
+
|
| 227 |
+
1-4
|
| 228 |
+
3 wav2vec + norm 0.57 0.57
|
| 229 |
+
|
| 230 |
+
1-4
|
| 231 |
+
4 wav2vec + FT + norm 0.51 0.51
|
| 232 |
+
|
| 233 |
+
1-4
|
| 234 |
+
5 fbanks 0.89 0.90
|
| 235 |
+
|
| 236 |
+
1-4
|
| 237 |
+
6 fbanks + norm 0.93 0.93
|
| 238 |
+
|
| 239 |
+
1-4
|
| 240 |
+
|
| 241 |
+
|
| 242 |
+
|
| 243 |
+
Figure 3. Soft alignments between source speech features and target text for sentence "A outra pessoa perde."
|
| 244 |
+
|
| 245 |
+
§ 5.3. BETTER ROBUSTNESS TO SPEAKER VARIABILITY
|
| 246 |
+
|
| 247 |
+
Table 6. Equal error rate (EER %) on the VoxCeleb1 test and LibriSpeech test sets for female (f) and male (m) speakers.
|
| 248 |
+
|
| 249 |
+
|
| 250 |
+
|
| 251 |
+
$\mathbf{{No}.}$ Feature VoxCeleb Libri (f) Libri (m)
|
| 252 |
+
|
| 253 |
+
1-5
|
| 254 |
+
1 wav2vec 22.75 11.22 2.23
|
| 255 |
+
|
| 256 |
+
1-5
|
| 257 |
+
2 wav2vec + norm 20.93 10.54 1.79
|
| 258 |
+
|
| 259 |
+
1-5
|
| 260 |
+
3 fbanks 15.78 5.47 0.89
|
| 261 |
+
|
| 262 |
+
1-5
|
| 263 |
+
4 fbanks + norm 16.25 3.47 0.67
|
| 264 |
+
|
| 265 |
+
1-5
|
| 266 |
+
|
| 267 |
+
To investigate robustness to speaker variability, we trained several automatic speaker verification (ASV) systems using wav2vec or fbanks features. Models are trained on the LibriSpeech train-clean-360 dataset (Panayotov et al., 2015) using Kaldi (Povey et al., 2011). ASV systems are based on x-vectors and probabilistic linear discriminant analysis (PLDA) (Snyder et al., 2018). To extract x-vectors, we used a time delay neural network (TDNN) model topology similar to the one described in (Snyder et al., 2018). Input features are fbanks or wav2vec (optionally normalized), while the output corresponds to the 921 speakers of the training corpus. ASV experiments are conducted on the VoxCeleb1 test (Nagrani et al., 2017) and LibriSpeech test-clean (Panayotov et al., 2015) sets.${}^{8}$ ASV results (equal error rate - EER) are presented in Table 6. We observe that in all experiments, models trained on wav2vec features yield significantly higher EER than those trained on fbanks. This confirms our hypothesis that wav2vec representations remove speaker information from the speech signal.${}^{9}$
|
| 268 |
+
|
| 269 |
+
§ 6. CONCLUSION
|
| 270 |
+
|
| 271 |
+
We investigated the impact of self-supervised learning for end-to-end AST. It was shown that representations based on contrastive predictive coding (CPC) improve results significantly compared to baseline filter-bank features in low-medium resource conditions (train < 100h). Our explanation is that self-supervised representations show better phone discrimination, better source-target alignment, and more robustness to speaker variability.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/_P9LyJ5pMDb/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,147 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# Using Self-Supervised Learning of Birdsong for Downstream Industrial Audio Classification
|
| 2 |
+
|
| 3 |
+
Patty Ryan ${}^{1}$ Sean Takafuji ${}^{1}$ Chenhao Yang ${}^{1}$ Nile Wilson ${}^{1}$ Christopher McBride ${}^{2}$
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
In manufacturing settings, workers rely on their sense of hearing and their knowledge of what sounds correct to help them identify machine quality problems based on the sound pitch, rhythm, timbre and other characteristics of the sound of the machine in operation. Using machine learning to classify these sounds has broad applications for automating the manual quality recognition work currently being done, including automating machine operator training, automating quality control detection, and diagnostics across manufacturing and mechanical service industries. We previously established that models taking input pitch information from music domains can dramatically improve classification model performance on industrial machine audio leveraging a pretrained pitch model.
|
| 8 |
+
|
| 9 |
+
In this work, we explore the use of self-supervised learning on pitch-intensive birdsong rather than a pre-trained model. To reduce our reliance on a pretrained pitch model and reduce the quantity of labeled industrial audio required, we implement self-supervised representation learning using plentiful, license-free, unlabeled, pitch-intensive wild birdsong recordings, with audio data augmentation to perform classification on industrial audio. We show that: 1. We can preprocess the unlabeled birdsong data sample with unsupervised methods to eliminate low signal sample and mask low frequency noise leaving just desirable chirp-rich sample. 2. We can identify effective representations and approaches for learning birdsong pitch content by comparing select self-supervised pretext task training of temporal sequence prediction and sequence generation. 3. We can identify effective augmentation methods for learning pitch through comparison of the impact of a variety of audio data augmentation methods on self-supervised learning. And 4. Downstream fine-tuned models deliver improved performance classifying industrial motor audio. We demonstrate that motorized sound classification models using self-supervised learning with a dataset of pitch intensive birdsong, combined with select data augmentation, achieves better results than using the pre-trained pitch model.
|
| 10 |
+
|
| 11 |
+
## 1. Introduction
|
| 12 |
+
|
| 13 |
+
We were introduced to the challenge of classifying industrial audio last year when working with a manufacturer that sought to improve welding quality. The correct distance of the welding device to the weld is a critical element in creating a quality weld. If the weld is conducted too close to or too far from the work surface, the weld will be weak and could fail. The master welder, in a tour of the factory floor, was able to immediately call our attention to the difference in the sounds of good welds and the sounds of bad welds. They had distinctively different pitches due to the reflection of the sound off the surface at different distances. However, the light emitted during welding made photography at this distance impractical. And so we investigated classifying the audio based on representations of the pitch, with the immediate application of enabling training by allowing welders to get immediate feedback on the quality of their welds.
|
| 14 |
+
|
| 15 |
+
While there were recent advances using deep learning in areas of music machine learning classification and music synthesis, there are very few applications of these frequency and pitch machine learning methods on classification of audio in the industrial environment. We leveraged the CREPE pre-trained pitch estimation model (Kim et al., 2018) and found it performed reasonably well at classifying weld pitches. We implemented a multi-input ConvNet model combining 1D representations of CREPE pitch estimations from the time domain waveform, and Constant-Q (CQT) 2D transforms of the waveform, yielding a high accuracy classification of welding distance (Ryan et al., 2019). We experimented with other industrial audio datasets with the same modeling approach to understand whether the approach was generalizable. We classified correct and incorrect machining lathe settings and distinguished between the motors of different ferry boats operating on the Puget Sound. However, we had two challenges. Labeled data in industrial audio is scarce and expensive to collect, and relying on the CREPE model prediction proved too slow in industrial production settings. In this work we explore the use of self-supervised learning to reduce our labeled data requirements, and we explore whether we can learn enough of the pitch information from a birdsong dataset to allow us to eliminate our reliance on the CREPE pretrained pitch model.
|
| 16 |
+
|
| 17 |
+
---
|
| 18 |
+
|
| 19 |
+
${}^{1}$ Microsoft, Redmond, Washington, USA ${}^{2}$ ADM Associates, Reno, Nevada, USA. Correspondence to: Patty Ryan <Patty.Ryan@microsoft.com>.
|
| 20 |
+
|
| 21 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 22 |
+
|
| 23 |
+
---
|
| 24 |
+
|
| 25 |
+
### 1.1. Contribution
|
| 26 |
+
|
| 27 |
+
We make several contributions that we outline here. We demonstrate the efficacy of using self-supervision with a pitch-intensive birdsong model to allow downstream classification of pitch-intensive industrial motor audio. Pitch learning has been extensively applied in the music realm (Kim et al., 2018), (Huang et al., 2018), (Engel et al., 2019). Enabling learning from the pitch present in birdsong, and using those pretrained weights to improve classification on the pitch present in industrial audio, is novel. We describe unsupervised methods to preprocess the unlabeled birdsong audio to exclude low quality samples, leaving us only with high quality samples. Finally, we demonstrate the efficacy of several audio data augmentation methods at enhancing self-supervised learning of pitch and demonstrate this on the downstream classification task. The source code for the implementation of our paper is available at: https://github.com/SingingData/birdsong-self-supervised-learning
|
| 28 |
+
|
| 29 |
+
## 2. Data Augmentation for Self-Supervised Learning on a Birdsong Dataset
|
| 30 |
+
|
| 31 |
+
### 2.1. Dataset
|
| 32 |
+
|
| 33 |
+
In the foothills of the Carson Range, within the Sierra Mountains, we captured footage and audio from a motion-activated wildlife camera. The camera was trained on bird feeders and the surrounding area, and captured eleven-second video and audio samples (44.1 kHz). The birds recorded included Quail, Blue Jays, Black Headed Grosbeaks, Doves, Robins, Red Finches, Steller's Jays, Black-billed Magpies, Yellow Warblers and Varied Thrush, among others. We extracted the audio from the captured video and resampled it to a 22 kHz sample rate.
|
| 34 |
+
|
| 35 |
+
### 2.2. Preprocessing
|
| 36 |
+
|
| 37 |
+
Some of our video samples were undesirable and needed to be excluded from our dataset. For example, some of the motion-activated video samples had inadvertent wind activations with no birdsong. Some samples had very faint birdsong. Still others had background noise including sprinklers, cars, and airplanes. First, we eliminated audio samples with little differential between the average magnitude and the maximum magnitude of the audio signal. Next, we performed a K-means cluster analysis on the unrolled CQT vectors to quickly identify and eliminate clusters of undesirable noise. These two methods allowed us to quickly eliminate one-third of our sample, leaving 1,252 total clean, high-quality audio samples.
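A rough sketch of this two-stage filtering with librosa and scikit-learn is shown below; the threshold, cluster count and helper names are illustrative assumptions, not the exact values used.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

def keep_sample(y, min_peak_to_mean=5.0):
    """Drop clips whose peak magnitude barely exceeds the average magnitude."""
    mag = np.abs(y)
    return mag.max() > min_peak_to_mean * mag.mean()

def cqt_vector(y, sr=22050):
    """Unrolled (flattened) log-magnitude CQT used for clustering fixed-length clips."""
    C = np.abs(librosa.cqt(y, sr=sr))
    return librosa.amplitude_to_db(C, ref=np.max).flatten()

def cluster_clips(vectors, n_clusters=8):
    """Cluster unrolled CQT vectors; noisy clusters are then inspected and dropped."""
    X = np.stack(vectors)
    km = KMeans(n_clusters=n_clusters, random_state=0).fit(X)
    return km.labels_
```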
|
| 38 |
+
|
| 39 |
+
### 2.3. Transform
|
| 40 |
+
|
| 41 |
+
Once these samples were cleaned, we represented pitch and timbre through frequency-domain changes over time. We apply a Constant-Q Transform (CQT) to produce a 2D CQT spectrogram for each of our audio waveform inputs. CQT is a time-frequency analysis method with greater frequency resolution at lower frequencies and greater time resolution towards higher frequencies, better capturing human-audible pitch and timbre. Our use of CQT was inspired by TimbreTRON (Huang et al., 2018).
|
| 42 |
+
|
| 43 |
+
### 2.4. Augmentations
|
| 44 |
+
|
| 45 |
+
The data augmentation methods we applied are as follows:
|
| 46 |
+
|
| 47 |
+
- Pitch Shifting: The pitch shift augmentation is applied using the Python library librosa (McFee et al., 2020) with the values $\{ - 2, - 1,1,2\}$ being empirically chosen based on the methods from (Salamon & Bello, 2016). The raw frequency values are shifted in increments of semitones with a positive value increasing the pitch and a negative value decreasing the pitch.
|
| 48 |
+
|
| 49 |
+
- Octave Shifting: The octave shift augmentation uses the same methodology as the pitch shift augmentation with an octave shift of 1 being equivalent to a pitch shift of 12 semitones. We reason that for our pretext task on birdsong data to translate well to the industrial audio setting, very large shifts in pitch would be valuable. We used octave shifts with the values $\{ - 2, - 1,1,2\}$ .
|
| 50 |
+
|
| 51 |
+
- Time Stretching: The time stretching augmentation extends or compresses the waveform by the following rates $\{ 2,5,{0.2},{0.5}\}$ . A rate of 2 will lead to the audio sample being twice its original speed, leading to a compressed waveform. Likewise, a rate of 0.5 will lead to the audio sample being half its original speed, creating an extended waveform.
|
| 52 |
+
|
| 53 |
+
Table 1. Classification accuracies on the pretext task with birdsong data. All models trained for 20 epochs.
|
| 54 |
+
|
| 55 |
+
<table><tr><td>ARCHITECTURE</td><td>AUGMENTATIONS</td><td>TRAINING SAMPLES</td><td>TRAIN (ACC)</td><td>TRAIN (LOSS)</td><td>VAL (ACC)</td><td>VAL (LOSS)</td></tr><tr><td>TRIPLET ALEXNET</td><td>NONE</td><td>763</td><td>92.27</td><td>2.5789</td><td>83.44</td><td>2.4384</td></tr><tr><td>TRIPLET ALEXNET</td><td>PITCH + OCTAVE</td><td>11445</td><td>85.64</td><td>1.5740</td><td>81.05</td><td>1.2945</td></tr><tr><td>TRIPLET ALEXNET</td><td>TIME STRETCHING</td><td>3052</td><td>84.19</td><td>1.5734</td><td>74.13</td><td>1.3982</td></tr><tr><td>TRIPLET ALEXNET</td><td>SPECAUGMENT</td><td>3052</td><td>87.13</td><td>3.0731</td><td>77.16</td><td>2.9691</td></tr></table>
|
| 56 |
+
|
| 57 |
+
- SpecAugment: Introduced for speech recognition, (Park et al., 2019) applied a frequency mask and a time mask on top of the log mel spectrogram representation of the audio sample. We use the library nlpaug (Ma, 2019) to apply this augmentation on the CQT representation of the audio sample. Using the notation and descriptions from (Park et al., 2019), on each audio sample we apply a frequency mask that covers 30 consecutive frequency channels, denoted as $\lbrack f, f + {30})$ where $f$ is chosen from a uniform distribution over $\lbrack 0,\nu - {30})$ and $\nu$ is the number of frequency channels in the CQT representation. Additionally, two time masks are applied on 10 and 20 consecutive time steps, denoted as ${T}_{0} = \left\lbrack {{t}_{0},{t}_{0} + {20}}\right)$ and ${T}_{1} = \left\lbrack {{t}_{1},{t}_{1} + {10}}\right)$ with the additional constraint that ${T}_{0} \cap {T}_{1} = \varnothing$ .
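A brief librosa sketch of the waveform-level augmentations above (the SpecAugment masks are applied separately on the CQT); the helper name is ours:

```python
import librosa

def augment_waveforms(y, sr=22050):
    """Generate pitch/octave-shifted and time-stretched variants of one clip."""
    out = []
    for steps in (-2, -1, 1, 2):                       # pitch shift in semitones
        out.append(librosa.effects.pitch_shift(y, sr=sr, n_steps=steps))
    for octaves in (-2, -1, 1, 2):                     # octave shift = 12 semitones
        out.append(librosa.effects.pitch_shift(y, sr=sr, n_steps=12 * octaves))
    for rate in (2.0, 5.0, 0.2, 0.5):                  # time stretching rates
        out.append(librosa.effects.time_stretch(y, rate=rate))
    return out
```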
|
| 58 |
+
|
| 59 |
+
## 3. Self-Supervised Learning Methods
|
| 60 |
+
|
| 61 |
+
### 3.1. Self-Supervised Learning Pretext Task
|
| 62 |
+
|
| 63 |
+
For the self-supervised pretext task, we chose verifying sequence temporal order, drawing inspiration from the "Shuffle and Learn" pretext task by (Misra et al., 2016). We reasoned that the pattern of the birdsong could be learned in order to determine temporal order, and in so doing would enable the pitch of the notes of the birdsong to be learned. We first created tuples of sequences by splitting each sample into four sequence chunks of 2.6 seconds apiece that we denote as (a, b, c, d), following the (Misra et al., 2016) approach. Next, for each sample we labeled a positive example as the sequence (a, b, c), leaving out the last chunk. To create negative examples, we incorrectly ordered the sequence using the left-out chunk d, resulting in the sequences (b, a, d) and (d, a, b).
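A small sketch of how such pretext tuples can be generated from a clip (helper name and the use of numpy are our own illustration):

```python
import numpy as np

def make_pretext_examples(y, n_chunks=4):
    """Split a clip into equal chunks and build ordered/shuffled sequences."""
    a, b, c, d = np.array_split(y, n_chunks)
    positives = [(a, b, c)]                  # correct temporal order, label 1
    negatives = [(b, a, d), (d, a, b)]       # shuffled order using chunk d, label 0
    return positives, negatives
```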
|
| 64 |
+
|
| 65 |
+
### 3.2. Model Architecture
|
| 66 |
+
|
| 67 |
+
Again, following the "Shuffle and Learn" design, we designed a Triplet Siamese network for sequence verification. We reduced the last dense layer of the AlexNet architecture modestly to fit available computational resources. We applied the Lecun normal initializer, leaky ReLu and liberally applied drop-out. We balanced the datasets.
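A condensed Keras sketch of this kind of triplet network is shown below; the small convolutional tower stands in for the reduced AlexNet, and the input shape and layer sizes are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def tower(input_shape=(84, 112, 1)):
    """Shared CQT-spectrogram encoder (stand-in for the reduced AlexNet)."""
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation=tf.nn.leaky_relu,
                      kernel_initializer="lecun_normal")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation=tf.nn.leaky_relu,
                      kernel_initializer="lecun_normal")(x)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.5)(x)
    return models.Model(inp, layers.Dense(128)(x))

shared = tower()
inputs = [layers.Input(shape=(84, 112, 1)) for _ in range(3)]   # chunks of the sequence
merged = layers.Concatenate()([shared(i) for i in inputs])       # shared weights (Siamese)
hidden = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid")(hidden)              # correctly ordered or not
siamese = models.Model(inputs, out)
siamese.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```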
|
| 68 |
+
|
| 69 |
+
### 3.3. Downstream Task
|
| 70 |
+
|
| 71 |
+
For our downstream task, we classified Washington State Ferry recordings, distinguishing between the Wenatchee and the Tacoma motors based on 2.6 second samples.
|
| 72 |
+
|
| 73 |
+
For our downstream architecture, we took just one branch of the Siamese triplet to form the basis of our downstream model. We loaded the pre-trained weights on each of the convolutional layers, and added two trainable dense layers and an output layer. We froze the first three layers and allowed the last three layers to be trainable. We trained the downstream task for 20 epochs with each of the data augmentation permutations.
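Reusing one tower from the sketch above, the downstream classifier and layer freezing could look roughly like this (the checkpoint path and layer indices are hypothetical):

```python
from tensorflow.keras import layers, models

base = tower()                                   # one branch of the Siamese triplet (see above)
base.load_weights("pretext_tower.h5")            # hypothetical path to the pretext-task weights
for layer in base.layers[:3]:                    # freeze the earliest layers
    layer.trainable = False

clf = models.Sequential([
    base,
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),       # Wenatchee vs. Tacoma motor
])
clf.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```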
|
| 74 |
+
|
| 75 |
+
## 4. Results
|
| 76 |
+
|
| 77 |
+
Self-supervised training on birdsong proved effective for improving our downstream classification model performance. First, two data augmentation techniques in particular, pitch shifting and time stretching, proved the most effective at improving downstream performance. With either of these data augmentation techniques present, our downstream model achieved 100% classification accuracy with 10 epochs of training. By contrast, without pre-training, the downstream model failed to learn. In comparison, the model attained a comparable accuracy of 99.75% using a pre-trained pitch model, CREPE, combined with CQT and SpecAugment data augmentation. The performance of the model on the pretext task is noted in Table 1, and on the downstream task in Table 2. We note the quantity of augmented training data in the table. For pitch + octave augmentations and our time stretching augmentations, we generated a greater number of training samples, which may have resulted in lower training loss on the downstream ferry audio training.
|
| 78 |
+
|
| 79 |
+
## 5. Related Works
|
| 80 |
+
|
| 81 |
+
Self-Supervised Learning. Self-Supervised methods have shown promising growth in the natural language space involving audio waveforms with recent contributions such as Audio ALBERT (Chi et al., 2020). In the general audio space, there has been a larger focus on learning high-quality audio representations through unsupervised methods such as using autoencoders (Roche et al., 2018) equipped with convolutional layers or additionally with recurrent layers as well (Meyer et al., 2017), (Chung et al., 2016). One self-supervised task (Tagliasacchi et al., 2019) is called TemporalGap which focuses on estimating the length of a time masked temporal slice. Instead of using TemporalGap as our pretext task, we incorporated this task as part of our augmentations through SpecAugment (Park et al., 2019) which allows us an additional augmentation method that has demonstrated crucial value to the quality of the learned representations.
|
| 82 |
+
|
| 83 |
+
Table 2. Classification accuracies on the downstream task with ferry data. Pre-Trained indicates that the self-supervised model weights were transferred onto the classifier. The augmentations where indicated were applied on the data for the pretext task (the birdsong) but were not applied on the downstream task (the ferry sound). Training Samples: 167 recordings. Validation Samples: 66 recordings.
|
| 84 |
+
|
| 85 |
+
<table><tr><td>ARCHITECTURE</td><td>PRE-TRAINED</td><td>AUGMENTATIONS</td><td>TRAIN (ACC)</td><td>TRAIN (LOSS)</td><td>VAL (ACC)</td><td>VAL (LOSS)</td></tr><tr><td>ALEXNET</td><td>No</td><td>NONE</td><td>56.29</td><td>17843.1402</td><td>53.79</td><td>943.1954</td></tr><tr><td>ALEXNET</td><td>Yes</td><td>NONE</td><td>59.88</td><td>103428.4201</td><td>87.12</td><td>6730.2180</td></tr><tr><td>ALEXNET</td><td>Yes</td><td>PITCH + OCTAVE</td><td>74.85</td><td>19469.7310</td><td>100</td><td>0.3400</td></tr><tr><td>ALEXNET</td><td>Yes</td><td>TIME STRETCHING</td><td>69.76</td><td>16478.3154</td><td>100</td><td>0.3400</td></tr><tr><td>ALEXNET</td><td>Yes</td><td>SPECAUGMENT</td><td>59.28</td><td>15824.3084</td><td>92.42</td><td>256.0956</td></tr></table>
|
| 86 |
+
|
| 87 |
+
Audio Representations and Augmentations. The usage of different transformations of the audio waveform, such as the short-time Fourier transform (STFT), linear and log mel spectrograms, and the continuous wavelet transform (CWT), has been studied on the UrbanSound8K environmental audio classification task by (Huzaifah, 2017). Additionally, from the speech recognition space, (Nguyen et al., 2019) studies on-the-fly data augmentation for sequence-to-sequence speech recognition training.
|
| 88 |
+
|
| 89 |
+
Application in Vision. While the focus of our methods is strictly focused on learning from the audio waveform, the method that we drew inspiration from (Misra et al., 2016) is performed on video frames. Other methods for self-supervision when video frames and audio waveforms are available have been explored (Alwassel et al., 2019), (Korbar et al., 2018). Our method applied with "Shuffle and Learn" (Misra et al., 2016) offers a new self-supervised learning task to the combined video and audio space.
|
| 90 |
+
|
| 91 |
+
## 6. Discussion
|
| 92 |
+
|
| 93 |
+
For pitch-intensive downstream classification tasks, it appears pretraining with license-free birdsong recordings is effective at improving performance, even for modestly sized labelled data sets. For our industrial enterprise implementations of audio machine learning, self-supervised learning is a promising approach. In this case, classification on the ferry motor dataset may be too easy, and we look forward to extending our experimentation to other more challenging industrial audio datasets. We believe audio and video of the natural world with relevant characteristics may prove a cost-effective data source to build self-supervised learning.
|
| 94 |
+
|
| 95 |
+
Further experimentation is called for given the differences in our train and validation accuracies, as shown in Table 2. We trained our downstream ferry classification models for 10 epochs each. However, additional training may improve the results.
|
| 96 |
+
|
| 97 |
+
## 7. Conclusion
|
| 98 |
+
|
| 99 |
+
In this paper, we share a simple insight into the strong pitch component shared by birdsong, music, and industrial audio. We demonstrate the efficacy of a selection of audio data augmentation techniques at representing the pitch component of birdsong and industrial audio. Additionally, we demonstrate unsupervised data pre-processing methods that allow selection of unlabeled birdsong data to yield pitch-intensive samples suited for self-supervised training. Finally, we demonstrate the effectiveness of using self-supervised learning techniques, with a pretext task of sequence temporal order verification, at learning pitch information that dramatically improves downstream industrial audio classification.
|
| 100 |
+
|
| 101 |
+
In future work, we aim to expand upon our method by leveraging other sources of audio data for the pretext task such as AudioSet (Gemmeke et al., 2017). Additionally, we would like to investigate how our learned representation can be used in conjunction with representations obtained from other pretext tasks such as (Tagliasacchi et al., 2019) to capture different features of the audio waveform.
|
| 102 |
+
|
| 103 |
+
## References
|
| 104 |
+
|
| 105 |
+
Alwassel, H., Mahajan, D., Torresani, L., Ghanem, B., and Tran, D. Self-supervised learning by cross-modal audio-video clustering, 2019.
|
| 106 |
+
|
| 107 |
+
Chi, P.-H., Chung, P.-H., Wu, T.-H., Hsieh, C.-C., Li, S.-W., and yi Lee, H. Audio albert: A lite bert for self-supervised learning of audio representation, 2020.
|
| 108 |
+
|
| 109 |
+
Chung, Y.-A., Wu, C.-C., Shen, C.-H., Lee, H.-Y., and Lee, L.-S. Audio word2vec: Unsupervised learning of audio segment representations using sequence-to-sequence autoencoder, 2016.
|
| 114 |
+
|
| 115 |
+
Engel, J., Agrawal, K. K., Chen, S., Gulrajani, I., Donahue, C., and Roberts, A. Gansynth: Adversarial neural audio synthesis, 2019.
|
| 116 |
+
|
| 117 |
+
Gemmeke, J. F., Ellis, D. P. W., Freedman, D., Jansen, A., Lawrence, W., Moore, R. C., Plakal, M., and Ritter, M. Audio set: An ontology and human-labeled dataset for audio events. In Proc. IEEE ICASSP 2017, New Orleans, LA, 2017.
|
| 118 |
+
|
| 119 |
+
Huang, S., Li, Q., Anil, C., Bao, X., Oore, S., and Grosse, R. B. Timbretron: A wavenet(cyclegan(cqt(audio))) pipeline for musical timbre transfer, 2018.
|
| 120 |
+
|
| 121 |
+
Huzaifah, M. Comparison of time-frequency representations for environmental sound classification using convolutional neural networks, 2017.
|
| 122 |
+
|
| 123 |
+
Kim, J. W., Salamon, J., Li, P., and Bello, J. P. Crepe: A convolutional representation for pitch estimation, 2018.
|
| 124 |
+
|
| 125 |
+
Korbar, B., Tran, D., and Torresani, L. Cooperative learning of audio and video models from self-supervised synchronization. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 7763-7774. Curran Associates, Inc., 2018.
|
| 126 |
+
|
| 127 |
+
Ma, E. NLP Augmentation. https://github.com/makcedward/nlpaug, 2019.
|
| 128 |
+
|
| 129 |
+
McFee, B., Lostanlen, V., McVicar, M., Metsai, A., Balke, S., Thomé, C., Raffel, C., Malek, A., Lee, D., Zalkow, F., Lee, K., Nieto, O., Mason, J., Ellis, D., Yamamoto, R., Seyfarth, S., Battenberg, E., Bittner, R., Choi, K., Moore, J., Wei, Z., Hidaka, S., nullmightybofo, Friesch, P., Stöter, F.-R., Here, D., Kim, T., Vollrath, M., and Weiss, A. librosa/librosa: 0.7.2, January 2020. URL https://doi.org/10.5281/zenodo.3606573.
|
| 130 |
+
|
| 131 |
+
Meyer, M., Beutel, J., and Thiele, L. Unsupervised feature learning for audio analysis, 2017.
|
| 132 |
+
|
| 133 |
+
Misra, I., Zitnick, C. L., and Hebert, M. Shuffle and learn: Unsupervised learning using temporal order verification, 2016.
|
| 134 |
+
|
| 135 |
+
Nguyen, T.-S., Stueker, S., Niehues, J., and Waibel, A. Improving sequence-to-sequence speech recognition training with on-the-fly data augmentation, 2019.
|
| 136 |
+
|
| 137 |
+
Park, D. S., Chan, W., Zhang, Y., Chiu, C.-C., Zoph, B., Cubuk, E. D., and Le, Q. V. SpecAugment: A simple data augmentation method for automatic speech recognition. Interspeech 2019, Sep 2019. doi: 10.21437/Interspeech.2019-2680. URL http://dx.doi.org/10.21437/Interspeech.2019-2680.
|
| 140 |
+
|
| 141 |
+
Roche, F., Hueber, T., Limier, S., and Girin, L. Autoen-coders for music sound modeling: a comparison of linear, shallow, deep, recurrent and variational models, 2018.
|
| 142 |
+
|
| 143 |
+
Ryan, P., Yang, C., and Wilson, N. Industrial audio classification with music domain features. https://github.com/SingingData/Industrial-Audio-Classification/blob/master/Media/Ryan-NeurIPS-poster.pdf, 2019.
|
| 144 |
+
|
| 145 |
+
Salamon, J. and Bello, J. P. Deep convolutional neural networks and data augmentation for environmental sound classification. CoRR, abs/1608.04363, 2016. URL http://arxiv.org/abs/1608.04363.
|
| 146 |
+
|
| 147 |
+
Tagliasacchi, M., Gfeller, B., de Chaumont Quitry, F., and Roblek, D. Self-supervised audio representation learning for mobile devices, 2019.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/_P9LyJ5pMDb/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,132 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
§ USING SELF-SUPERVISED LEARNING OF BIRDSONG FOR DOWNSTREAM INDUSTRIAL AUDIO CLASSIFICATION
|
| 2 |
+
|
| 3 |
+
Patty Ryan ${}^{1}$ Sean Takafuji ${}^{1}$ Chenhao Yang ${}^{1}$ Nile Wilson ${}^{1}$ Christopher McBride ${}^{2}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
In manufacturing settings, workers rely on their sense of hearing and their knowledge of what sounds correct to help them identify machine quality problems based on the sound pitch, rhythm, timbre and other characteristics of the sound of the machine in operation. Using machine learning to classify these sounds has broad applications for automating the manual quality recognition work currently being done, including automating machine operator training, automating quality control detection, and diagnostics across manufacturing and mechanical service industries. We previously established that models taking input pitch information from music domains can dramatically improve classification model performance on industrial machine audio leveraging a pretrained pitch model.
|
| 8 |
+
|
| 9 |
+
In this work, we explore the use of self-supervised learning on pitch-intensive birdsong rather than a pre-trained model. To reduce our reliance on a pretrained pitch model and reduce the quantity of labeled industrial audio required, we implement self-supervised representation learning using plentiful, license-free, unlabeled, pitch-intensive wild birdsong recordings, with audio data augmentation to perform classification on industrial audio. We show that: 1. We can preprocess the unlabeled birdsong data sample with unsupervised methods to eliminate low signal sample and mask low frequency noise leaving just desirable chirp-rich sample. 2. We can identify effective representations and approaches for learning birdsong pitch content by comparing select self-supervised pretext task training of temporal sequence prediction and sequence generation. 3. We can identify effective augmentation methods for learning pitch through comparison of the impact of a variety of audio data augmentation methods on self-supervised learning. And 4. Downstream fine-tuned models deliver improved performance classifying industrial motor audio. We demonstrate that motorized sound classification models using self-supervised learning with a dataset of pitch intensive birdsong, combined with select data augmentation, achieves better results than using the pre-trained pitch model.
|
| 10 |
+
|
| 11 |
+
§ 1. INTRODUCTION
|
| 12 |
+
|
| 13 |
+
We were introduced to the challenge of classifying industrial audio last year when working with a manufacturer that sought to improve welding quality. The correct distance of the welding device to the weld is a critical element in creating a quality weld. If the weld were to be conducted too close or too far from the weld, the weld would be weak and could fail. The master welder, in a tour of the factory floor, was able to immediately call our attention to the difference in the sounds of good welds and the sounds of bad welds. They had distinctively different pitches due to the reflection of the sound off the surface at different distances. However, the light emitted during welding made photography at this distance impractical. And so we investigated classifying the audio based on representations of the pitch with the immediate application of enabling training by allowing welders to get immediate feedback on the quality of their welds.
|
| 14 |
+
|
| 15 |
+
While there were recent advances using deep learning in areas of music machine learning classification and music synthesis, there are very few applications of these frequency and pitch machine learning methods on classification of audio in the industrial environment. We leveraged the CREPE pre-trained pitch estimation model (Kim et al., 2018) and found it performed reasonably well at classifying weld pitches. We implemented a multi-input ConvNet model combining 1D representations of CREPE pitch estimations from the time domain waveform, and Constant-Q (CQT) 2D transforms of the waveform, yielding a high accuracy classification of welding distance (Ryan et al., 2019). We experimented with other industrial audio datasets with the same modeling approach to understand whether the approach was generalizable. We classified correct and incorrect machining lathe settings and distinguished between the motors of different ferry boats operating on the Puget Sound. However, we had two challenges. Labeled data in industrial audio is scarce and expensive to collect, and relying on the CREPE model prediction proved too slow in industrial production settings. In this work we explore the use of self-supervised learning to reduce our labeled data requirements, and we explore whether we can learn enough of the pitch information from a birdsong dataset to allow us to eliminate our reliance on the CREPE pretrained pitch model.
|
| 16 |
+
|
| 17 |
+
${}^{1}$ Microsoft, Redmond, Washington, USA ${}^{2}$ ADM Associates, Reno, Nevada, USA. Correspondence to: Patty Ryan <Patty.Ryan@microsoft.com>.
|
| 18 |
+
|
| 19 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 20 |
+
|
| 21 |
+
§ 1.1. CONTRIBUTION
|
| 22 |
+
|
| 23 |
+
We make several contributions that we outline here. We demonstrate the efficacy of using self-supervision with a pitch-intensive birdsong model to allow downstream classification of pitch-intensive industrial motor audio. The application of learning pitch is extensively applied in the music realm (Kim et al., 2018), (Huang et al., 2018), (Engel et al., 2019). Enabling learning from the pitch present in birdsong, and using those pretrained weights to improve classification on the pitch present in industrial audio is novel. We describe unsupervised methods to preprocess the unlabeled birdsong audio to exclude low quality samples leaving us only with high quality samples. Finally, we demonstrate the efficacy of several audio data augmentation methods at enhancing self-supervised learning of pitch and demonstrate this on the downstream classification task. The source code for the implementation of our paper is available at: https://github.com/SingingData/birdsong-self-supervised-learning
|
| 24 |
+
|
| 25 |
+
§ 2. DATA AUGMENTATION FOR SELF-SUPERVISED LEARNING ON A BIRDSONG DATASET
|
| 26 |
+
|
| 27 |
+
§ 2.1. DATASET
|
| 28 |
+
|
| 29 |
+
In the foothills of the Carson Range, within the Sierra Mountains, we captured footage and audio from a motion-activated wildlife camera. The camera was trained on bird feeders and the surrounding area, and captured eleven second video and audio samples ( ${44.1}\mathrm{{kHz}}$ ). The birds recorded included Quail, Blue Jays, Black Headed Grosbeaks, Doves, Robins, Red Finches, Stellars Jays, Black-billed Magpies, Yellow Warblers and Varied Thrush, among others. We extracted the audio from the captured video and resampled it to a ${22}\mathrm{{kHz}}$ sample rate.
|
| 30 |
+
|
| 31 |
+
§ 2.2. PREPROCESSING
|
| 32 |
+
|
| 33 |
+
Some of our video samples were undesirable and needed to be excluded from our sample. For example, some of the motion activated video samples had inadvertent wind activations with no birdsong. Some samples had very faint birdsong. Still others had background noise including sprinklers, cars, and airplanes. First, we eliminated audio samples with little differential between average magnitude and maximum magnitude of the audio signal. Next, we performed a K-means cluster analysis on the CQT unrolled vectors to quickly identify and eliminate clusters of undesirable noise. These two methods allowed us to quickly eliminate one-third of our sample, leaving 1,252 total clean, high quality audio samples.
|
| 34 |
+
|
| 35 |
+
§ 2.3. TRANSFORM
|
| 36 |
+
|
| 37 |
+
Once these samples were cleaned, we represented pitch and timbre through frequency-domain changes over time. We apply a Constant-Q Transform (CQT) to produce a 2D CQT spectrogram for each of our audio waveform inputs. CQT is a time-frequency analysis method with greater frequency resolution at lower frequencies and greater time resolution towards higher frequencies, better capturing human-audible pitch and timbre. Our use of CQT was inspired by TimbreTRON (Huang et al., 2018).
|
| 38 |
+
|
| 39 |
+
§ 2.4. AUGMENTATIONS
|
| 40 |
+
|
| 41 |
+
The data augmentation methods we applied are as follows:
|
| 42 |
+
|
| 43 |
+
* Pitch Shifting: The pitch shift augmentation is applied using the Python library librosa (McFee et al., 2020) with the values $\{ - 2, - 1,1,2\}$ being empirically chosen based on the methods from (Salamon & Bello, 2016). The raw frequency values are shifted in increments of semitones with a positive value increasing the pitch and a negative value decreasing the pitch.
* Octave Shifting: The octave shift augmentation uses the same methodology as the pitch shift augmentation, with an octave shift of 1 being equivalent to a pitch shift of 12 semitones. We reason that for our pretext task on birdsong data to transfer well to the industrial audio setting, very large shifts in pitch would be valuable. We used octave shifts with the values $\{ -2, -1, 1, 2\}$ .
* Time Stretching: The time stretching augmentation extends or compresses the waveform by the rates $\{ 2, 5, 0.2, 0.5\}$ . A rate of 2 makes the audio sample twice its original speed, compressing the waveform; likewise, a rate of 0.5 makes the audio sample half its original speed, extending the waveform.
Table 1. Classification accuracies on the pretext task with birdsong data. All models trained for 20 epochs.

| ARCHITECTURE | AUGMENTATIONS | TRAINING SAMPLES | TRAIN (ACC) | TRAIN (LOSS) | VAL (ACC) | VAL (LOSS) |
| --- | --- | --- | --- | --- | --- | --- |
| TRIPLET ALEXNET | NONE | 763 | 92.27 | 2.5789 | 83.44 | 2.4384 |
| TRIPLET ALEXNET | PITCH + OCTAVE | 11445 | 85.64 | 1.5740 | 81.05 | 1.2945 |
| TRIPLET ALEXNET | TIME STRETCHING | 3052 | 84.19 | 1.5734 | 74.13 | 1.3982 |
| TRIPLET ALEXNET | SPECAUGMENT | 3052 | 87.13 | 3.0731 | 77.16 | 2.9691 |
* SpecAugment: Introduced for speech recognition, SpecAugment (Park et al., 2019) applies a frequency mask and time masks on top of the log mel spectrogram representation of the audio sample. We use the library nlpaug (Ma, 2019) to apply this augmentation on the CQT representation of the audio sample. Using the notation and descriptions of (Park et al., 2019), on each audio sample we apply a frequency mask covering 30 consecutive frequency channels, denoted $\lbrack f, f + {30})$ , where $f$ is chosen from a uniform distribution over $\lbrack 0, \nu - {30})$ and $\nu$ is the number of frequency channels in the CQT representation. Additionally, two time masks are applied on 20 and 10 consecutive time steps, denoted ${T}_{0} = \left\lbrack {{t}_{0},{t}_{0} + {20}}\right)$ and ${T}_{1} = \left\lbrack {{t}_{1},{t}_{1} + {10}}\right)$ , with the additional constraint that ${T}_{0} \cap {T}_{1} = \varnothing$ .
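Below is a minimal sketch of these augmentations, assuming 22 kHz mono waveforms and a magnitude CQT array. Note that we use nlpaug for SpecAugment in practice, so the direct NumPy masking here is illustrative only, and it does not enforce the disjointness constraint on the two time masks.

```python
import numpy as np
import librosa

def pitch_shift(y, sr, n_steps):           # n_steps in {-2, -1, 1, 2} semitones
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

def octave_shift(y, sr, n_octaves):        # 1 octave == 12 semitones
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=12 * n_octaves)

def time_stretch(y, rate):                 # rate in {0.2, 0.5, 2, 5}
    return librosa.effects.time_stretch(y, rate=rate)

def spec_masks(cqt, f_width=30, t_widths=(20, 10)):
    """SpecAugment-style masks on a CQT magnitude array (freq x time)."""
    out, (v, T) = cqt.copy(), cqt.shape
    f0 = np.random.randint(0, v - f_width)
    out[f0:f0 + f_width, :] = 0.0          # one frequency mask
    for w in t_widths:                     # two time masks
        t0 = np.random.randint(0, T - w)
        out[:, t0:t0 + w] = 0.0
    return out
```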
§ 3. SELF-SUPERVISED LEARNING METHODS

§ 3.1. SELF-SUPERVISED LEARNING PRETEXT TASK

For the self-supervised pretext task, we chose verifying sequence temporal order, drawing inspiration from the "Shuffle and Learn" pretext task of (Misra et al., 2016). We reasoned that the pattern of the birdsong could be learned in order to determine temporal order, and in so doing the pitch of the notes of the birdsong would be learned. Following the (Misra et al., 2016) approach, we first created tuples of sequences by splitting each sample into four chunks of 2.6 seconds each, which we denote (a, b, c, d). Next, for each sample we labeled the sequence (a, b, c), leaving out the last chunk, as a positive example. To create negative examples, we incorrectly ordered the sequence using the left-out chunk 'd', resulting in the sequences (b, a, d) and (d, a, b). A sketch of this example construction follows.
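A minimal sketch of the tuple construction described above; the chunk length and sample rate follow the text, while the slicing details are our assumptions.

```python
def make_examples(y, sr=22050, chunk_s=2.6):
    """Split one clip into chunks (a, b, c, d); emit one positive example
    and the two incorrectly ordered negatives described above."""
    n = int(chunk_s * sr)
    a, b, c, d = (y[i * n:(i + 1) * n] for i in range(4))
    positive = (a, b, c)
    negatives = [(b, a, d), (d, a, b)]
    return positive, negatives
```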
§ 3.2. MODEL ARCHITECTURE

Again following the "Shuffle and Learn" design, we designed a Triplet Siamese network for sequence verification. We reduced the last dense layer of the AlexNet architecture modestly to fit available computational resources. We applied the LeCun normal initializer and leaky ReLU activations, and liberally applied dropout. We balanced the datasets.
§ 3.3. DOWNSTREAM TASK

For our downstream task, we classified Washington State Ferry recordings, distinguishing between the Wenatchee and the Tacoma motors based on 2.6-second samples.
For our downstream architecture, we took just one of the Siamese triplet branches to form the basis of our downstream model. We loaded the pre-trained weights into each of the convolutional layers, and added two trainable dense layers and an output layer. We froze the first three layers and allowed the last three layers to be trainable. We trained the downstream task for 20 epochs with each of the data augmentation permutations. A sketch of this transfer setup follows.
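A minimal PyTorch-style sketch of the transfer setup, purely illustrative: the paper does not specify a framework, and `build_alexnet_branch`, the weight file name, the `features` attribute, and the dense layer sizes are hypothetical placeholders.

```python
import torch
import torch.nn as nn

branch = build_alexnet_branch()                      # hypothetical helper: one triplet branch
branch.load_state_dict(torch.load("pretext_weights.pt"), strict=False)

for layer in list(branch.features.children())[:3]:   # freeze the first three layers
    for p in layer.parameters():
        p.requires_grad = False

classifier = nn.Sequential(                          # convolutional stem + new head
    branch.features,
    nn.Flatten(),
    nn.LazyLinear(256), nn.LeakyReLU(), nn.Dropout(0.5),
    nn.LazyLinear(64), nn.LeakyReLU(), nn.Dropout(0.5),
    nn.LazyLinear(2),                                # Wenatchee vs. Tacoma
)
```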
§ 4. RESULTS

Self-supervised training on birdsong proved effective at improving our downstream classification performance. Two data augmentation techniques in particular, pitch shifting and time stretching, proved the most effective at improving downstream performance. With either of these augmentations present, our downstream model achieved ${100}\%$ classification accuracy within 10 epochs of training. By contrast, without pre-training, the downstream model failed to learn. In comparison, the model attained a comparable accuracy of 99.75% using a pre-trained pitch model, CREPE, combined with CQT and SpecAugment data augmentation. The performance of the model on the pretext task is reported in Table 1, and on the downstream task in Table 2. We note the quantity of augmented training data in the tables. For the pitch + octave and time stretching augmentations, we generated a greater number of training samples, which may have resulted in lower training loss on the downstream ferry audio training.
§ 5. RELATED WORKS

Self-Supervised Learning. Self-supervised methods have shown promising growth in the natural language space and for audio waveforms, with recent contributions such as Audio ALBERT (Chi et al., 2020). In the general audio space, there has been a larger focus on learning high-quality audio representations through unsupervised methods, such as autoencoders (Roche et al., 2018) equipped with convolutional layers, or additionally with recurrent layers (Meyer et al., 2017; Chung et al., 2016). One self-supervised task, TemporalGap (Tagliasacchi et al., 2019), focuses on estimating the length of a time-masked temporal slice. Instead of using TemporalGap as our pretext task, we incorporated this kind of time masking into our augmentations through SpecAugment (Park et al., 2019), an additional augmentation method that demonstrated clear value for the quality of the learned representations.
Table 2. Classification accuracies on the downstream task with ferry data. Pre-Trained indicates that the self-supervised model weights were transferred onto the classifier. Where indicated, the augmentations were applied to the pretext-task data (the birdsong) but not to the downstream-task data (the ferry sound). Training samples: 167 recordings. Validation samples: 66 recordings.

| ARCHITECTURE | PRE-TRAINED | AUGMENTATIONS | TRAIN (ACC) | TRAIN (LOSS) | VAL (ACC) | VAL (LOSS) |
| --- | --- | --- | --- | --- | --- | --- |
| ALEXNET | No | NONE | 56.29 | 17843.1402 | 53.79 | 943.1954 |
| ALEXNET | Yes | NONE | 59.88 | 103428.4201 | 87.12 | 6730.2180 |
| ALEXNET | Yes | PITCH + OCTAVE | 74.85 | 19469.7310 | 100 | 0.3400 |
| ALEXNET | Yes | TIME STRETCHING | 69.76 | 16478.3154 | 100 | 0.3400 |
| ALEXNET | Yes | SPECAUGMENT | 59.28 | 15824.3084 | 92.42 | 256.0956 |
Audio Representations and Augmentations. The use of different transformations of the audio waveform, such as the short-time Fourier transform (STFT), linear and log mel spectrograms, and the continuous wavelet transform (CWT), has been studied on environmental audio classification tasks such as UrbanSound8K by (Huzaifah, 2017). Additional work on representations and augmentations comes from the speech recognition space (Nguyen et al., 2019).
Application in Vision. While our method focuses strictly on learning from the audio waveform, the method we drew inspiration from (Misra et al., 2016) operates on video frames. Other methods for self-supervision when both video frames and audio waveforms are available have been explored (Alwassel et al., 2019; Korbar et al., 2018). Our method, applied with "Shuffle and Learn" (Misra et al., 2016), offers a new self-supervised learning task to the combined video and audio space.
§ 6. DISCUSSION

For pitch-intensive downstream classification tasks, pretraining with license-free birdsong recordings appears effective at improving performance, even for modestly sized labeled datasets. For our industrial enterprise implementations of audio machine learning, self-supervised learning is a promising approach. In this case, classification on the ferry motor dataset may be too easy, and we look forward to extending our experimentation to other, more challenging industrial audio datasets. We believe audio and video of the natural world with relevant characteristics may prove a cost-effective data source for self-supervised learning.
Further experimentation is called for, given the differences between our train and validation accuracies shown in Table 2. We trained our downstream ferry classification models for 10 epochs each; additional training may improve the results.
§ 7. CONCLUSION

In this paper, we share a simple insight into the strong pitch component shared by birdsong, music, and industrial audio. We demonstrate the efficacy of a selection of audio data augmentation techniques at representing the pitch component of birdsong and industrial audio. Additionally, we demonstrate unsupervised data pre-processing methods that select unlabeled birdsong data to yield pitch-intensive samples suited for self-supervised training. Finally, we demonstrate the effectiveness of self-supervised learning with a pretext task of sequence temporal order verification at learning pitch information that dramatically improves downstream industrial audio classification.
In future work, we aim to expand upon our method by leveraging other sources of audio data for the pretext task, such as AudioSet (Gemmeke et al., 2017). Additionally, we would like to investigate how our learned representation can be used in conjunction with representations obtained from other pretext tasks, such as (Tagliasacchi et al., 2019), to capture different features of the audio waveform.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/aaI4jKANEH4/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,383 @@
# Revisiting Representation Learning for Singing Voice Separation with Sinkhorn Distances
Stylianos I. Mimilakis ${}^{ * }{}^{1}$ Konstantinos Drossos ${}^{ * }{}^{2}$ Gerald Schuller ${}^{3}$

## Abstract

In this work we present a method for unsupervised learning of audio representations, focused on the task of singing voice separation. We build upon a previously proposed method for learning representations of time-domain music signals with a re-parameterized denoising autoencoder, extending it by using the family of Sinkhorn distances with entropic regularization. We evaluate our method on the freely available MUSDB18 dataset of professionally produced music recordings, and our results show that Sinkhorn distances with a small strength of entropic regularization marginally improve the performance of informed singing voice separation. By increasing the strength of the entropic regularization, the learned representations of the mixture signal consist of almost perfectly additive and distinctly structured sources.
## 1. Introduction

Music source separation aims at the estimation of the individual music sources of an observed mixture signal. To that aim, supervised deep learning (DL) based approaches are shown to yield remarkable results (Hennequin et al., 2019; Défossez et al., 2019; Stöter et al., 2019; Samuel et al., 2020). Although different types of sources can be estimated from a music mixture, a specific task of music source separation that has received a lot of attention in relevant research communities is the separation of the singing voice, or singing voice source separation (Rafii et al., 2018).

State-of-the-art approaches in DL-based music and singing voice source separation have considered using both pre-computed and learned signal representations. The approaches that utilize pre-computed signal representations have extensively employed the short-time Fourier transform (STFT) (Hennequin et al., 2019; Stöter et al., 2019; Drossos et al., 2018; Mimilakis et al., 2018). On the other hand, learned representations are commonly used in end-to-end models and are jointly learned with the parameters of the rest of the model.

In both of the previous approaches, the learning of the representations is based on objectives that assess the reconstruction of the signals of the target sources (Défossez et al., 2019; Samuel et al., 2020). In many cases, the approaches based on end-to-end models do not yield better performance than approaches using representations computed with the STFT (Défossez et al., 2019; Samuel et al., 2020; Tzinis et al., 2020). Furthermore, the learned representations obtained by end-to-end models are not easily nor intuitively interpreted, compared to the typical STFT representation. In order to bridge the gap in separation performance and interpretability between end-to-end-based and STFT-based approaches, recent studies focus on representation learning (Tzinis et al., 2020; Mimilakis et al., 2020).

In (Tzinis et al., 2020), a sound source separation method focused on representation learning is presented. An encoder gets as input the signals of the sources and their corresponding mixture, and outputs latent representations of the signals of each source and the mixture. Then, using these latent representations, the method calculates and applies source-dependent masks to the latent representation of the mixture. The result of the application of the masks is given as input to the decoder, which outputs an estimate of the signal of each source. The encoder and the decoder are jointly optimized to minimize the reconstruction error between the ground truth and estimated signals of each source. However, using reconstruction objectives for the separation of only specific sources could severely restrict the representation learning capabilities of encoder-decoder methods (Vincent, 2011). In (Mimilakis et al., 2020), it is proposed to learn representations for singing voice separation in an unsupervised way using a re-parameterized denoising autoencoder (DAE) (Vincent et al., 2010). The re-parameterization replaces the decoding basis functions with amplitude-modulated cosine functions whose parameters are learned with the rest of the DAE. This results in an interpretable representation of the singing voice signal that conveys amplitude information for modulated sinusoidal bases. The re-parameterization is similar to Sinc-Networks (Ravanelli & Bengio, 2018), which use sinc functions for encoding speech signals. The parameters of the denoising autoencoder employed in (Mimilakis et al., 2020) are optimized using two objectives. The first objective is to minimize the reconstruction error between the clean and the reconstructed singing voice signal, and the second objective enforces the smoothness of the mixture signal's representation.
---

*Equal contribution ${}^{1}$ Fraunhofer-IDMT, Ilmenau, Germany ${}^{2}$ Audio Research Group, Tampere University, Tampere, Finland ${}^{3}$ Applied Media Systems Group, Technical University of Ilmenau, Ilmenau, Germany. Correspondence to: Stylianos I. Mimilakis <mis@idmt.fraunhofer.de>.

Proceedings of the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria, PMLR 108, 2020. Copyright 2020 by the author(s).

---

In this work we focus on unsupervised representation learning, aiming at representations of music signals that offer enhanced interpretability combined with improved source separation performance. We build on the work presented in (Mimilakis et al., 2020) and extend it by using the Sinkhorn distances with entropic regularization (Cuturi, 2013) as a representation-specific objective. Our contribution is to experimentally show that Sinkhorn distances with entropic regularization can assist in learning representations in which the sources can be efficiently separated, and in which the representations of the sources are distinctly structured and additive.
## Notation

Bold lowercase letters, e.g., "x", denote vectors and bold uppercase letters, e.g. "X", denote matrices. The $l$ -th element of a vector is denoted as ${\mathbf{x}}_{\left\lbrack l\right\rbrack }$ . Similarly, accessing elements from matrices is denoted as ${\mathrm{X}}_{\left\lbrack l,{l}^{\prime }\right\rbrack }$ .
## 2. Proposed method

Our method follows the one presented in (Mimilakis et al., 2020) and employs an encoder $E\left( \cdot \right)$ and a decoder $D\left( \cdot \right)$ . The input to our method is a music signal, $\mathbf{x} \in {\mathbb{R}}^{N}$ , with $N$ time-domain samples. The output of the method is the learned non-negative representation of $\mathbf{x}$ , $\mathbf{A} \in {\mathbb{R}}_{ > 0}^{C \times T}$ , with $T$ templates of $C$ features. The $C$ features can be viewed as analogous to the frequency bins and the $T$ templates as analogous to the time-frames in a time-frequency representation. $\mathbf{A}$ is computed by the encoder $E\left( \cdot \right)$ , and is interpreted as the magnitude information for a real-valued, sinusoidal-based model employed by the decoder $D\left( \cdot \right)$ .

To optimize $E\left( \cdot \right)$ , we employ the decoder $D\left( \cdot \right)$ and a dataset of monaural (single channel) recordings of singing voice, ${\mathbf{x}}_{\mathrm{v}} \in {\mathbb{R}}^{N}$ , and accompanying musical instruments. Using ${\mathbf{x}}_{\mathrm{v}}$ we create two synthetic signals. The first synthetic signal, ${\widetilde{\mathbf{x}}}_{\mathrm{m}} \in {\mathbb{R}}^{N}$ , is the result of an additive corruption process, where the accompanying musical instruments such as drums, guitars, synthesizers, and bass (i.e. a generic multi-modal distribution-based noise) are added to ${\mathbf{x}}_{\mathrm{v}}$ . The second synthetic signal, ${\widetilde{\mathbf{x}}}_{\mathrm{v}} \in {\mathbb{R}}^{N}$ , is also the result of a corruption process, where Gaussian noise is added to ${\mathbf{x}}_{\mathrm{v}}$ , independently of the amplitude of ${\mathbf{x}}_{\mathrm{v}}$ . During the optimization process (i.e. training), the encoder $E\left( \cdot \right)$ computes two non-negative representations ${\mathbf{A}}_{\mathrm{m}},{\mathbf{A}}_{\mathrm{v}} \in {\mathbb{R}}_{ > 0}^{C \times T}$ from the two above-mentioned synthetic signals, ${\widetilde{\mathbf{x}}}_{\mathrm{m}}$ and ${\widetilde{\mathbf{x}}}_{\mathrm{v}}$ , respectively. ${\mathbf{A}}_{\mathrm{v}}$ is used as input to $D\left( \cdot \right)$ , and $D\left( \cdot \right)$ outputs an approximation of the clean singing voice signal ${\mathbf{x}}_{\mathrm{v}}$ , denoted ${\widehat{\mathbf{x}}}_{\mathrm{v}}$ . ${\mathbf{A}}_{\mathrm{m}}$ is solely used to calculate an extra loss that allows $E\left( \cdot \right)$ to learn information regarding the additive multi-modal noise (Mimilakis et al., 2020). An illustration of the training procedure is given in Figure 1. After the optimization process, $E\left( \cdot \right)$ can take as input any musical signal $\mathbf{x}$ , and will output its representation $\mathbf{A}$ . The benefit is that $\mathbf{A}$ has good interpretability attributes, e.g. it is non-negative and has a structured, spectrogram-like form, and it can be effectively used in the downstream task of singing voice separation.
Figure 1. Overview of our proposed method for representation learning.
### 2.1. Encoder

The encoder $E\left( \cdot \right)$ consists of two one-dimensional (1D) strided convolutions. The first 1D convolution uses a stride $S$ and a set of $C$ kernels, ${\mathbf{k}}_{c} \in {\mathbb{R}}^{L}$ , where $L$ is the temporal length of each kernel. The first convolution takes as inputs the signals ${\widetilde{\mathbf{x}}}_{\mathrm{m}}$ and ${\widetilde{\mathbf{x}}}_{\mathrm{v}}$ , and outputs the learned latent representations ${\widetilde{\mathbf{H}}}_{\mathrm{m}} \in {\mathbb{R}}_{ \geq 0}^{C \times T}$ and ${\widetilde{\mathbf{H}}}_{\mathrm{v}} \in {\mathbb{R}}_{ \geq 0}^{C \times T}$ , respectively, using

$$
{\widetilde{\mathrm{H}}}_{\star \left\lbrack {c, t}\right\rbrack } = \mathop{\sum }\limits_{{l = 0}}^{{L - 1}}{\widetilde{\mathrm{x}}}_{\star \left\lbrack {{St} + l}\right\rbrack }{\mathrm{k}}_{c\left\lbrack l\right\rbrack }, \tag{1}
$$

where " $\star$ " refers to either " $\mathrm{m}$ " or " $\mathrm{v}$ " for brevity, and $t \in \left\lbrack {0,\ldots , T - 1}\right\rbrack$ . Appropriate zero-padding is applied to ${\widetilde{\mathbf{x}}}_{ \star }$ , so that $T = \lceil N/S\rceil$ , where $\lceil \cdot \rceil$ is the ceiling function. Each ${\widetilde{\mathbf{H}}}_{ \star }$ is used as input to the second 1D convolution, which uses another set of $C$ kernels, ${\mathbf{K}}_{{c}^{\prime }}^{\prime } \in {\mathbb{R}}^{{L}^{\prime } \times C}$ , where ${c}^{\prime } = \left\lbrack {1,\ldots , C}\right\rbrack$ , with a temporal length ${L}^{\prime } \ll L$ . The output of the second convolution is ${\mathbf{H}}_{ \star } \in {\mathbb{R}}^{C \times T}$ , and the convolution is performed with a dilation factor of $\phi$ and a unit stride, as
$$
{\mathrm{H}}_{\star \left\lbrack {{c}^{\prime }, t}\right\rbrack } = \mathop{\sum }\limits_{{c = 0}}^{{C - 1}}\mathop{\sum }\limits_{{{l}^{\prime } = 0}}^{{{L}^{\prime } - 1}}{\widetilde{\mathrm{H}}}_{\star \left\lbrack {c, t + \phi {l}^{\prime }}\right\rbrack }{\mathrm{K}}_{{c}^{\prime }\left\lbrack {{l}^{\prime }, c}\right\rbrack }^{{}^{\prime }}. \tag{2}
$$

Then, each ${\mathbf{H}}_{ \star }$ is used in a residual connection, followed by the application of the rectified linear unit (ReLU) function (Nair & Hinton, 2010), as

$$
{\mathbf{A}}_{ \star } = \operatorname{ReLU}\left( {{\mathbf{H}}_{ \star } + {\widetilde{\mathbf{H}}}_{ \star }}\right) . \tag{3}
$$
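A minimal PyTorch sketch of the encoder of Eqs. (1)-(3), using the hyper-parameters reported in the supplementary material; the exact padding scheme is our assumption, chosen so that $T \approx \lceil N/S \rceil$ holds.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Sketch of E(.) with C=800 kernels of length L=2048, stride S=256,
    and a second convolution with L'=5 and dilation phi=10."""
    def __init__(self, C=800, L=2048, S=256, L2=5, phi=10):
        super().__init__()
        self.conv1 = nn.Conv1d(1, C, kernel_size=L, stride=S,
                               padding=L // 2, bias=False)               # Eq. (1)
        self.conv2 = nn.Conv1d(C, C, kernel_size=L2, dilation=phi,
                               padding=(L2 - 1) * phi // 2, bias=False)  # Eq. (2)

    def forward(self, x):              # x: (batch, 1, N)
        H1 = self.conv1(x)             # the latent H~ of Eq. (1)
        H2 = self.conv2(H1)            # the latent H of Eq. (2)
        return torch.relu(H1 + H2)     # residual connection + ReLU, Eq. (3)
```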
This is performed in order to enforce smooth and non-negative representations. Smoothness and non-negativity are attributes that can enhance interpretability and are useful for the separation of audio and music sources (Smaragdis & Venkataramani, 2017). To further enforce smooth representations under realistic corruption processes, in (Mimilakis et al., 2020) it is proposed to minimize the (anisotropic) total-variation denoising cost function, ${\mathcal{L}}_{\mathrm{{TV}}}$ (Rudin et al., 1992), of the representation ${\mathbf{A}}_{\mathrm{m}}$ . ${\mathcal{L}}_{\mathrm{{TV}}}$ is computed as

$$
{\mathcal{L}}_{\mathrm{{TV}}}\left( {\mathbf{A}}_{\mathrm{m}}\right) = \frac{1}{CT}\left( \mathop{\sum }\limits_{{c = 1}}^{{C - 1}}\mathop{\sum }\limits_{{t = 0}}^{{T - 1}}\left| {{\mathrm{A}}_{\mathrm{m}\left\lbrack {c, t}\right\rbrack } - {\mathrm{A}}_{\mathrm{m}\left\lbrack {c - 1, t}\right\rbrack }}\right| + \mathop{\sum }\limits_{{t = 1}}^{{T - 1}}\mathop{\sum }\limits_{{c = 0}}^{{C - 1}}\left| {{\mathrm{A}}_{\mathrm{m}\left\lbrack {c, t}\right\rbrack } - {\mathrm{A}}_{\mathrm{m}\left\lbrack {c, t - 1}\right\rbrack }}\right| \right). \tag{4}
$$

Practically, ${\mathcal{L}}_{\mathrm{{TV}}}$ penalizes $E\left( \cdot \right)$ by the norm of the first-order difference across the time-frames $T$ and templates $C$ , promoting slowly time-varying representations and grouping of the template activity. The previously mentioned representation attributes are formed from domain knowledge that is based on the STFT.
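A direct PyTorch transcription of Eq. (4):

```python
import torch

def tv_loss(A):
    """Anisotropic total variation of Eq. (4); A: (C, T), non-negative."""
    C, T = A.shape
    d_c = torch.abs(A[1:, :] - A[:-1, :]).sum()   # differences across templates
    d_t = torch.abs(A[:, 1:] - A[:, :-1]).sum()   # differences across time
    return (d_c + d_t) / (C * T)
```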
According to (Arjovsky et al., 2017) (Theorem 2), the total-variation distance, in our particular case the sum of absolute differences employed in Eq. (4), is not a suitable cost function for data distributions supported by low-dimensional manifolds; optimal transportation distances are suitable instead. We hypothesize that the singing voice and mixture signals, and their corresponding representations, can be described by low-dimensional manifolds, and we propose to replace ${\mathcal{L}}_{\mathrm{{TV}}}$ with Sinkhorn distances, ${\mathcal{L}}_{\mathrm{{SK}}}$ , because ${\mathcal{L}}_{\mathrm{{SK}}}$ allows an efficient computation of the optimal transportation cost (Cuturi, 2013). More specifically, we use

$$
{\mathcal{L}}_{\mathrm{{SK}}}\left( {\mathbf{A}}_{\mathrm{m}}\right) = \left\langle {{\mathbf{P}}_{\lambda },\psi \left( {\mathbf{A}}_{\mathrm{m}}\right) }\right\rangle , \tag{5}
$$

where $\langle \cdot , \cdot \rangle$ is the Frobenius dot-product and $\psi : {\mathbb{R}}_{ > 0}^{C \times T} \mapsto {\mathbb{R}}_{ \geq 0}^{T \times T}$ is a function that computes the cost matrix $\mathbf{M} \in {\mathbb{R}}_{ \geq 0}^{T \times T}$ of pair-wise distances, defined as
$$
\psi \left( {\mathbf{A}}_{\mathrm{m}}\right) \mathrel{\text{:=}} {\mathrm{M}}_{t,{t}^{\prime }} = {\left( \mathop{\sum }\limits_{{c = 0}}^{{C - 1}}{\left| {\mathrm{A}}_{\mathrm{m}\left\lbrack {c, t}\right\rbrack } - {\mathrm{A}}_{\mathrm{m}\left\lbrack {c,{t}^{\prime }}\right\rbrack }\right| }^{p}\right) }^{1/p}, \tag{6}
$$

for $p = 1$ and $t,{t}^{\prime } \in \left\lbrack {0,\ldots , T - 1}\right\rbrack$ . Only for, and prior to, the computation of $\mathbf{M}$ , ${\mathbf{A}}_{\mathrm{m}}$ is normalized so that the features at each time-frame $t$ sum up to unity. Furthermore, ${\mathbf{P}}_{\lambda } \in {\mathbb{R}}_{ > 0}^{T \times T}$ is the transportation plan, computed by solving the minimization problem
$$
{\mathbf{P}}_{\lambda } = \underset{\mathbf{P} \in \mathbb{U}\left( {r, c}\right) }{\arg \min }\left\langle {\mathbf{P},\psi \left( {\mathbf{A}}_{\mathrm{m}}\right) }\right\rangle - \frac{1}{\lambda }H\left( \mathbf{P}\right) , \tag{7}
$$

where $H\left( \cdot \right)$ denotes the entropy function and $\lambda > 0$ is a scalar that controls the strength of the entropic regularization. $\mathbb{U}\left( {r, c}\right)$ is the set of non-negative matrices of size $T \times T$ whose rows and columns sum up to $r$ and $c$ , respectively, where $r = c = 1$ . For solving the minimization problem of Eq. (7) we employ the algorithm presented in (Cuturi, 2013), which is based on the Sinkhorn iterative matrix scaling operator (Sinkhorn, 1967).
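A compact PyTorch sketch of Eqs. (5)-(7), following Algorithm 1 of (Cuturi, 2013); the frame-wise normalization matches the supplementary material, while the convergence check and whether gradients are propagated through the iterations are implementation choices we leave open here.

```python
import torch

def sinkhorn_distance(A_m, lam=0.5, n_iters=1000, tol=1e-5, p=1):
    """L_SK(A_m) for a representation A_m of shape (C, T)."""
    A = A_m / (A_m.sum(dim=0, keepdim=True) + 1.0)  # frame-wise normalization
    M = torch.cdist(A.t(), A.t(), p=p)              # cost matrix, Eq. (6)
    T = M.shape[0]
    r = c = torch.ones(T, device=M.device)          # marginals, r = c = 1
    K = torch.exp(-lam * M)                         # entropic kernel
    u = torch.ones_like(r)
    for _ in range(n_iters):                        # Sinkhorn matrix scaling
        u_new = r / (K @ (c / (K.t() @ u)))
        if torch.max(torch.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    v = c / (K.t() @ u)
    P = torch.diag(u) @ K @ torch.diag(v)           # transportation plan, Eq. (7)
    return torch.sum(P * M)                         # <P, psi(A_m)>, Eq. (5)
```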
### 2.2. Decoder

The decoder $D\left( \cdot \right)$ takes as input the representation ${\mathbf{A}}_{\mathrm{v}}$ and yields an approximation of the clean singing voice signal ${\mathbf{x}}_{\mathrm{v}}$ , denoted by ${\widehat{\mathbf{x}}}_{\mathrm{v}} \in {\mathbb{R}}^{N}$ . Specifically, $D\left( \cdot \right)$ models the clean singing voice as a sum of $C$ modulated sinusoidal components that overlap in ${\mathbb{R}}^{N}$ . The components are computed using a 1D transposed convolution with stride $S$ and another set of $C$ kernels, ${\mathbf{w}}_{c} \in {\mathbb{R}}^{L}$ , as
$$
{\widehat{\mathrm{x}}}_{\mathrm{v}\left\lbrack {{St} + l}\right\rbrack } = \eta + \mathop{\sum }\limits_{{c = 0}}^{{C - 1}}{\mathrm{\;A}}_{{\mathrm{v}}_{\left\lbrack c, t\right\rbrack }}{\mathrm{w}}_{c\left\lbrack l\right\rbrack }\text{, where} \tag{8}
$$

$$
\eta = \left\{ {\begin{array}{ll} 0, & \text{ if }t = 0 \\ {\widehat{\mathrm{x}}}_{\mathrm{v}\left\lbrack {S\left( {t - 1}\right) + l}\right\rbrack }, & \text{ otherwise } \end{array}.}\right. \tag{9}
$$

As can be seen from Eq. (9), $\eta$ is a past sample contained in ${\widehat{\mathbf{x}}}_{\mathrm{v}}$ that is used for the overlap-add process. Regarding the kernels ${\mathbf{w}}_{c}$ of the decoder, in (Mimilakis et al., 2020) their re-parameterization is proposed as
$$
{\mathrm{w}}_{c\left\lbrack l\right\rbrack } = \cos \left( {{2\pi }{f}_{c}^{2}l + {\rho }_{c}}\right) {\mathrm{b}}_{c\left\lbrack l\right\rbrack }, \tag{10}
$$

where $\cos \left( \cdot \right)$ is the cosine function, and $l = \left\lbrack {0,\ldots , L - 1}\right\rbrack$ is the time index. The parameters that are jointly learned with the parameters of the DAE are the sampling-rate-normalized carrier frequency ${f}_{c}$ , the phase ${\rho }_{c}$ (in radians), and the modulating signal ${\mathbf{b}}_{c} \in {\mathbb{R}}^{L}$ . Direct access to natural quantities like the above significantly boosts the interpretability of the representation learning method. Additionally, the ${\mathbf{w}}_{c}$ can be sorted according to the carrier frequency ${f}_{c}$ , promoting intuitive representations.
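A short sketch of building the re-parameterized decoder kernels of Eq. (10) in PyTorch; the tensor shapes are our assumptions.

```python
import torch

def decoder_kernels(f, rho, b):
    # f: (C,) normalized carrier frequencies, rho: (C,) phases,
    # b: (C, L) learned modulating signals -- all trainable tensors.
    L = b.shape[1]
    l = torch.arange(L, dtype=b.dtype)       # time index within each kernel
    carrier = torch.cos(2 * torch.pi * f[:, None] ** 2 * l + rho[:, None])
    return carrier * b                       # Eq. (10): amplitude-modulated cosines
```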
After the reconstruction of ${\widehat{\mathbf{x}}}_{\mathrm{v}}$ , the negative signal-to-noise ratio (neg-SNR) (Kavalerov et al., 2019) is computed as

$$
{\mathcal{L}}_{\text{neg-SNR }}\left( {{\mathbf{x}}_{\mathrm{v}},{\widehat{\mathbf{x}}}_{\mathrm{v}}}\right) = - {10}{\log }_{10}\left( \frac{{\begin{Vmatrix}{\mathbf{x}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}{{\begin{Vmatrix}{\mathbf{x}}_{\mathrm{v}} - {\widehat{\mathbf{x}}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}\right) , \tag{11}
$$

where $\parallel \cdot {\parallel }_{2}$ is the ${\ell }_{2}$ vector norm, and the negative sign is used to cast the logarithmic SNR as a minimization objective. Then, the overall minimization objective for $E\left( \cdot \right)$ and $D\left( \cdot \right)$ is computed using ${\mathcal{L}}_{\mathrm{{TV}}}$ as
$$
{\mathcal{L}}_{A} = {\mathcal{L}}_{\text{neg-SNR }} + \omega {\mathcal{L}}_{\mathrm{{TV}}}, \tag{12}
$$

or using ${\mathcal{L}}_{\mathrm{{SK}}}$ as

$$
{\mathcal{L}}_{B} = {\mathcal{L}}_{\text{neg-SNR }} + \omega {\mathcal{L}}_{\mathrm{{SK}}}, \tag{13}
$$

where $\omega$ is a scalar that weights the impact of the representation objective (either ${\mathcal{L}}_{\mathrm{{TV}}}$ or ${\mathcal{L}}_{\mathrm{{SK}}}$ ) in the learning signal for $E\left( \cdot \right)$ .
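Putting Eqs. (11)-(13) together, a sketch of the two training objectives; `tv_loss` and `sinkhorn_distance` are the sketches given earlier, the free variables (`x_v`, `x_v_hat`, `A_m`, `omega`, `lam`) are placeholders, and the small `eps` term is our addition for numerical safety.

```python
import torch

def neg_snr(x_v, x_v_hat, eps=1e-24):
    """Negative SNR of Eq. (11), cast as a minimization objective."""
    num = torch.sum(x_v ** 2)
    den = torch.sum((x_v - x_v_hat) ** 2) + eps
    return -10.0 * torch.log10(num / den)

# Eq. (12) and Eq. (13), respectively:
loss_A = neg_snr(x_v, x_v_hat) + omega * tv_loss(A_m)
loss_B = neg_snr(x_v, x_v_hat) + omega * sinkhorn_distance(A_m, lam=lam)
```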
## 3. Experimental Procedure

### 3.1. Dataset

For training and testing the representation learning method we use the freely available MUSDB18 dataset (Rafii et al., 2017). The dataset consists of 150 two-channel, professionally produced multi-tracks, i.e., the stereophonic signals of bass, drums, singing voice, and other music instruments that comprise a music mixture. Every signal is sampled at ${44100}\mathrm{\;{Hz}}$ . The multi-tracks are split into training (100 multi-tracks) and testing (50 multi-tracks) subsets.
### 3.2. Training

During training we sample a set of four multi-tracks, from which we use the vocals and the other music instrument sources, the latter collectively forming the accompaniment source. The accompaniment source is computed by adding the bass, drums, and other music instrument sources. Then, each sampled multi-track is down-mixed to a single channel and partitioned into overlapping segments of $N = {44100}$ samples, with an overlap of 22050 samples. We randomly shuffle the segments of each source and corrupt the singing voice signal using the shuffled segments of the accompaniment source. For the corruption by additive Gaussian noise, the standard deviation of the noise is set to ${1e} - 4$ .

For optimizing the parameters of the representation learning method, with respect to the minimization of Eq. (12) or Eq. (13), we use the Adam algorithm (Kingma & Ba, 2015), with a batch of 8 segments and a learning rate of ${1e} - 4$ . To compute the Sinkhorn distance(s), we average, within the batch, all the cost matrices $\mathbf{M}$ computed using Eq. (6) from each ${\mathbf{A}}_{\mathrm{m}}$ contained in the batch.
### 3.3. Evaluation

For evaluating the usefulness of the representation learned by our method, we use the remaining 50 tracks. Each track is down-mixed and partitioned into non-overlapping segments of $N = {44100}$ samples (1 second length). Shuffling and random mixing are not performed at this stage; however, silent segments of the singing voice are discarded. The representation is evaluated with respect to the three following criteria: i) the reconstruction error of the proposed method when encoding and decoding the clean singing voice signal using the previously described methodology, ii) the reconstruction error of the singing voice signal separated by binary masking, and iii) the additivity of the representation. The first two criteria are objectively measured with respect to the clean singing voice signal ${\mathbf{x}}_{\mathrm{v}}$ using the scale-invariant signal-to-distortion ratio (SI-SDR) (Roux et al., 2019). Details regarding the computation of SI-SDR and the separation by binary masking are given in the supplementary material. Binary masking is used because it is an indicator of how disjoint (i.e. non-overlapping) two sources are, given a representation. We assess the additivity of the sources by computing the measure
$$
\mathcal{A}\left( {{\mathbf{x}}_{\mathrm{m}},{\mathbf{x}}_{\mathrm{v}},{\mathbf{x}}_{\mathrm{{ac}}}}\right) = 1 - \frac{{\begin{Vmatrix}E\left( {\mathbf{x}}_{\mathrm{m}}\right) - E\left( {\mathbf{x}}_{\mathrm{v}}\right) - E\left( {\mathbf{x}}_{\mathrm{{ac}}}\right) \end{Vmatrix}}_{1}}{{\begin{Vmatrix}E\left( {\mathbf{x}}_{\mathrm{m}}\right) \end{Vmatrix}}_{1} + \varepsilon }, \tag{14}
$$

where $\parallel \cdot {\parallel }_{1}$ is the ${L}_{1}$ matrix norm, $\varepsilon = {1e} - {24}$ is a small term for ensuring numerical stability, and ${\mathbf{x}}_{\mathrm{{ac}}}$ is the time-domain signal of the accompaniment music source, computed by mixing the multi-tracks available in the testing subset. High values of $\mathcal{A}\left( \cdot \right)$ indicate that the representation of the mixture signal consists of non-negative and additive sources (i.e. higher $\mathcal{A}\left( \cdot \right)$ is better). The attribute of additivity is important for the computation of optimal separation masks (Liutkus & Badeau, 2015), and in the unsupervised exploitation of music sources' structure (Smaragdis et al., 2006; Huang et al., 2012).
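A direct transcription of Eq. (14), with the trained encoder passed in as a callable:

```python
import torch

def additivity(E, x_m, x_v, x_ac, eps=1e-24):
    """Additivity measure of Eq. (14); higher is better."""
    num = torch.sum(torch.abs(E(x_m) - E(x_v) - E(x_ac)))  # L1 matrix norm
    den = torch.sum(torch.abs(E(x_m))) + eps
    return 1.0 - num / den
```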
## 4. Results & Discussion

Table 1 contains the average and standard deviation values of the additivity measure $\mathcal{A}\left( \cdot \right)$ , the SI-SDR for the reconstruction and the separation objectives in $\mathrm{{dB}}$ , and the values of the hyper-parameters $\omega$ and $\lambda$ . The results in Table 1 are discussed according to the SI-SDR value (higher is better), because SI-SDR is the reconstruction objective.

Table 1. Results from objectively evaluating the learned representations. Boldfaced values denote best obtained performance.
<table><tr><td>Objective</td><td>$\omega$</td><td>$\lambda$</td><td>SI-SDR (dB)</td><td>SI-SDR-BM (dB)</td><td>$\mathcal{A}\left( \cdot \right)$</td></tr><tr><td rowspan="5">${\mathcal{L}}_{A}$</td><td>0.5</td><td>N/A</td><td>31.49 (±2.98)</td><td>4.43 (±4.98)</td><td>0.76 (±0.10)</td></tr><tr><td>1.0</td><td>N/A</td><td>31.39 (±3.16)</td><td>4.66 (±4.92)</td><td>0.76 (±0.10)</td></tr><tr><td>1.5</td><td>N/A</td><td>31.01 (±3.13)</td><td>4.97 (±4.93)</td><td>0.75 (±0.10)</td></tr><tr><td>2.0</td><td>N/A</td><td>30.96 (±2.98)</td><td>4.65 (±4.90)</td><td>0.76 (±0.10)</td></tr><tr><td>4.0</td><td>N/A</td><td>31.40 (±2.83)</td><td>5.06 (±4.97)</td><td>0.76 (±0.10)</td></tr><tr><td rowspan="5">${\mathcal{L}}_{B}$</td><td>1.0</td><td>0.1</td><td>31.28 (±2.98)</td><td>5.40 (±5.31)</td><td>0.76 (±0.09)</td></tr><tr><td>1.0</td><td>0.5</td><td>31.61 (±3.38)</td><td>5.63 (±5.29)</td><td>0.77 (±0.09)</td></tr><tr><td>1.0</td><td>1.0</td><td>31.29 (±3.25)</td><td>4.33 (±5.28)</td><td>0.86 (±0.08)</td></tr><tr><td>1.0</td><td>1.5</td><td>29.98 (±3.48)</td><td>0.06 (±6.43)</td><td>0.89 (±0.08)</td></tr><tr><td>1.0</td><td>2.0</td><td>31.13 (±3.66)</td><td>-0.02 (±6.44)</td><td>0.89 (±0.08)</td></tr></table>
There are two observable trends in Table 1. The first trend is that when using ${\mathcal{L}}_{B}$ , small values of $\lambda$ marginally improve the SI-SDR, compared to the best SI-SDR when using ${\mathcal{L}}_{A}$ (i.e. $\omega = {0.5}$ and SI-SDR = 31.49). Specifically, for $\lambda = {0.5}$ and when using ${\mathcal{L}}_{B}$ , we obtain an improvement of ${0.12}\mathrm{\;{dB}}$ and ${1.20}\mathrm{\;{dB}}$ for SI-SDR and SI-SDR-BM, respectively, compared to the case of using ${\mathcal{L}}_{A}$ and $\omega = {0.5}$ . Additionally, with the same $\lambda = {0.5}$ for ${\mathcal{L}}_{B}$ , we obtain an improvement of ${0.57}\mathrm{\;{dB}}$ SI-SDR-BM, compared to the best SI-SDR-BM with ${\mathcal{L}}_{A}$ (i.e. with $\omega = {4.0}$ ). This trend shows that when using Sinkhorn distances as an objective (i.e. ${\mathcal{L}}_{B}$ ) with a small entropic regularization strength (i.e. small values of $\lambda$ ), there is a marginal improvement of the reconstruction performance for the singing voice (measured with SI-SDR), and the learned representations also yield better results for singing voice separation by masking (measured with SI-SDR-BM).
Figure 2. Learned representations for the mixture (left), the singing voice (middle), and the accompaniment (right) signals using the $E\left( \cdot \right)$ optimized with ${\mathcal{L}}_{B}$ for ${\mathcal{L}}_{\mathrm{{SK}}} : \omega = {4.0},\lambda = {1.5}$ .
The second trend observed in Table 1 is that when using ${\mathcal{L}}_{B}$ and $\lambda > 1$ , specifically for $\lambda \in \left\lbrack {{1.5},{2.0}}\right\rbrack$ , the SI-SDR for binary masking drops by more than $5\mathrm{\;{dB}}$ compared to $\lambda = {0.5}$ . This indicates that the separation by binary masking fails, suggesting that the singing voice and accompaniment completely overlap in the representation of the mixture ${\mathbf{A}}_{\mathrm{m}}$ . That is expected, since entropy expresses the uncertainty about the representation of the mixture signal: during training, all the elements in the feature space of the representation become equally probable to be active when the mixture signal is encoded. However, that uncertainty comes with an observed benefit, namely that the sources become additive in the learned representation.

To further investigate the effect of entropic regularization with respect to the additivity metric, we keep $\lambda = {1.5}$ , the best-performing value for additivity in Table 1, and examine the impact of the weight $\omega$ on ${\mathcal{L}}_{B}$ . The corresponding results, compared to the STFT, which is the most commonly employed representation for music source separation, are given in Table 2. The results in Table 2 suggest that by increasing the weight $\omega$ , which controls the strength of the representation objective in the learning signal, the learned mixture representations, for $\omega = {4.0}$ , consist of two almost additive representations, i.e., the singing voice and the accompaniment representations. Furthermore, nearly all representations computed using the Sinkhorn distances with entropic regularization outperform the STFT with respect to the objective measure of additivity, in an unsupervised fashion. To qualitatively assess the representations for the extreme case observed in Table 2, Fig. 2 illustrates learned representations for the mixture, the singing voice, and the accompaniment signal. The signals were acquired from a single multi-track segment contained in the testing subset of MUSDB18, and the representations are computed using the encoder optimized with the ${\mathcal{L}}_{B}$ objective. As can be clearly observed from Fig. 2, entropic regularization stronger than 0.5 enables the learning of representations that, for particular sources such as the accompaniment, exhibit distinct structure, i.e., vertical activity (activity with respect to $C$ ). Furthermore, the representation of the singing voice is characterized by horizontal activity, i.e., a few components $C$ are active and vary smoothly in time. We believe that the distinct structure of the music sources observed in Fig. 2 could be useful for unsupervised separation and/or enhancement methods such as the deep audio prior (Michelashvili & Wolf, 2019) and the harmonic convolution(s) model (Zhang et al., 2020).
Table 2. Objective evaluation of the additivity of the learned representations.

<table><tr><td>Objective</td><td>$\omega$</td><td>$\lambda$</td><td>$\mathcal{A}\left( \cdot \right)$</td></tr><tr><td rowspan="4">${\mathcal{L}}_{B}$</td><td>1.0</td><td>1.5</td><td>0.89 (±0.08)</td></tr><tr><td>1.5</td><td>1.5</td><td>0.90 (±0.07)</td></tr><tr><td>2.0</td><td>1.5</td><td>0.92 (±0.07)</td></tr><tr><td>4.0</td><td>1.5</td><td>0.93 (±0.06)</td></tr><tr><td>STFT</td><td>N/A</td><td>N/A</td><td>0.86 (±0.06)</td></tr></table>
## 5. Conclusions

In this work we proposed the usage of entropy-regularized Sinkhorn distances as a cost objective for the unsupervised learning of interpretable music signal representations. We experimentally showed that Sinkhorn distances can be useful for the problem of learning representations for singing voice separation. In particular, the learned representations allow the separation of singing voice by masking for small values of entropic regularization, improving upon a previously proposed unsupervised approach, while higher values of entropic regularization lead to learned source representations that are distinctly structured and almost additive; attributes that are useful in music source separation. The source code is based on the PyTorch framework (Paszke et al., 2019) and is freely available online ${}^{1}$ .
## Acknowledgements

Stylianos I. Mimilakis is supported in part by the German Research Foundation (AB 675/2-1, MU 2686/11-1). K. Drossos would like to acknowledge CSC Finland for computational resources.
---

${}^{1}$ https://github.com/Js-Mim/rl_singing_voice

---
## Supplementary Material

## Computation of Sinkhorn Distances

The entropy for the regularization of Eq. (7) is computed as
$$
H\left( \mathbf{P}\right) = - \mathop{\sum }\limits_{{t,{t}^{\prime } = 0}}^{{T - 1}}{\mathrm{P}}_{\left\lbrack t,{t}^{\prime }\right\rbrack }\log \left( {\mathrm{P}}_{\left\lbrack t,{t}^{\prime }\right\rbrack }\right)
$$

For solving Eq. (7) with the Sinkhorn iterative matrix scaling algorithm and entropic regularization, we used Algorithm 1 presented in (Cuturi, 2013). We set the total number of iterations to ${1e3}$ per batch, and the termination threshold to ${1e} - 5$ .
The normalization of ${\mathbf{A}}_{\mathrm{m}}$ prior to the computation of the Sinkhorn distances is based on:

$$
{\mathrm{A}}_{\mathrm{m}\left\lbrack {c, t}\right\rbrack }^{ * } = \frac{{\mathrm{A}}_{\mathrm{m}\left\lbrack {c, t}\right\rbrack }}{\mathop{\sum }\limits_{c}\left( {{\mathrm{\;A}}_{\mathrm{m}\left\lbrack {c, t}\right\rbrack } + \frac{1}{C}}\right) }
$$
## Hyper-parameter Selection

## Convolutional Networks

For training, the total number of iterations throughout the whole training subset (i.e. epochs) is set to 10. The selection is based on the experimental procedure presented in (Mimilakis et al., 2020), which suggests that no further improvements towards the minimization of the overall cost function take place after the 10-th iteration.

The hyper-parameters for the convolution kernels are based on the best performing combination previously presented in (Mimilakis et al., 2020), and are: number of kernels for the convolutional encoder ${C}^{\prime } = C = {800}$ , stride size used in the first convolutional operator and the decoder $S = {256}$ , length of each kernel in the first convolution and in the decoder $L = {2048}$ , kernel length of the second convolution ${L}^{\prime } = 5$ , and the dilation factor of the second convolution $\phi = {10}$ .
## Audio Signals & Transforms

In the evaluation, and for the comparison with the STFT, the STFT uses a window size of 2048 samples, an analysis step size of 256 samples, and the Hamming windowing function. The window size and the step size were selected as the closest match to the hyper-parameters of the convolutions (stride and kernel length).

The removal of silent segments is based on the following:
$$
{l}_{{\mathbf{x}}_{\mathrm{v}}} = {10}{\log }_{10}\left( {{\begin{Vmatrix}{\mathbf{x}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2} + \epsilon }\right), \quad \left\{ \begin{array}{ll} {\mathbf{x}}_{\mathrm{v}} : \text{ active,} & \text{ if }{l}_{{\mathbf{x}}_{\mathrm{v}}} \geq - {10} \\ {\mathbf{x}}_{\mathrm{v}} : \text{ silent,} & \text{ otherwise. } \end{array}\right.
$$
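A one-function sketch of this energy gate; the value of $\epsilon$ here is an assumption (the paper uses ${1e} - {24}$ for numerical stability elsewhere).

```python
import torch

def is_active(x_v, eps=1e-24, thr_db=-10.0):
    """Energy gate used to drop silent singing-voice segments."""
    level = 10.0 * torch.log10(torch.sum(x_v ** 2) + eps)
    return level >= thr_db
```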
## Initialization

The kernels of the first convolutions are randomly initialized with values drawn from a uniform distribution. The bounds of the uniform distribution are $\left( {-\sqrt{\frac{3}{C}},\sqrt{\frac{3}{C}}}\right)$ , following the initialization strategy presented in (He et al., 2015). For the decoder, the phase values ${\rho }_{c}$ are initialized to zero, and all the elements of the modulating vectors ${\mathbf{b}}_{c}$ are initialized to $\frac{1}{C + L}$ . The initialization of the normalized frequencies ${f}_{c}$ is inspired by (Ravanelli & Bengio, 2018) and is performed by first computing the center frequencies of the Mel scale ${f}_{\text{Mel }}$ between ${f}_{\mathrm{{Hz}}} \in \left\lbrack {{30},\ldots ,{22050}}\right\rbrack \mathrm{{Hz}}$ , over $C = {800}$ steps, using
$$
{f}_{\mathrm{{Mel}}} = {2595}{\log }_{10}\left( {1 + \frac{{f}_{\mathrm{{Hz}}}}{700}}\right)
$$

and then the initial ${f}_{c}$ value is computed by the inverse mapping, normalized by the sampling rate, as

$$
{f}_{c} = \frac{{700}\left( {{10}^{{f}_{\mathrm{{Mel}}}/{2595}} - 1}\right) }{44100}.
$$
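A minimal sketch of this initialization; the use of `torch.linspace` for the Mel-spaced steps is our assumption.

```python
import math
import torch

def init_carrier_frequencies(C=800, f_lo=30.0, f_hi=22050.0, sr=44100.0):
    """Mel-spaced initialization of the normalized carrier frequencies f_c."""
    mel_lo = 2595.0 * math.log10(1.0 + f_lo / 700.0)
    mel_hi = 2595.0 * math.log10(1.0 + f_hi / 700.0)
    mel = torch.linspace(mel_lo, mel_hi, C)       # C = 800 Mel-spaced steps
    hz = 700.0 * (10.0 ** (mel / 2595.0) - 1.0)   # inverse Mel mapping
    return hz / sr                                # sampling-rate-normalized f_c
```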
## Separation by Binary Masking

We conduct singing voice separation by masking because masking is an important operation in audio and music source separation, and has been extensively used by DL-based approaches and in representation learning (Tzinis et al., 2020). The focus is on informed separation, i.e., masks computed by an oracle method using the information for all the mixture's sources from the dataset. This is done in order to estimate the least-upper-bound performance of singing voice separation for a learned representation, and it alleviates the biases from prior information that music source separation approaches typically carry. Examples of such biases include the source's structure and existing neural architectures engineered for the STFT. Finally, binary masking is used because it is an indicator of how disjoint (i.e. how little overlapping) two sources are, given a representation.

The binary mask is computed by encoding three signals: the mixture ${\mathbf{x}}_{\mathrm{m}}$ , the accompaniment source ${\mathbf{x}}_{\mathrm{{ac}}}$ , and the singing voice signal ${\mathbf{x}}_{\mathrm{v}}$ . Using the trained encoder $E\left( \cdot \right)$ , the representations ${\mathbf{A}}_{\mathrm{m}}$ , ${\mathbf{A}}_{\mathrm{{ac}}}$ , and ${\mathbf{A}}_{\mathrm{v}}$ are computed for ${\mathbf{x}}_{\mathrm{m}}$ , ${\mathbf{x}}_{\mathrm{{ac}}}$ , and ${\mathbf{x}}_{\mathrm{v}}$ , respectively. The mask ${\mathbf{G}}_{\mathrm{v}} \in {\mathbb{R}}^{C \times T}$ is computed as
$$
{\mathbf{G}}_{\mathrm{v}} = g\left( {{\mathbf{A}}_{\mathrm{v}} \oslash {\mathbf{A}}_{\mathrm{{ac}}}}\right) ,
$$

where " $\oslash$ " is the element-wise division and $g\left( \cdot \right)$ is defined as

$$
g\left( \mathrm{x}\right) = \left\{ {\begin{array}{ll} 1, & \text{ if }\mathrm{x} \geq {0.5} \\ 0, & \text{ otherwise } \end{array}.}\right.
$$
The approximation of the singing voice time-domain signal ${\widehat{\mathbf{x}}}_{\mathrm{v}}$ using binary masking is computed as

$$
{\widehat{\mathbf{x}}}_{\mathrm{v}} = D\left( {{\mathbf{A}}_{\mathrm{m}} \odot {\mathbf{G}}_{\mathrm{v}}}\right) ,
$$

where " $\odot$ " is the element-wise (Hadamard) product.
## Computation of SI-SDR

The scale-invariant signal-to-distortion ratio in $\mathrm{{dB}}$ is computed for each segment as

$$
\text{SI-SDR}\left( {{\mathbf{x}}_{\mathrm{v}},{\widehat{\mathbf{x}}}_{\mathrm{v}}}\right) = {10}{\log }_{10}\left( \frac{{\begin{Vmatrix}\alpha {\mathbf{x}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}{{\begin{Vmatrix}\alpha {\mathbf{x}}_{\mathrm{v}} - {\widehat{\mathbf{x}}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}\right) \text{, for}
$$

$$
\alpha = \frac{{\widehat{\mathbf{x}}}_{\mathrm{v}}^{T}{\mathbf{x}}_{\mathrm{v}}}{{\begin{Vmatrix}{\mathbf{x}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}. \tag{15}
$$
Higher SI-SDR values indicate better reconstruction or separation performance.
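A direct transcription of Eq. (15) for 1D tensors; the `eps` guard is our addition.

```python
import torch

def si_sdr(x, x_hat, eps=1e-24):
    """Scale-invariant SDR in dB, Eq. (15); higher is better."""
    alpha = torch.dot(x_hat, x) / (torch.sum(x ** 2) + eps)
    num = torch.sum((alpha * x) ** 2)
    den = torch.sum((alpha * x - x_hat) ** 2) + eps
    return 10.0 * torch.log10(num / den)
```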
## Additional Results

In Figure 3 we demonstrate additional results from the objective evaluation of the learned representations using ${\mathcal{L}}_{B}$ , which contains the Sinkhorn distances. In particular, Figure 3 contains error plots for a greater range of entropic regularization strengths $\lambda \in \left\lbrack {{0.1},{0.5},{1.0},{1.3},{1.5},{2.0},{5.0},{10.0}}\right\rbrack$ , for $\omega = {1.0}$ . In addition, we include results for $p = 1$ and $p = 2$ , where $p > 0$ is used in the computation of the cost matrix $\mathbf{M}$ of the Sinkhorn distances.
From Figure 3, two main observations are highlighted. The first observation is that computing the cost matrix $\mathbf{M}$ with $p = 2$ leads to marginally sub-optimal results, for nearly all $\lambda$ values and metrics, compared to $p = 1$ . Specifically, the reconstruction performance for $p = 1$ outperforms $p = 2$ by $1\mathrm{\;{dB}}$ on average across $\lambda$ values, and $p = 1$ outperforms $p = 2$ by ${0.6}\mathrm{\;{dB}}$ on average for the separation-by-masking performance. For the additivity metric, $p = 2$ marginally outperforms $p = 1$ by a negligible difference of $3{e}^{-3}$ . For these reasons, the main results of our work focus on $p = 1$ .

The second observation is that for $\lambda > 2$ , the separation performance dip and the additivity performance peak observed in the area of $\lambda \in \left\lbrack {{1.3},{1.5},{2.0}}\right\rbrack$ disappear, and the examined method performs similarly to the low-entropic-regularization settings according to the examined metrics. This contradicts our expectations for the effect of entropic regularization. Our only explanation for this behavior is that for values $\lambda > 2$ , the exponential function used in the computation of the Sinkhorn distances, applied initially to $\mathbf{M}$ , yields saturated values that bias the overall minimization in an unexpected way that requires closer inspection.
|
| 292 |
+
|
| 293 |
+
In a similar vein, for ${\mathcal{L}}_{A}$ , which uses the total-variation denoising cost, the full results complementing Table 1 are illustrated in Figure 4.
|
| 294 |
+
|
| 295 |
+
To justify the selection of the particular hyper-parameter $\lambda = {1.5}$ for computing ${\mathcal{L}}_{B}$ in Table 2, Figure 6 illustrates the evaluation results for the neighbouring values $\lambda \in \left\lbrack {{1.3},{2.0}}\right\rbrack$ compared to $\lambda = {1.5}$ , where similar behavior is observed. As can be seen from Figure 6, the performance of all the representations is nearly identical, with a negligible performance boost observed for $\lambda = {1.5}$ (orange line), on average across the values of $\omega$ .
|
| 296 |
+
|
| 297 |
+
[Figure 3 panels: SI-SDR (dB) and $\mathcal{A}\left( \cdot \right)$ plotted against $\lambda$ , for ${\mathcal{L}}_{B}$ with $\omega = 1$ and ${\mathcal{L}}_{\mathrm{{SK}}}$ computed with $p = 1$ and $p = 2$ .]
|
| 298 |
+
|
| 299 |
+
Figure 3. Performance evaluation of the learned representations by ${\mathcal{L}}_{B}$ that uses the Sinkhorn distances. (top-left) Reconstruction of singing voice in SI-SDR, (top-right) oracle separation performance in SI-SDR, and (bottom) additivity objective measure. Horizontal and vertical lines denote the average and the standard deviation of the performance, respectively.
|
| 300 |
+
|
| 301 |
+
[Figure 4 panels: SI-SDR (dB) and $\mathcal{A}\left( \cdot \right)$ plotted against $\omega$ , for ${\mathcal{L}}_{A}$ with ${\mathcal{L}}_{\mathrm{{TV}}}$ .]
|
| 302 |
+
|
| 303 |
+
Figure 4. ${\mathcal{L}}_{A}$ using total variation denoising $\left( {\mathcal{L}}_{\mathrm{{TV}}}\right)$ for various values of $\omega$ . Left: reconstruction of singing voice in SI-SDR; middle: oracle separation performance in SI-SDR; right: additivity objective measure.
|
| 304 |
+
|
| 305 |
+
Finally, in Figure 5 we provide additional illustrations of the representations obtained using either ${\mathcal{L}}_{A}$ or ${\mathcal{L}}_{B}$ , for a random multi-track segment. For ${\mathcal{L}}_{B}$ we focus on two extreme cases of separation and additivity performance observed in Tables 1 and 2. In particular, we illustrate representations obtained for the entropy values $\lambda = {1.5}$ and $\lambda = {0.5}$ , which resulted in the best additivity and masking performance, respectively. For comparison, we also display learned representations for ${\mathcal{L}}_{A}$ with $\omega = {4.0}$ , for which the best separation performance of ${\mathcal{L}}_{A}$ was observed in Table 1.
|
| 306 |
+
|
| 307 |
+
[Figure 5 panels; axes: Carrier Frequency (Hz) versus Time frames $T$ .]

(a) Learned representations for the mixture (left), the singing voice (middle), and the accompaniment (right) signals using the $E\left( \cdot \right)$ optimized with ${\mathcal{L}}_{A}$ for ${\mathcal{L}}_{\mathrm{{TV}}} : \omega = {4.0}$

(b) Learned representations for the mixture (left), the singing voice (middle), and the accompaniment (right) signals using the $E\left( \cdot \right)$ optimized with ${\mathcal{L}}_{B}$ for ${\mathcal{L}}_{\mathrm{{SK}}} : \omega = {1.0},\lambda = {0.5}$
|
| 308 |
+
|
| 309 |
+
(c) Learned representations for the mixture (left), the singing voice (middle), and the accompaniment (right) signals using the $E\left( \cdot \right)$
|
| 310 |
+
|
| 311 |
+
optimized with ${\mathcal{L}}_{B}$ for ${\mathcal{L}}_{\mathrm{{SK}}} : \omega = {4.0},\lambda = {1.5}$
|
| 312 |
+
|
| 313 |
+
Figure 5. An illustration of the learned representations of a single multi-track segment, using three optimized encoders $\mathcal{E}$ .
|
| 314 |
+
|
| 315 |
+
[Figure 6 panels: SI-SDR (dB) and $\mathcal{A}\left( \cdot \right)$ plotted against $\omega$ , for ${\mathcal{L}}_{B}$ with $\lambda = {1.3}$ , $\lambda = {1.5}$ , and $\lambda = {2.0}$ .]
|
| 316 |
+
|
| 317 |
+
Figure 6. Performance evaluation of the learned representations by ${\mathcal{L}}_{B}$ using three entropic regularization weights $\lambda$ . (top-left) Reconstruction of singing voice in SI-SDR, (top-right) oracle separation performance in SI-SDR, and (bottom) additivity objective measure. Horizontal and vertical lines denote the average and the standard deviation of the performance, respectively.
|
| 318 |
+
|
| 319 |
+
Observing Figure 5, it can be seen that the usage of ${\mathcal{L}}_{A}$ (employing the total-variation denoising cost) leads to smooth representations. However, the representations of the mixture and the sources qualitatively appear somewhat blurry, without distinct structure between the sources. Consequently, representations learned using ${\mathcal{L}}_{A}$ might impose difficulties for source separation methods. On the other hand, the employment of ${\mathcal{L}}_{B}$ with the Sinkhorn distances and $\lambda = {0.5}$ leads to learned representations in which, at least for the singing voice signal, a prominent structure of horizontal activity is observed. The interesting part comes when the entropic regularization weight is increased to $\lambda = {1.5}$ , where the accompaniment source is now distinguished by prominent vertical activity.
|
| 320 |
+
|
| 321 |
+
## References
|
| 322 |
+
|
| 323 |
+
Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein Generative Adversarial Networks. In Precup, D. and Teh,
|
| 324 |
+
|
| 325 |
+
Y. W. (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 214-223. PMLR, 06-11 Aug 2017.
|
| 326 |
+
|
| 327 |
+
Cuturi, M. Sinkhorn distances: Lightspeed computation of optimal transport. In Burges, C. J. C., Bottou, L., Welling, M., Ghahramani, Z., and Weinberger, K. Q. (eds.), Advances in Neural Information Processing Systems 26, pp. 2292-2300. Curran Associates, Inc., 2013.
|
| 328 |
+
|
| 329 |
+
Défossez, A., Usunier, N., Bottou, L., and Bach, F. Music Source Separation in the Waveform Domain. Technical Report 02379796v1, HAL, 2019.
|
| 330 |
+
|
| 331 |
+
Drossos, K., Mimilakis, S. I., Serdyuk, D., Schuller, G., Virtanen, T., and Bengio, Y. MaD TwinNet: Masker-Denoiser Architecture with Twin Networks for Monaural Sound Source Separation. In Proceedings of the 2018 IEEE International Joint Conference on Neural Networks (IJCNN), July 2018.
|
| 332 |
+
|
| 333 |
+
He, K., Zhang, X., Ren, S., and Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the 2015 IEEE International Conference on Computer Vision (ICCV), ICCV '15, pp. 1026-1034, 2015.
|
| 334 |
+
|
| 335 |
+
Hennequin, R., Khlif, A., Voituret, F., and Moussallam, M. Spleeter: A Fast And State-of-the Art Music Source Separation Tool With Pre-trained Models. Late-Breaking/Demo ISMIR 2019, November 2019. Deezer Research.
|
| 336 |
+
|
| 337 |
+
Huang, P., Chen, S. D., Smaragdis, P., and Hasegawa-Johnson, M. Singing-Voice Separation from Monaural Recordings Using Robust Principal Component Analysis. In Proceedings of the 37th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 57-60, 2012.
|
| 338 |
+
|
| 339 |
+
Kavalerov, I., Wisdom, S., Erdogan, H., Patton, B., Wilson, K., Roux, J. L., and Hershey, J. R. Universal Sound Separation. In 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pp. 175-179, Oct 2019. doi: 10.1109/WASPAA.2019. 8937253.
|
| 340 |
+
|
| 341 |
+
Kingma, D. P. and Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations (ICLR-15), 2015.
|
| 342 |
+
|
| 343 |
+
Liutkus, A. and Badeau, R. Generalized Wiener filtering with fractional power spectrograms. In Proceedings of the 40th International Conference on Acoustics, Speech and Signal Processing (ICASSP 2015), pp. 266-270, April 2015.
|
| 344 |
+
|
| 345 |
+
Michelashvili, M. and Wolf, L. Speech denoising by accumulating per-frequency modeling fluctuations, 2019.
|
| 346 |
+
|
| 347 |
+
Mimilakis, S. I., Drossos, K., Santos, J. F., Schuller, G., Virtanen, T., and Bengio, Y. Monaural Singing Voice Separation with Skip-Filtering Connections and Recurrent Inference of Time-Frequency Mask. In Proceedings of the 43rd International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018), 2018.
|
| 348 |
+
|
| 349 |
+
Mimilakis, S. I., Drossos, K., and Schuller, G. Unsupervised Interpretable Representation Learning for Singing Voice Separation. In Proceedings of the 27th European Signal Processing Conference (EUSIPCO 2020), 2020.
|
| 350 |
+
|
| 351 |
+
Nair, V. and Hinton, G. E. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on International Conference on Machine Learning, ICML'10, pp. 807-814, Madison, WI, USA, 2010. Omnipress. ISBN 9781605589077.
|
| 352 |
+
|
| 353 |
+
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative style, high-performance deep learning library. In Wallach, H., Larochelle, H., Beygelzimer, A., d' Alché-Buc, F., Fox, E., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 32, pp. 8024-8035. Curran Associates, Inc., 2019.
|
| 354 |
+
|
| 355 |
+
Rafii, Z., Liutkus, A., Stöter, F., Mimilakis, S. I., and Bittner, R. The MUSDB18 Corpus for Music Separation, Dec 2017. URL https://doi.org/10.5281/zenodo.1117372.
|
| 356 |
+
|
| 357 |
+
Rafii, Z., Liutkus, A., Stöter, F. R., Mimilakis, S. I., FitzGerald, D., and Pardo, B. An Overview of Lead and Accompaniment Separation in Music. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26(8):1307- 1335, Aug 2018.
|
| 358 |
+
|
| 359 |
+
Ravanelli, M. and Bengio, Y. Interpretable Convolutional Filters with SincNet. In International Conference on Neural Information Processing Systems: Workshop on Interpretability and Robustness for Audio, Speech and Language, 2018.
|
| 360 |
+
|
| 361 |
+
Roux, J. L., Wisdom, S., Erdogan, H., and Hershey, J. R. SDR - Half-baked or Well Done? In Proceedings of the 44th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2019), pp. 626-630, May 2019.
|
| 362 |
+
|
| 363 |
+
Rudin, L. I., Osher, S., and Fatemi, E. Nonlinear Total Variation Based Noise Removal Algorithms. In Proceedings of
|
| 364 |
+
|
| 365 |
+
the Eleventh Annual International Conference of the Center for Nonlinear Studies on Experimental Mathematics: Computational Issues in Nonlinear Science, pp. 259-268, 1992.
|
| 366 |
+
|
| 367 |
+
Samuel, D., Ganeshan, A., and Naradowsky, J. Meta-learning Extractors for Music Source Separation. In Proceedings of the 45th International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020), May 2020.
|
| 368 |
+
|
| 369 |
+
Sinkhorn, R. Diagonal Equivalence to Matrices with Prescribed Row and Column Sums. The American Mathematical Monthly, 74(4):402-405, 1967.
|
| 370 |
+
|
| 371 |
+
Smaragdis, P. and Venkataramani, S. A Neural Network Alternative to Non-Negative Audio Models. In Proceedings of the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2017), pp. 86-90, March 2017.
|
| 372 |
+
|
| 373 |
+
Smaragdis, P., Raj, B., and Shashanka, M. A Probabilistic Latent Variable Model for Acoustic Modeling. In Proceedings of the International Conference on Neural Information Processing Systems: Workshop on Advances in Models for Acoustic Processing, 2006.
|
| 374 |
+
|
| 375 |
+
Stöter, F.-R., Uhlich, S., Liutkus, A., and Mitsufuji, Y. Open-Unmix - A Reference Implementation for Music Source Separation. Journal of Open Source Software, 2019.
|
| 376 |
+
|
| 377 |
+
Tzinis, E., Venkataramani, S., Wang, Z., Subakan, C., and Smaragdis, P. Two-Step Sound Source Separation: Training on Learned Latent Targets. In Proceedings of the 45th International Conference on Acoustics, Speech and Signal Processing (ICASSP 2020), May 2020.
|
| 378 |
+
|
| 379 |
+
Vincent, P. A Connection between Score Matching and Denoising Autoencoders. Neural Computation, 23(7): 1661-1674, July 2011. ISSN 0899-7667.
|
| 380 |
+
|
| 381 |
+
Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion. Journal of Machine Learning Research, 11:3371-3408, 2010.
|
| 382 |
+
|
| 383 |
+
Zhang, Z., Wang, Y., Gan, C., Wu, J., Tenenbaum, J. B., Torralba, A., and Freeman, W. T. Deep audio priors emerge from harmonic convolutional networks. In Proceedings of the 8th International Conference on Learning Representations (ICLR), 2020.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/aaI4jKANEH4/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,364 @@
| 1 |
+
§ REVISITING REPRESENTATION LEARNING FOR SINGING VOICE SEPARATION WITH SINKHORN DISTANCES
|
| 2 |
+
|
| 3 |
+
Stylianos I. Mimilakis ${}^{ * }{}^{1}$ Konstantinos Drossos ${}^{ * }{}^{2}$ Gerald Schuller ${}^{3}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
In this work we present a method for unsupervised learning of audio representations, focused on the task of singing voice separation. We build upon a previously proposed method for learning representations of time-domain music signals with a re-parameterized denoising autoencoder, extending it by using the family of Sinkhorn distances with entropic regularization. We evaluate our method on the freely available MUSDB18 dataset of professionally produced music recordings, and our results show that Sinkhorn distances with a small strength of entropic regularization marginally improve the performance of informed singing voice separation. By increasing the strength of the entropic regularization, the learned representations of the mixture signal consist of almost perfectly additive and distinctly structured sources.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Music source separation aims at the estimation of the individual music sources of an observed mixture signal. To that aim, supervised deep learning (DL) based approaches are shown to yield remarkable results (Hennequin et al., 2019; Défossez et al., 2019; Stöter et al., 2019; Samuel et al., 2020). Although different types of sources can be estimated from a music mixture, a specific task of music source separation that has received a lot of attention in relevant research communities is the separation of the singing voice, or singing voice source separation (Rafii et al., 2018).
|
| 12 |
+
|
| 13 |
+
State-of-the-art approaches in DL-based music and singing voice source separation have considered using both pre-computed and learned signal representations. The approaches that utilize pre-computed signal representations have extensively employed the short-time Fourier transform (STFT) (Hennequin et al., 2019; Stöter et al., 2019; Drossos et al., 2018; Mimilakis et al., 2018). On the other hand, learned representations are commonly used in end-to-end models and are jointly learned with the parameters of the rest of the model.
|
| 14 |
+
|
| 15 |
+
In both of the previous approaches, the learning of the representations is based on objectives that assess the reconstruction of the signals of the target sources (Défossez et al., 2019; Samuel et al., 2020). In many cases, the approaches based on end-to-end models do not yield better performance than approaches using representations computed with the STFT (Défossez et al., 2019; Samuel et al., 2020; Tzinis et al., 2020). Furthermore, the learned representations obtained by end-to-end models are not easily nor intuitively interpreted, compared to the typical pre-computed STFT representation. In order to bridge the gap of separation performance and interpretability between end-to-end-based and STFT-based approaches, recent studies focus on representation learning (Tzinis et al., 2020; Mimilakis et al., 2020).
|
| 16 |
+
|
| 17 |
+
In (Tzinis et al., 2020), a sound source separation method focused on representation learning is presented. An encoder gets as input the signals of the sources and their corresponding mixture, and outputs latent representations of the signals of each source and the mixture. Then, using these latent representations, the method calculates and applies source-dependent masks to the latent representation of the mixture. The result of the application of the masks is given as input to the decoder, which outputs an estimation of the signal of each source. The encoder and the decoder are jointly optimized to minimize the reconstruction error between the ground truth and estimated signals of each source. However, using reconstruction objectives for the separation of only specific sources could severely restrict the representation learning capabilities of encoder-decoder methods (Vincent, 2011). In (Mimilakis et al., 2020) it is proposed to learn representations for singing voice separation in an unsupervised way using a re-parameterized denoising autoencoder (DAE) (Vincent et al., 2010). The re-parameterization replaces the decoding basis functions by amplitude-modulated cosine functions whose parameters are learned with the rest of the DAE. This results in an interpretable representation of the singing voice signal that conveys amplitude information for modulated sinusoidal bases. The re-parameterization is similar to Sinc-Networks (Ravanelli & Bengio, 2018) that use sinc functions for encoding speech signals. The parameters of the denoising autoencoder employed in (Mimilakis et al., 2020) are optimized using two objectives. The first objective is to minimize the reconstruction error between the clean and the reconstructed singing voice signal, and the second objective enforces the smoothness of the mixture signal's representation.
|
| 18 |
+
|
| 19 |
+
*Equal contribution ${}^{1}$ Fraunhofer-IDMT, Ilmenau, Germany ${}^{2}$ Audio Research Group, Tampere University, Tampere, Finland ${}^{3}$ Applied Media Systems Group, Technical University of Ilmenau, Ilmenau, Germany. Correspondence to: Stylianos I. Mimilakis <mis@idmt.fraunhofer.de>.
|
| 20 |
+
|
| 21 |
+
Proceedings of the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria, PMLR 108, 2020. Copyright 2020 by the author(s).
|
| 22 |
+
|
| 23 |
+
In this work we focus on unsupervised representation learning and we aim at learning representations of music signals that can offer enhanced interpretability combined with improved source separation performance. We build on the work presented in (Mimilakis et al., 2020) and we extend it by using the Sinkhorn distances with entropic regularization (Cuturi, 2013) as a representation specific objective. Our contribution is to experimentally show that Sinkhorn distances with entropic regularization can assist in learning representations in which the sources can be efficiently separated and the representations of sources are distinctly structured and additive.
|
| 24 |
+
|
| 25 |
+
§ NOTATION
|
| 26 |
+
|
| 27 |
+
Bold lowercase letters, e.g., "x", denote vectors and bold uppercase letters, e.g. "X", denote matrices. The $l$ -th element of a vector is denoted as ${\mathbf{x}}_{\left\lbrack l\right\rbrack }$ . Similarly, accessing elements from matrices is denoted as ${\mathrm{X}}_{\left\lbrack l,{l}^{\prime }\right\rbrack }$ .
|
| 28 |
+
|
| 29 |
+
§ 2. PROPOSED METHOD
|
| 30 |
+
|
| 31 |
+
Our method follows the one presented in (Mimilakis et al., 2020) and employs an encoder $E\left( \cdot \right)$ and a decoder $D\left( \cdot \right)$ . The input to our method is a music signal, $\mathbf{x} \in {\mathbb{R}}^{N}$ , with $N$ time-domain samples. The output of the method is the learned non-negative representation of $\mathbf{x}$ , $\mathbf{A} \in {\mathbb{R}}_{ > 0}^{C \times T}$ , with $T$ templates of $C$ features. The $C$ features can be viewed as analogous to the frequency bins and the $T$ templates as analogous to the time-frames in a time-frequency representation. $\mathbf{A}$ is computed by the encoder $E\left( \cdot \right)$ , and is interpreted as the magnitude information for a real-valued, sinusoidal-based model, employed by the decoder $D\left( \cdot \right)$ .
|
| 32 |
+
|
| 33 |
+
To optimize $E\left( \cdot \right)$ , we employ the decoder $D\left( \cdot \right)$ and a dataset of monaural (single channel) recordings of singing voice, ${\mathbf{x}}_{\mathrm{v}} \in {\mathbb{R}}^{N}$ , and accompanying musical instruments. Using ${\mathbf{x}}_{\mathrm{v}}$ we create two synthetic signals. The first synthetic signal, ${\widetilde{\mathbf{x}}}_{\mathrm{m}} \in {\mathbb{R}}^{N}$ , is the result of an additive corruption process, where the accompanying musical instruments such as drums, guitars, synthesizers, and bass (i.e. a generic multi-modal distribution-based noise) are added to ${\mathbf{x}}_{\mathrm{v}}$ . The second synthetic signal, ${\widetilde{\mathbf{x}}}_{\mathrm{v}} \in {\mathbb{R}}^{N}$ , is also the result of a corruption process, where Gaussian noise is added to ${\mathbf{x}}_{\mathrm{v}}$ , independently of the amplitude of ${\mathbf{x}}_{\mathrm{v}}$ . During the optimization process (i.e. training), the encoder $E\left( \cdot \right)$ computes two non-negative representations ${\mathbf{A}}_{\mathrm{m}},{\mathbf{A}}_{\mathrm{v}} \in {\mathbb{R}}_{ > 0}^{C \times T}$ using the two above mentioned synthetic signals, ${\widetilde{\mathbf{x}}}_{\mathrm{m}}$ and ${\widetilde{\mathbf{x}}}_{\mathrm{v}}$ , respectively. ${\mathbf{A}}_{\mathrm{v}}$ is used as input to $D\left( \cdot \right)$ , and $D\left( \cdot \right)$ outputs an approximation of the clean singing voice signal ${\mathbf{x}}_{\mathrm{v}}$ , denoted by ${\widehat{\mathbf{x}}}_{\mathrm{v}}$ . ${\mathbf{A}}_{\mathrm{m}}$ is solely used to calculate an extra loss that will allow $E\left( \cdot \right)$ to learn information regarding the additive multi-modal noise (Mimilakis et al., 2020). An illustration of the training procedure is given in Figure 1. After the optimization process, $E\left( \cdot \right)$ can take as input any musical signal $\mathbf{x}$ , and will output the representation $\mathbf{A}$ of $\mathbf{x}$ . The benefit is that $\mathbf{A}$ has good interpretability attributes, e.g. it is non-negative and has a structured spectrogram-like representation, and it can be effectively used in the downstream task of singing voice separation.
|
| 34 |
+
|
| 35 |
+
< g r a p h i c s >
|
| 36 |
+
|
| 37 |
+
Figure 1. Overview of our proposed method for representation learning.
|
| 38 |
+
|
| 39 |
+
§ 2.1. ENCODER
|
| 40 |
+
|
| 41 |
+
The encoder $E\left( \cdot \right)$ consists of two one-dimensional (1D) convolutions with strides. The first 1D convolution uses a stride $S$ and a set of $C$ kernels, ${\mathbf{k}}_{c} \in {\mathbb{R}}^{L}$ , where $L$ is the temporal length of each $\mathbf{k}$ . The first convolution takes as inputs the signals ${\widetilde{\mathbf{x}}}_{\mathrm{m}}$ and ${\widetilde{\mathbf{x}}}_{\mathrm{v}}$ , and outputs the learned latent representations ${\widetilde{\mathbf{H}}}_{\mathrm{m}} \in {\mathbb{R}}_{ \geq 0}^{C \times T}$ and ${\widetilde{\mathbf{H}}}_{\mathrm{v}} \in {\mathbb{R}}_{ \geq 0}^{C \times T}$ , respectively, using
|
| 42 |
+
|
| 43 |
+
$$
|
| 44 |
+
{\widetilde{\mathrm{H}}}_{\star \left\lbrack {c,t}\right\rbrack } = \mathop{\sum }\limits_{{l = 0}}^{{L - 1}}{\widetilde{\mathrm{x}}}_{\star \left\lbrack {{St} + l}\right\rbrack }{\mathrm{k}}_{c\left\lbrack l\right\rbrack }, \tag{1}
|
| 45 |
+
$$
|
| 46 |
+
|
| 47 |
+
where " $\star$ " refers to either " $\mathrm{m}$ " or " $\mathrm{v}$ " for brevity, and $t \in$ $\left\lbrack {0,\ldots ,T - 1}\right\rbrack$ . Appropriate zero-padding is applied to ${\widetilde{\mathbf{x}}}_{ \star }$ , so that $T = \lceil N/S\rceil$ , where $\lceil \cdot \rceil$ is the ceiling function. Each ${\widetilde{\mathbf{H}}}_{ \star }$ is used as an input to the second 1D convolution, which uses another set of $C$ kernels, ${\mathbf{K}}_{{c}^{\prime }}^{\prime } \in {\mathbb{R}}^{{L}^{\prime } \times C}$ , where ${c}^{\prime } =$ $\left\lbrack {1,\ldots ,C}\right\rbrack$ , with a temporal length ${L}^{\prime }$ that is ${L}^{\prime } < < L$ . The output of the second convolution is ${\mathbf{H}}_{ \star } \in {\mathbb{R}}^{C \times T}$ , and is performed with a dilation factor of $\phi$ and a unit stride, as
|
| 48 |
+
|
| 49 |
+
$$
|
| 50 |
+
{\mathrm{H}}_{\star \left\lbrack {{c}^{\prime },t}\right\rbrack } = \mathop{\sum }\limits_{{c = 0}}^{{C - 1}}\mathop{\sum }\limits_{{{l}^{\prime } = 0}}^{{{L}^{\prime } - 1}}{\widetilde{\mathrm{H}}}_{\star \left\lbrack {c,t + \phi {l}^{\prime }}\right\rbrack }{\mathrm{K}}_{{c}^{\prime }\left\lbrack {{l}^{\prime },c}\right\rbrack }^{{}^{\prime }}. \tag{2}
|
| 51 |
+
$$
|
| 52 |
+
|
| 53 |
+
Then, each ${\mathbf{H}}_{ \star }$ is used in a residual connection, followed by the application of the rectified linear unit (ReLU) function (Nair & Hinton, 2010), as
|
| 54 |
+
|
| 55 |
+
$$
|
| 56 |
+
{\mathbf{A}}_{ \star } = \operatorname{ReLU}\left( {{\mathbf{H}}_{ \star } + {\widetilde{\mathbf{H}}}_{ \star }}\right) . \tag{3}
|
| 57 |
+
$$
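A minimal PyTorch sketch of the encoder described by Eqs. (1)-(3) is given below, using the hyper-parameters reported in the supplementary material ($C = 800$, $S = 256$, $L = 2048$, ${L}^{\prime } = 5$, $\phi = 10$); the padding choices are assumptions made to keep the temporal dimensions aligned, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Sketch of E(.): strided 1D conv, dilated 1D conv, residual connection, ReLU."""
    def __init__(self, C=800, S=256, L=2048, L_prime=5, phi=10):
        super().__init__()
        self.conv1 = nn.Conv1d(1, C, kernel_size=L, stride=S, padding=L // 2, bias=False)
        self.conv2 = nn.Conv1d(C, C, kernel_size=L_prime, dilation=phi,
                               padding=(L_prime - 1) * phi // 2, bias=False)

    def forward(self, x):                        # x: (batch, N) time-domain signal
        h_tilde = self.conv1(x.unsqueeze(1))     # Eq. (1): (batch, C, T)
        h = self.conv2(h_tilde)                  # Eq. (2): dilated convolution across templates
        return torch.relu(h + h_tilde)           # Eq. (3): residual connection and ReLU
```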
|
| 58 |
+
|
| 59 |
+
The residual connection followed by the ReLU is used in order to enforce smooth and non-negative representations. Smoothness and non-negativity are attributes that can enhance interpretability and are useful for the separation of audio and music sources (Smaragdis & Venkataramani, 2017). To further enforce smooth representations under realistic corruption processes, in (Mimilakis et al., 2020) it is proposed to minimize the (anisotropic) total-variation denoising cost function, ${\mathcal{L}}_{\mathrm{{TV}}}$ (Rudin et al., 1992), of the representation ${\mathbf{A}}_{\mathrm{m}}$ . ${\mathcal{L}}_{\mathrm{{TV}}}$ is computed as
|
| 60 |
+
|
| 61 |
+
$$
|
| 62 |
+
{\mathcal{L}}_{\mathrm{{TV}}}\left( {\mathbf{A}}_{\mathrm{m}}\right) = \frac{1}{CT}\left( {\mathop{\sum }\limits_{{c = 1}}^{{C - 1}}\mathop{\sum }\limits_{{t = 0}}^{{T - 1}}\left| {{\mathrm{A}}_{\mathrm{m}\left\lbrack {c,t}\right\rbrack } - {\mathrm{A}}_{\mathrm{m}\left\lbrack {c - 1,t}\right\rbrack }}\right| }\right.
|
| 63 |
+
$$
|
| 64 |
+
|
| 65 |
+
$$
|
| 66 |
+
\left. {+\mathop{\sum }\limits_{{t = 1}}^{{T - 1}}\mathop{\sum }\limits_{{c = 0}}^{{C - 1}}\left| {{\mathrm{\;A}}_{\mathrm{m}\left\lbrack {c,t}\right\rbrack } - {\mathrm{A}}_{\mathrm{m}\left\lbrack {c,t - 1}\right\rbrack }}\right| }\right) \text{ . } \tag{4}
|
| 67 |
+
$$
|
| 68 |
+
|
| 69 |
+
Practically, ${\mathcal{L}}_{\mathrm{{TV}}}$ penalizes $E\left( \cdot \right)$ by the norm of the first-order differences across the time-frames $T$ and the templates $C$ , promoting slowly time-varying representations and grouping of the template activity. The previously mentioned representation attributes stem from domain knowledge that is based on the STFT.
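A direct translation of Eq. (4) could look like the following sketch, which operates on a single $C \times T$ representation.

```python
import torch

def tv_loss(A_m):
    """Anisotropic total-variation denoising cost of Eq. (4) for a C x T representation."""
    C, T = A_m.shape
    diff_c = (A_m[1:, :] - A_m[:-1, :]).abs().sum()   # differences across the C templates
    diff_t = (A_m[:, 1:] - A_m[:, :-1]).abs().sum()   # differences across the T time-frames
    return (diff_c + diff_t) / (C * T)
```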
|
| 70 |
+
|
| 71 |
+
According to (Arjovsky et al., 2017) (Theorem 2), the total-variation distance, in our particular case the sum of absolute differences employed in Eq. (4), is not a suitable cost function for data distributions supported by low-dimensional manifolds. Instead, optimal transportation distances are suitable. We hypothesize that the singing voice, the mixture signals, and their corresponding representations can be described by low-dimensional manifolds, and we propose to replace ${\mathcal{L}}_{\mathrm{{TV}}}$ by Sinkhorn distances, ${\mathcal{L}}_{\mathrm{{SK}}}$ , because ${\mathcal{L}}_{\mathrm{{SK}}}$ allows an efficient computation of the optimal transportation cost (Cuturi, 2013). More specifically, we use
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
{\mathcal{L}}_{\mathrm{{SK}}}\left( {\mathbf{A}}_{\mathrm{m}}\right) = \left\langle {{\mathbf{P}}_{\lambda },\psi \left( {\mathbf{A}}_{\mathrm{m}}\right) }\right\rangle , \tag{5}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
where $\langle \cdot , \cdot \rangle$ is the Frobenius dot-product and $\psi : {\mathbb{R}}_{ > 0}^{C \times T} \mapsto {\mathbb{R}}_{ > 0}^{T \times T}$ is a function that computes the cost matrix $\mathbf{M} \in {\mathbb{R}}_{ \geq 0}^{T \times T}$ of pair-wise distances, defined as
|
| 78 |
+
|
| 79 |
+
$$
|
| 80 |
+
\psi \left( {\mathbf{A}}_{\mathrm{m}}\right) \mathrel{\text{ := }} {\mathrm{M}}_{t,{t}^{\prime }} = {\left( \mathop{\sum }\limits_{{c = 0}}^{{C - 1}}{\left( \left| {\mathrm{A}}_{\mathrm{m}\left\lbrack {c,t}\right\rbrack } - {\mathrm{A}}_{\mathrm{m}\left\lbrack {c,{t}^{\prime }}\right\rbrack }\right| \right) }^{p}\right) }^{1/p}, \tag{6}
|
| 81 |
+
$$
|
| 82 |
+
|
| 83 |
+
for $p = 1$ and $t,{t}^{\prime } \in \left\lbrack {0,\ldots ,T - 1}\right\rbrack$ . Only for, and prior to, the computation of $\mathbf{M}$ , ${\mathbf{A}}_{\mathrm{m}}$ is normalized so that the features at each time-frame $t$ sum up to unity. Furthermore, ${\mathbf{P}}_{\lambda } \in {\mathbb{R}}_{ > 0}^{T \times T}$ is the transportation plan that is computed by solving the minimization problem
|
| 84 |
+
|
| 85 |
+
$$
|
| 86 |
+
{\mathbf{P}}_{\lambda } = \underset{\mathbf{P} \in \mathbb{U}\left( {r,c}\right) }{\arg \min }\left\langle {\mathbf{P},\psi \left( {\mathbf{A}}_{\mathrm{m}}\right) }\right\rangle - \frac{1}{\lambda }H\left( \mathbf{P}\right) , \tag{7}
|
| 87 |
+
$$
|
| 88 |
+
|
| 89 |
+
where $H\left( \cdot \right)$ denotes the entropy function and $\lambda > 0$ is a scalar that controls the strength of the entropic regularization. $\mathbb{U}\left( {r,c}\right)$ is the set of non-negative matrices of size $T \times T$ whose rows and columns sum up to $r$ and $c$ , respectively, where $r = c = 1$ . For solving the minimization problem of Eq. (7) we employ the algorithm presented in (Cuturi, 2013) that is based on the Sinkhorn iterative matrix scaling operator (Sinkhorn, 1967).
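A compact sketch of Eqs. (5)-(7) is shown below, following Algorithm 1 of (Cuturi, 2013). The per-frame normalization, the number of iterations ($1e3$), and the termination threshold ($1e-5$) follow the supplementary material; treating the row and column marginals as all-ones vectors ($r = c = 1$) is our reading of the text, and the code is an illustration rather than the authors' implementation.

```python
import torch

def sinkhorn_distance(A_m, lam=1.5, p=1.0, n_iter=1000, tol=1e-5):
    """Entropy-regularized transport cost <P_lambda, M> for a non-negative C x T representation."""
    C, T = A_m.shape
    A = A_m / (A_m.sum(dim=0, keepdim=True) + 1.0)   # features of each time-frame sum to one
    M = torch.cdist(A.t(), A.t(), p=p)               # Eq. (6): pair-wise cost between time-frames
    K = torch.exp(-lam * M)                          # kernel used by the Sinkhorn scaling
    r = torch.ones(T)                                # marginals, r = c = 1
    c = torch.ones(T)
    u = torch.ones(T)
    for _ in range(n_iter):                          # Sinkhorn iterative matrix scaling
        u_prev = u
        u = r / (K @ (c / (K.t() @ u)))
        if (u - u_prev).abs().max() < tol:
            break
    v = c / (K.t() @ u)
    P = u.unsqueeze(1) * K * v.unsqueeze(0)          # transportation plan P_lambda (Eq. (7))
    return (P * M).sum()                             # Frobenius dot-product <P_lambda, M> (Eq. (5))
```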
|
| 90 |
+
|
| 91 |
+
§ 2.2. DECODER
|
| 92 |
+
|
| 93 |
+
The decoder $D\left( \cdot \right)$ takes as input the representation ${\mathbf{A}}_{\mathrm{v}}$ and yields an approximation of the clean singing voice signal ${\mathbf{x}}_{\mathrm{v}}$ , denoted by ${\widehat{\mathbf{x}}}_{\mathrm{v}} \in {\mathbb{R}}^{N}$ . Specifically, $D\left( \cdot \right)$ models the clean singing voice as a sum of $C$ modulated sinusoidal components that overlap in ${\mathbb{R}}^{N}$ . The components are computed using a 1D transposed convolution with stride $S$ and another set of $C$ kernels, ${\mathbf{w}}_{c} \in {\mathbb{R}}^{L}$ , as
|
| 94 |
+
|
| 95 |
+
$$
|
| 96 |
+
{\widehat{\mathrm{x}}}_{\mathrm{v}\left\lbrack {{St} + l}\right\rbrack } = \eta + \mathop{\sum }\limits_{{c = 0}}^{{C - 1}}{\mathrm{\;A}}_{{\mathrm{v}}_{\left\lbrack c,t\right\rbrack }}{\mathrm{w}}_{c\left\lbrack l\right\rbrack }\text{ , where } \tag{8}
|
| 97 |
+
$$
|
| 98 |
+
|
| 99 |
+
$$
|
| 100 |
+
\eta = \left\{ {\begin{array}{ll} 0, & \text{ if }t = 0 \\ {\widehat{\mathrm{x}}}_{\mathrm{v}\left\lbrack {S\left( {t - 1}\right) + l}\right\rbrack }, & \text{ otherwise } \end{array}.}\right. \tag{9}
|
| 101 |
+
$$
|
| 102 |
+
|
| 103 |
+
As can be seen from Eq. (9), $\eta$ is a past sample contained in ${\widehat{\mathbf{x}}}_{\mathrm{v}}$ that is used for the overlap-add process. Regarding the kernels ${\mathbf{w}}_{c}$ of the decoder, in (Mimilakis et al., 2020) it is proposed to re-parameterize them as
|
| 104 |
+
|
| 105 |
+
$$
|
| 106 |
+
{\mathrm{w}}_{c\left\lbrack l\right\rbrack } = \cos \left( {{2\pi }{f}_{c}^{2}l + {\rho }_{c}}\right) {\mathrm{b}}_{c\left\lbrack l\right\rbrack }, \tag{10}
|
| 107 |
+
$$
|
| 108 |
+
|
| 109 |
+
where $\cos \left( \cdot \right)$ is the cosine function, and $l = \left\lbrack {0,\ldots ,L - 1}\right\rbrack$ is the time index. The parameters that are jointly learned with the parameters of the DAE are the sampling-rate-normalized carrier frequency ${f}_{c}$ , the phase ${\rho }_{c}$ (in radians), and the modulating signal ${\mathbf{b}}_{c} \in {\mathbb{R}}^{L}$ . Direct access to natural quantities like the above significantly boosts the interpretability of the representation learning method. Additionally, ${\mathbf{w}}_{c}$ can be sorted according to the carrier frequency ${f}_{c}$ , promoting intuitive representations.
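A sketch of the re-parameterized decoder is given below; it keeps the cosine argument exactly as printed in Eq. (10) and realizes the overlap-add of Eqs. (8)-(9) with a transposed convolution. The random initialization of ${f}_{c}$ here is a placeholder (the Mel-based initialization is described in the supplementary material), and the rest is an illustration under those assumptions.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    """Sketch of D(.): cosine-modulated kernels (Eq. (10)) applied via transposed 1D convolution."""
    def __init__(self, C=800, L=2048, S=256):
        super().__init__()
        self.S = S
        self.f = nn.Parameter(torch.rand(C))                      # normalized carrier frequencies f_c
        self.rho = nn.Parameter(torch.zeros(C))                   # phases rho_c (initialized to zero)
        self.b = nn.Parameter(torch.full((C, L), 1.0 / (C + L)))  # modulating signals b_c
        self.register_buffer("l", torch.arange(L, dtype=torch.float32))

    def forward(self, A_v):                                       # A_v: (batch, C, T)
        carrier = torch.cos(2 * math.pi * (self.f ** 2).unsqueeze(1) * self.l
                            + self.rho.unsqueeze(1))              # Eq. (10), as printed
        w = (carrier * self.b).unsqueeze(1)                       # (C, 1, L) kernels
        return F.conv_transpose1d(A_v, w, stride=self.S).squeeze(1)  # Eqs. (8)-(9): overlap-add
```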
|
| 110 |
+
|
| 111 |
+
After the reconstruction of ${\widehat{\mathbf{x}}}_{\mathrm{v}}$ , the negative signal-to-noise ratio (neg-SNR) (Kavalerov et al., 2019) is computed as
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
{\mathcal{L}}_{\text{ neg-SNR }}\left( {{\mathbf{x}}_{\mathrm{v}},{\widehat{\mathbf{x}}}_{\mathrm{v}}}\right) = - {10}{\log }_{10}\left( \frac{{\begin{Vmatrix}{\mathbf{x}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}{{\begin{Vmatrix}{\mathbf{x}}_{\mathrm{v}} - {\widehat{\mathbf{x}}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}\right) , \tag{11}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
where $\parallel \cdot {\parallel }_{2}$ is the ${\ell }_{2}$ vector norm, and the negative sign is used to cast the logarithmic SNR as a minimization objective. Then, the overall minimization objective for $E\left( \cdot \right)$ and $D\left( \cdot \right)$ is computed using ${\mathcal{L}}_{\mathrm{{TV}}}$ as
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
{\mathcal{L}}_{A} = {\mathcal{L}}_{\text{ neg-SNR }} + \omega {\mathcal{L}}_{\mathrm{{TV}}}, \tag{12}
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
or using ${\mathcal{L}}_{\mathrm{{SK}}}$ as
|
| 124 |
+
|
| 125 |
+
$$
|
| 126 |
+
{\mathcal{L}}_{B} = {\mathcal{L}}_{\text{ neg-SNR }} + \omega {\mathcal{L}}_{\mathrm{{SK}}}, \tag{13}
|
| 127 |
+
$$
|
| 128 |
+
|
| 129 |
+
where $\omega$ is a scalar that weights the impact of the representation objective (either ${\mathcal{L}}_{\mathrm{{TV}}}$ or ${\mathcal{L}}_{\mathrm{{SK}}}$ ) in the learning signal for $E\left( \cdot \right)$ .
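The two objectives can be assembled as in the following sketch; `repr_loss_fn` stands for either of the representation objectives (for instance the `tv_loss` or `sinkhorn_distance` sketches above), and the small `eps` is our own numerical-stability assumption.

```python
import torch

def neg_snr(x_v, x_v_hat, eps=1e-12):
    """Negative SNR in dB (Eq. (11)), cast as a minimization objective."""
    num = torch.sum(x_v ** 2)
    den = torch.sum((x_v - x_v_hat) ** 2) + eps
    return -10.0 * torch.log10(num / den)

def total_loss(x_v, x_v_hat, A_m, repr_loss_fn, omega=1.0):
    """Eqs. (12)/(13): neg-SNR plus an omega-weighted representation objective."""
    return neg_snr(x_v, x_v_hat) + omega * repr_loss_fn(A_m)
```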
|
| 130 |
+
|
| 131 |
+
§ 3. EXPERIMENTAL PROCEDURE
|
| 132 |
+
|
| 133 |
+
§ 3.1. DATASET
|
| 134 |
+
|
| 135 |
+
For training and testing the representation learning method we use the freely available MUSDB18 dataset (Rafii et al., 2017). The dataset consists of 150 two-channel professionally produced multi-tracks, i.e., the stereophonic signals of bass, drums, singing voice, and other music instruments, that comprise a music mixture. Every signal is sampled at ${44100}\mathrm{\;{Hz}}$ . The multi-tracks are split into training (100 multi-tracks) and testing (50 multi-tracks) subsets.
|
| 136 |
+
|
| 137 |
+
§ 3.2. TRAINING
|
| 138 |
+
|
| 139 |
+
During training we sample a set of four multi-tracks from which we use the vocals and the other music instrument sources, collectively forming the accompaniment source. The accompaniment source is computed by adding the bass, drums, and other music instrument sources. Then, each sampled multi-track is down-mixed to a single channel and is partitioned into overlapping segments of $N = {44100}$ samples. The overlap is 22050 samples. We randomly shuffle the segments for each source and corrupt the singing voice signal using the shuffled segments of the accompaniment source. For the corruption by additive Gaussian noise, the standard deviation of the noise is set to ${1e} - 4$ .
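A rough NumPy sketch of this segment creation and corruption step is shown below; the segment length, hop, and noise standard deviation follow the text, while handling the tracks as plain 1-D mono arrays and shuffling only the accompaniment segments are simplifying assumptions.

```python
import numpy as np

def make_training_triplets(vocals, accompaniment, seg_len=44100, hop=22050,
                           noise_std=1e-4, rng=None):
    """Build (x_tilde_m, x_tilde_v, x_v) triplets from mono vocal/accompaniment tracks."""
    rng = rng if rng is not None else np.random.default_rng()
    segs_v = [vocals[s:s + seg_len] for s in range(0, len(vocals) - seg_len + 1, hop)]
    segs_ac = [accompaniment[s:s + seg_len] for s in range(0, len(accompaniment) - seg_len + 1, hop)]
    rng.shuffle(segs_ac)                                              # random mixing of accompaniment
    triplets = []
    for x_v, x_ac in zip(segs_v, segs_ac):
        x_tilde_m = x_v + x_ac                                        # additive multi-modal corruption
        x_tilde_v = x_v + noise_std * rng.standard_normal(seg_len)    # Gaussian corruption
        triplets.append((x_tilde_m, x_tilde_v, x_v))
    return triplets
```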
|
| 140 |
+
|
| 141 |
+
For optimizing the parameters of the representation learning method, with respect to the minimization of Eq. (12) or Eq. (13), we use the Adam algorithm (Kingma & Ba, 2015), with a batch of 8 segments and a learning rate of ${1e} - 4$ . To compute the Sinkhorn distance(s), we average, within the batch, all the cost matrices $\mathbf{M}$ computed using Eq. (6) and each ${\mathbf{A}}_{\mathrm{m}}$ contained in the batch.
|
| 142 |
+
|
| 143 |
+
§ 3.3. EVALUATION
|
| 144 |
+
|
| 145 |
+
For evaluating the usefulness of the representation that is learned by our method, we use the remaining 50 tracks. Each track is down-mixed and partitioned into non-overlapping segments of $N = {44100}$ samples (1 second length). Shuffling and random mixing is not performed at this stage. However, silent segments of the singing voice are discarded. The representation is evaluated with respect to the three following criteria: i) the reconstruction error when encoding and decoding the clean singing voice signal using the previously described methodology, ii) the reconstruction error of the singing voice signal separated by binary masking, and iii) the additivity of the representation. The first two criteria are objectively measured with respect to the clean singing voice signal ${\mathbf{x}}_{\mathrm{v}}$ using the scale-invariant signal-to-distortion ratio (SI-SDR) (Roux et al., 2019). Details regarding the computation of SI-SDR and the separation by binary masking are given in the supplementary material. Binary masking is used because it is an indicator of how disjoint (i.e. non-overlapping) two sources are, given a representation (more information exists in the supplementary material). We assess the additivity of the sources by computing the measure
|
| 146 |
+
|
| 147 |
+
$$
|
| 148 |
+
\mathcal{A}\left( {{\mathbf{x}}_{\mathrm{m}},{\mathbf{x}}_{\mathrm{v}},{\mathbf{x}}_{\mathrm{{ac}}}}\right) = 1 - \frac{{\begin{Vmatrix}E\left( {\mathbf{x}}_{\mathrm{m}}\right) - E\left( {\mathbf{x}}_{\mathrm{v}}\right) - E\left( {\mathbf{x}}_{\mathrm{{ac}}}\right) \end{Vmatrix}}_{1}}{{\begin{Vmatrix}E\left( {\mathbf{x}}_{\mathrm{m}}\right) \end{Vmatrix}}_{1}}, \tag{14}
|
| 149 |
+
$$
|
| 150 |
+
|
| 151 |
+
|
| 152 |
+
|
| 153 |
+
where $\parallel \cdot {\parallel }_{1}$ is the ${L}_{1}$ matrix norm, $\varepsilon = {1e} - {24}$ is a small term for ensuring numerical stability, and ${\mathbf{x}}_{\mathrm{{ac}}}$ is the time-domain signal of the accompaniment music source that is computed by mixing the multi-tracks available in the testing subset. High values of $\mathcal{A}\left( \cdot \right)$ indicate that the representation of the mixture signal consists of non-negative and additive sources (i.e. higher $\mathcal{A}\left( \cdot \right)$ is better). The attribute of additivity is important for the computation of optimal separation masks (Liutkus & Badeau, 2015), and in the unsupervised exploitation of music sources' structure (Smaragdis et al., 2006; Huang et al., 2012).
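A direct sketch of Eq. (14) follows, assuming a trained encoder `E` that maps a time-domain tensor to its $C \times T$ representation; placing $\varepsilon$ in the denominator is our reading of the numerical-stability term mentioned above.

```python
import torch

def additivity(E, x_m, x_v, x_ac, eps=1e-24):
    """Additivity measure A(.) of Eq. (14) for a trained encoder E."""
    A_m, A_v, A_ac = E(x_m), E(x_v), E(x_ac)
    return 1.0 - (A_m - A_v - A_ac).abs().sum() / (A_m.abs().sum() + eps)
```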
|
| 154 |
+
|
| 155 |
+
§ 4. RESULTS & DISCUSSION
|
| 156 |
+
|
| 157 |
+
Table 1 contains the average and standard deviation values of the additivity measure $\mathcal{A}\left( \cdot \right)$ , the SI-SDR for the reconstruction and the separation objective performance in $\mathrm{{dB}}$ , and the values of the hyper-parameters $\omega$ and $\lambda$ . The results in Table 1 are discussed according to the SI-SDR value (higher is better), because SI-SDR is the reconstruction objective.
|
| 158 |
+
|
| 159 |
+
Table 1. Results from objectively evaluating the learned representations. Boldfaced values denote best obtained performance.
|
| 160 |
+
|
| 161 |
+
| Objective | $\omega$ | $\lambda$ | SI-SDR (dB) | SI-SDR-BM (dB) | $\mathcal{A}\left( \cdot \right)$ |
| --- | --- | --- | --- | --- | --- |
| ${\mathcal{L}}_{A}$ | 0.5 | N/A | 31.49 (±2.98) | 4.43 (±4.98) | 0.76 (±0.10) |
| ${\mathcal{L}}_{A}$ | 1.0 | N/A | 31.39 (±3.16) | 4.66 (±4.92) | 0.76 (±0.10) |
| ${\mathcal{L}}_{A}$ | 1.5 | N/A | 31.01 (±3.13) | 4.97 (±4.93) | 0.75 (±0.10) |
| ${\mathcal{L}}_{A}$ | 2.0 | N/A | 30.96 (±2.98) | 4.65 (±4.90) | 0.76 (±0.10) |
| ${\mathcal{L}}_{A}$ | 4.0 | N/A | 31.40 (±2.83) | 5.06 (±4.97) | 0.76 (±0.10) |
| ${\mathcal{L}}_{B}$ | 1.0 | 0.1 | 31.28 (±2.98) | 5.40 (±5.31) | 0.76 (±0.09) |
| ${\mathcal{L}}_{B}$ | 1.0 | 0.5 | 31.61 (±3.38) | 5.63 (±5.29) | 0.77 (±0.09) |
| ${\mathcal{L}}_{B}$ | 1.0 | 1.0 | 31.29 (±3.25) | 4.33 (±5.28) | 0.86 (±0.08) |
| ${\mathcal{L}}_{B}$ | 1.0 | 1.5 | 29.98 (±3.48) | 0.06 (±6.43) | 0.89 (±0.08) |
| ${\mathcal{L}}_{B}$ | 1.0 | 2.0 | 31.13 (±3.66) | -0.02 (±6.44) | 0.89 (±0.08) |
|
| 196 |
+
|
| 197 |
+
There are two observable trends in Table 1. The first trend is that when using ${\mathcal{L}}_{B}$ , small values of $\lambda$ marginally improve the SI-SDR, compared to the best SI-SDR when using ${\mathcal{L}}_{A}$ (i.e. $\omega = {0.5}$ and SI-SDR=31.49). Specifically, for $\lambda = {0.5}$ and when using ${\mathcal{L}}_{B}$ , we obtain an improvement of ${0.12}\mathrm{\;{dB}}$ and ${1.20}\mathrm{\;{dB}}$ for SI-SDR and SI-SDR-BM, respectively, compared to the case of using ${\mathcal{L}}_{A}$ and $\omega = {0.5}$ . Additionally, with the same $\lambda = {0.5}$ for ${\mathcal{L}}_{B}$ , we obtain an improvement of ${0.57}\mathrm{\;{dB}}$ SI-SDR-BM, compared to the best SI-SDR-BM with ${\mathcal{L}}_{A}$ (i.e. with $\omega = {4.0}$ ). This trend shows that when using Sinkhorn distances as an objective (i.e. ${\mathcal{L}}_{B}$ ) and a small entropic regularization weight (i.e. small values of $\lambda$ ), there is a marginal improvement of the reconstruction performance for the singing voice (measured with SI-SDR), and the learned representations also yield better results for singing voice separation by masking (measured with SI-SDR-BM).
|
| 198 |
+
|
| 199 |
+
< g r a p h i c s >
|
| 200 |
+
|
| 201 |
+
Figure 2. Learned representations for the mixture (left), the singing voice (middle), and the accompaniment (right) signals using the $E\left( \cdot \right)$ optimized with ${\mathcal{L}}_{B}$ for ${\mathcal{L}}_{\mathrm{{SK}}} : \omega = {4.0},\lambda = {1.5}$
|
| 202 |
+
|
| 203 |
+
The second trend observed in Table 1 is that when using ${\mathcal{L}}_{B}$ and $\lambda > 1$ , specifically for $\lambda \in \left\lbrack {{1.5},{2.0}}\right\rbrack$ , the SI-SDR for binary masking drops by more than $5\mathrm{\;{dB}}$ , compared to $\lambda = {0.5}$ . This indicates that the separation by binary masking fails, suggesting that the singing voice and accompaniment completely overlap in the representation of the mixture ${\mathbf{A}}_{\mathrm{m}}$ . That is expected, since entropy expresses the uncertainty about the representation of the mixture signal: during training, all the elements in the feature space of the representation are equally probable to be active when the mixture signal is encoded. However, that uncertainty comes with an observed effect, namely that the sources become additive in the learned representation.
|
| 204 |
+
|
| 205 |
+
To further investigate the effect of entropic regularization with respect to the additivity metric, we keep the best $\lambda = {1.5}$ from Table 1, and examine the impact of the weight $\omega$ on ${\mathcal{L}}_{B}$ . The corresponding results, compared to the STFT, which is the most commonly employed representation for music source separation, are given in Table 2. The results from Table 2 suggest that by increasing the weight $\omega$ that controls the strength of the representation objective in the learning signal, the learned mixture representations, for $\omega = {4.0}$ , consist of two almost additive representations, i.e., the singing voice and the accompaniment representations. Furthermore, nearly all representations computed using the Sinkhorn distances and the entropic regularization outperform the STFT with respect to the objective measure of additivity, in an unsupervised fashion. To qualitatively assess the representations for the extreme case observed in Table 2, Fig. 2 illustrates learned representations for the mixture, the singing voice, and the accompaniment signals. The signals were acquired from a single multi-track segment contained in the testing sub-set of MUSDB18. The representations are computed using the encoder optimized with the ${\mathcal{L}}_{B}$ objective. As can be clearly observed from Fig. 2, entropic regularization higher than 0.5 enables the learning of representations in which particular sources, such as the accompaniment, exhibit distinct structure, i.e., vertical activity (activity with respect to $C$ ). Furthermore, the representation of the singing voice is characterized by horizontal activity, i.e., a few components $C$ are active and smoothly vary in time. We believe that the distinct structure of the music sources, observed in Fig. 2, could be useful for unsupervised separation and/or enhancement methods such as the deep audio prior (Michelashvili & Wolf, 2019) and the harmonic convolution(s) model (Zhang et al., 2020).
|
| 206 |
+
|
| 207 |
+
Table 2. Objective evaluation of the additivity of the learned representations.
|
| 208 |
+
|
| 209 |
+
| Objective | $\omega$ | $\lambda$ | $\mathcal{A}\left( \cdot \right)$ |
| --- | --- | --- | --- |
| ${\mathcal{L}}_{B}$ | 1.0 | 1.5 | 0.89 (±0.08) |
| ${\mathcal{L}}_{B}$ | 1.5 | 1.5 | 0.90 (±0.07) |
| ${\mathcal{L}}_{B}$ | 2.0 | 1.5 | 0.92 (±0.07) |
| ${\mathcal{L}}_{B}$ | 4.0 | 1.5 | 0.93 (±0.06) |
| STFT | N/A | N/A | 0.86 (±0.06) |
|
| 229 |
+
|
| 230 |
+
§ 5. CONCLUSIONS
|
| 231 |
+
|
| 232 |
+
In this work we proposed the usage of entropy-regularized Sinkhorn distances as a cost objective for unsupervised learning of interpretable music signal representations. We experimentally showed that Sinkhorn distances can be useful for the problem of learning representations for singing voice separation. In particular, the learned representations allow the separation of singing voice by masking for small values of entropic regularization, improving a previously proposed unsupervised approach. Nonetheless, higher values of entropic regularization lead to learned representations of sources that are distinctly structured and almost additive; attributes that are useful in music source separation. The source code is based on the PyTorch framework (Paszke et al., 2019) and is freely available online ${}^{1}$ .
|
| 233 |
+
|
| 234 |
+
§ ACKNOWLEDGEMENTS
|
| 235 |
+
|
| 236 |
+
Stylianos I. Mimilakis is supported in part by the German Research Foundation (AB 675/2-1, MU 2686/11-1). K. Drossos would like to acknowledge CSC Finland for computational resources.
|
| 237 |
+
|
| 238 |
+
${}^{1}$ https://github.com/Js-Mim/rl_singing_voice
|
| 239 |
+
|
| 240 |
+
§ SUPPLEMENTARY MATERIAL
|
| 241 |
+
|
| 242 |
+
§ COMPUTATION OF SINKHORN DISTANCES
|
| 243 |
+
|
| 244 |
+
The entropy for the regularization of Eq.(7) is computed as
|
| 245 |
+
|
| 246 |
+
$$
|
| 247 |
+
H\left( \mathbf{P}\right) = - \mathop{\sum }\limits_{{t,{t}^{\prime } = 0}}^{{T - 1}}{\mathrm{P}}_{\left\lbrack t,{t}^{\prime }\right\rbrack }\log \left( {\mathrm{P}}_{\left\lbrack t,{t}^{\prime }\right\rbrack }\right)
|
| 248 |
+
$$
|
| 249 |
+
|
| 250 |
+
For solving Eq. (7) with the Sinkhorn iterative matrix scaling algorithm and entropic regularization, we used Algorithm 1 presented in (Cuturi, 2013). We set the total number of iterations to ${1e3}$ per batch, and the termination threshold to ${1e} - 5$ .
|
| 251 |
+
|
| 252 |
+
The normalization of ${\mathbf{A}}_{\mathrm{m}}$ prior to the computation of the Sinkhorn distances is based on:
|
| 253 |
+
|
| 254 |
+
$$
|
| 255 |
+
{\mathrm{A}}_{\mathrm{m}\left\lbrack {c,t}\right\rbrack }^{ * } = \frac{{\mathrm{A}}_{\mathrm{m}\left\lbrack {c,t}\right\rbrack }}{\mathop{\sum }\limits_{c}\left( {{\mathrm{\;A}}_{\mathrm{m}\left\lbrack {c,t}\right\rbrack } + \frac{1}{C}}\right) }
|
| 256 |
+
$$
|
| 257 |
+
|
| 258 |
+
§ HYPER-PARAMETER SELECTION
|
| 259 |
+
|
| 260 |
+
§ CONVOLUTIONAL NETWORKS
|
| 261 |
+
|
| 262 |
+
For training, the total number of iterations throughout the whole training sub-set is set to 10. The selection is based on the experimental procedure presented in (Mimilakis et al., 2020), suggesting that any improvements towards the minimization of the overall cost function do not take place after the 10th iteration.
|
| 263 |
+
|
| 264 |
+
The hyper-parameters for the convolution kernels are based on the best performing combination that has been previously presented in (Mimilakis et al., 2020) and are: number of kernels for the convolutional encoder ${C}^{\prime } = C = {800}$ , stride size used in the first convolutional operator and the decoder $S = {256}$ , length of each kernel in the first convolution and in the decoder $L = {2048}$ , length of the second convolution ${L}^{\prime } = 5$ , and the dilation factor of the second convolution $\phi = {10}$ .
|
| 265 |
+
|
| 266 |
+
§ AUDIO SIGNALS & TRANSFORMS
|
| 267 |
+
|
| 268 |
+
In the evaluation and for the comparison with the STFT, the STFT uses a window size of 2048 samples, an analysis step-size of 256 samples and the Hamming windowing function. The window-size and the step-size were selected according to the closest match of the hyper-parameters in the convolutions (stride, and kernel length).
|
| 269 |
+
|
| 270 |
+
The removal of silent segments is based on the following:
|
| 271 |
+
|
| 272 |
+
$$
|
| 273 |
+
{l}_{{\mathrm{x}}_{\mathrm{v}}} = {10}{\log }_{10}\left( {{\begin{Vmatrix}{\mathbf{x}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2} + \epsilon }\right), \quad \left\{ \begin{array}{ll} {\mathbf{x}}_{\mathrm{v}} : \text{ active, } & \text{ if }{l}_{{\mathrm{x}}_{\mathrm{v}}} \geq - {10} \\ {\mathbf{x}}_{\mathrm{v}} : \text{ silent, } & \text{ otherwise } \end{array}\right.
|
| 274 |
+
$$
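The gate above can be expressed as a small sketch; the value of $\epsilon$ is not given in the text, so the one used here is an assumption.

```python
import numpy as np

def is_active(x_v, threshold_db=-10.0, eps=1e-12):
    """Energy gate used to discard silent singing-voice segments during evaluation."""
    level_db = 10.0 * np.log10(np.sum(np.asarray(x_v, dtype=np.float64) ** 2) + eps)
    return level_db >= threshold_db
```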
|
| 275 |
+
|
| 276 |
+
§ INITIALIZATION
|
| 277 |
+
|
| 278 |
+
The kernels in the first convolutions are randomly initialized with values drawn from a uniform distribution. The bounds of the uniform distribution are $\left( {-\sqrt{\frac{3}{C}},\sqrt{\frac{3}{C}}}\right)$ , following the initialization strategy presented in (He et al., 2015). For the decoder, the phase values ${\rho }_{c}$ are initialized to zero, and all the elements of the modulating vectors ${\mathbf{b}}_{c}$ are initialized to $\frac{1}{C + L}$ . The initialization of the normalized frequencies ${f}_{c}$ is inspired by (Ravanelli & Bengio, 2018) and is performed by first computing the center frequencies of the Mel scale ${f}_{\text{ Mel }}$ between ${f}_{\mathrm{{Hz}}} \in \left\lbrack {{30},\ldots ,{22050}}\right\rbrack \mathrm{{Hz}}$ , over $C = {800}$ steps, using
|
| 279 |
+
|
| 280 |
+
$$
|
| 281 |
+
{f}_{\mathrm{{Mel}}} = {2595}{\log }_{10}\left( {1 + \frac{{f}_{\mathrm{{Hz}}}}{700}}\right)
|
| 282 |
+
$$
|
| 283 |
+
|
| 284 |
+
and then the initial ${f}_{c}$ value is computed as
|
| 285 |
+
|
| 286 |
+
$$
|
| 287 |
+
{f}_{c} = \frac{{700}\left( {{10}^{{f}_{\mathrm{{Mel}}}/{2595}} - 1}\right) }{44100}
|
| 288 |
+
$$
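A small sketch of this initialization is given below; spacing the Mel-scale values linearly over the $C = 800$ steps is our interpretation of the description above.

```python
import numpy as np

def init_carrier_frequencies(C=800, f_low=30.0, f_high=22050.0, sample_rate=44100.0):
    """Initialize the normalized carrier frequencies f_c on the Mel scale."""
    mel = np.linspace(2595.0 * np.log10(1.0 + f_low / 700.0),
                      2595.0 * np.log10(1.0 + f_high / 700.0), C)   # C steps on the Mel scale
    hz = 700.0 * (10.0 ** (mel / 2595.0) - 1.0)                      # back to Hz
    return hz / sample_rate                                          # normalize by the sampling rate
```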
|
| 289 |
+
|
| 290 |
+
§ SEPARATION BY BINARY MASKING
|
| 291 |
+
|
| 292 |
+
We conduct singing voice separation by masking because masking is an important operation in audio and music source separation, and has been extensively used by DL-based approaches and also in representation learning (Tzinis et al., 2020). The focus is given to informed separation, i.e., masks are computed by an oracle method using the information for all the mixture's sources from the dataset. This is done in order to estimate the least-upper-bound performance of singing voice separation for a learned representation, and it alleviates the biases from the prior information that music source separation approaches rely on. Examples of such biases include assumptions about a source's structure and existing neural architectures engineered for the STFT. Finally, binary masking is used because it is an indicator of how disjoint (i.e. how little they overlap) two sources are, given a representation.
|
| 293 |
+
|
| 294 |
+
The binary mask is computed by encoding three signals: the mixture ${\mathbf{x}}_{\mathrm{m}}$ , the accompaniment source ${\mathbf{x}}_{\mathrm{{ac}}}$ , and the singing voice signal ${\mathbf{x}}_{\mathrm{v}}$ . Using the trained encoder $E\left( \cdot \right)$ the representations ${\mathbf{A}}_{\mathrm{m}},{\mathbf{A}}_{\mathrm{{ac}}}$ , and ${\mathbf{A}}_{\mathrm{v}}$ are computed for ${\mathbf{x}}_{\mathrm{m}},{\mathbf{x}}_{\mathrm{{ac}}}$ , and ${\mathbf{x}}_{\mathrm{v}}$ , respectively. The mask ${\mathbf{G}}_{\mathrm{v}} \in {\mathbb{R}}^{C \times T}$ is computed as
|
| 295 |
+
|
| 296 |
+
$$
|
| 297 |
+
{\mathbf{G}}_{\mathrm{v}} = g\left( {{\mathbf{A}}_{\mathrm{v}} \oslash {\mathbf{A}}_{\mathrm{{ac}}}}\right) ,
|
| 298 |
+
$$
|
| 299 |
+
|
| 300 |
+
where " $\oslash$ " is the element-wise division and $g\left( \cdot \right)$ is defined
|
| 301 |
+
|
| 302 |
+
as
|
| 303 |
+
|
| 304 |
+
$$
|
| 305 |
+
g\left( \mathrm{x}\right) = \left\{ {\begin{array}{ll} 1, & \text{ if }\mathrm{x} \geq {0.5} \\ 0, & \text{ otherwise } \end{array}.}\right.
|
| 306 |
+
$$
|
| 307 |
+
|
| 308 |
+
The approximation of the singing voice time-domain signal ${\widehat{\mathbf{x}}}_{\mathrm{v}}$ using binary masking is computed using
|
| 309 |
+
|
| 310 |
+
$$
|
| 311 |
+
{\widehat{\mathbf{x}}}_{\mathrm{v}} = D\left( {{\mathbf{A}}_{\mathrm{m}} \odot {\mathbf{G}}_{\mathrm{v}}}\right) ,
|
| 312 |
+
$$
|
| 313 |
+
|
| 314 |
+
where " $\odot$ " is the element-wise (Hadamard) product.
|
| 315 |
+
|
| 316 |
+
§ COMPUTATION OF SI-SDR
|
| 317 |
+
|
| 318 |
+
The scale-invariant signal-to-distortion ratio in $\mathrm{{dB}}$ is computed for each segment, as
$$
\text{SI-SDR}\left( {{\mathbf{x}}_{\mathrm{v}},{\widehat{\mathbf{x}}}_{\mathrm{v}}}\right) = {10}{\log }_{10}\left( \frac{{\begin{Vmatrix}\alpha {\mathbf{x}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}{{\begin{Vmatrix}\alpha {\mathbf{x}}_{\mathrm{v}} - {\widehat{\mathbf{x}}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}\right) \text{, for}
$$

$$
\alpha = \frac{{\widehat{\mathbf{x}}}_{\mathrm{v}}^{T}{\mathbf{x}}_{\mathrm{v}}}{{\begin{Vmatrix}{\mathbf{x}}_{\mathrm{v}}\end{Vmatrix}}_{2}^{2}}. \tag{15}
$$
Higher SI-SDR values indicate better reconstruction or separation performance.
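For reference, the SI-SDR of Eq. (15) can be transcribed directly into NumPy as below (a plain transcription of the formula, not the evaluation code used for the reported numbers).

```python
import numpy as np

def si_sdr(x_v, x_v_hat, eps=1e-8):
    """Scale-invariant SDR in dB between a reference x_v and an estimate x_v_hat."""
    alpha = np.dot(x_v_hat, x_v) / (np.dot(x_v, x_v) + eps)  # optimal scaling of the reference
    target = alpha * x_v
    distortion = target - x_v_hat
    return 10.0 * np.log10((np.sum(target ** 2) + eps) / (np.sum(distortion ** 2) + eps))
```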
§ ADDITIONAL RESULTS
In Figure 3 we demonstrate additional results from the objective evaluation of the learned representations using ${\mathcal{L}}_{B}$, which consists of the Sinkhorn distances. In particular, Figure 3 contains error plots for a wider range of entropic regularization weights $\lambda \in \left\lbrack {{0.1},{0.5},{1.0},{1.3},{1.5},{2.0},{5.0},{10.0}}\right\rbrack$ and for $\omega = {1.0}$. In addition, we include results for $p = 1$ and $p = 2$, where $p > 0$ is the exponent used in the computation of the cost matrix $\mathbf{M}$ employed by the Sinkhorn distances.
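For readers unfamiliar with the metric, the sketch below shows a generic entropy-regularized (Sinkhorn) transport cost between two channel histograms, using the textbook Gibbs-kernel parameterization $\exp \left( -\mathbf{M}/\lambda \right)$ and a one-dimensional ground cost with exponent $p$; the exact way $\lambda$, $\omega$, and $p$ enter ${\mathcal{L}}_{B}$ in our implementation may differ, so this is only an illustration of the mechanics.

```python
import numpy as np

def sinkhorn_cost(a, b, p=1, lam=1.5, n_iter=200, eps=1e-16):
    """Entropy-regularized OT cost between histograms a and b over C channels (sketch).

    a and b are non-negative vectors of equal length that each sum to one.
    """
    C = len(a)
    idx = np.arange(C, dtype=float)
    M = np.abs(idx[:, None] - idx[None, :]) ** p  # illustrative 1-D ground cost with exponent p
    K = np.exp(-M / lam)                          # Gibbs kernel (assumed parameterization)
    u = np.ones(C)
    for _ in range(n_iter):                       # Sinkhorn fixed-point iterations
        v = b / (K.T @ u + eps)
        u = a / (K @ v + eps)
    P = u[:, None] * K * v[None, :]               # approximate optimal transport plan
    return float(np.sum(P * M))
```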
From Figure 3, two main observations can be highlighted. The first is that computing the cost matrix $\mathbf{M}$ with $p = 2$ leads to marginally sub-optimal results, for nearly all $\lambda$ values and metrics, compared to $p = 1$. Specifically, $p = 1$ outperforms $p = 2$ by $1\mathrm{\;dB}$ on average across $\lambda$ values for the reconstruction performance, and by ${0.6}\mathrm{\;dB}$ on average for the separation-by-masking performance. For the additivity metric, $p = 2$ marginally outperforms $p = 1$, by a negligible difference of $3 \times {10}^{-3}$. For these reasons, the main results of our work focus on $p = 1$.
The second observation is that for $\lambda > 2$ the separation performance dip and the additivity performance peak observed in the region $\lambda \in \left\lbrack {{1.3},{1.5},{2.0}}\right\rbrack$ disappear, and the examined method performs similarly to the low-entropy settings according to the examined metrics. This contradicts our expectations for the effect of entropic regularization. Our only explanation for this behavior is that for values $\lambda > 2$ the exponential function used in the computation of the Sinkhorn distances, which is initially applied to $\mathbf{M}$, yields saturated values that bias the overall minimization in an unexpected way that requires closer inspection.
In a similar vein, for ${\mathcal{L}}_{A}$, which uses the total-variation denoising cost, the full results complementing Table 1 are illustrated in Figure 4.
To justify the selection of the particular hyper-parameter $\lambda = {1.5}$ for computing ${\mathcal{L}}_{B}$ in Table 2, Figure 6 illustrates the evaluation results for the neighbouring values $\lambda \in \left\lbrack {{1.3},{2.0}}\right\rbrack$ compared to $\lambda = {1.5}$, where similar behavior is observed. As can be seen from Figure 6, the performance of all the representations is nearly identical, with a negligible performance boost observed for $\lambda = {1.5}$ (orange line), on average across the values of $\omega$.
Figure 3. Performance evaluation of the learned representations by ${\mathcal{L}}_{B}$ that uses the Sinkhorn distances. (top-left) Reconstruction of singing voice in SI-SDR, (top-right) oracle separation performance in SI-SDR, and (bottom) additivity objective measure. Horizontal and vertical lines denote the average and the standard deviation of the performance, respectively.
Figure 4. ${\mathcal{L}}_{A}$ using total variation denoising $\left( {\mathcal{L}}_{\mathrm{TV}}\right)$ for various values of $\omega$. (left) Reconstruction of singing voice in SI-SDR, (middle) oracle separation performance in SI-SDR, and (right) additivity objective measure.
Finally, in Figure 5 we provide additional illustrations of the representations obtained using either ${\mathcal{L}}_{A}$ or ${\mathcal{L}}_{B}$, for a random multi-track segment. For ${\mathcal{L}}_{B}$ we focus on two extreme cases of separation and additivity performance observed in Tables 1 and 2. In particular, we illustrate representations obtained for the entropy values $\lambda = {1.5}$ and $\lambda = {0.5}$, which resulted in the best additivity and masking performance, respectively. For comparison, we also display learned representations for ${\mathcal{L}}_{A}$ with $\omega = {4.0}$, for which the best separation performance for ${\mathcal{L}}_{A}$ was observed in Table 1.
(c) Learned representations for the mixture (left), the singing voice (middle), and the accompaniment (right) signals, using the $E\left( \cdot \right)$ optimized with ${\mathcal{L}}_{B}$ for ${\mathcal{L}}_{\mathrm{SK}} : \omega = {4.0},\lambda = {1.5}$.
Figure 5. An illustration of the learned representations of a single multi-track segment, using three optimized encoders $\mathcal{E}$ .
Figure 6. Performance evaluation of the learned representations by ${\mathcal{L}}_{B}$ using three entropic regularization weights $\lambda$. (top-left) Reconstruction of singing voice in SI-SDR, (top-right) oracle separation performance in SI-SDR, and (bottom) additivity objective measure. Horizontal and vertical lines denote the average and the standard deviation of the performance, respectively.
By observing Figure 5, it can be seen that the usage of ${\mathcal{L}}_{A}$ (employing the total-variation denoising cost) leads to smooth representations. However, qualitatively the representations of the mixture and the sources appear somewhat blurry, without distinct structure between the sources. Consequently, representations learned using ${\mathcal{L}}_{A}$ might impose difficulties for source separation methods. On the other hand, employing ${\mathcal{L}}_{B}$ with the Sinkhorn distances and $\lambda = {0.5}$ leads to learned representations in which, at least for the singing voice signal, a prominent structure of horizontal activity is observed. The interesting part comes when the entropic regularization weight is increased to $\lambda = {1.5}$, where the accompaniment source is now distinguished by prominent vertical activity.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/cnLz5ckGs1y/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,213 @@
# Analysis of Predictive Coding Models for Phonemic Representation Learning in Small Datasets
María Andrea Cruz Blandón ${}^{1}$ Okko Räsänen ${}^{1}{}^{2}$
## Abstract
Neural network models using predictive coding are interesting from the viewpoint of computational modelling of human language acquisition, where the objective is to understand how linguistic units could be learned from speech without any labels. Even though several promising predictive coding-based learning algorithms have been proposed in the literature, it is currently unclear how well they generalise to different languages and training dataset sizes. In addition, although such models have been shown to be effective phonemic feature learners, it is unclear whether minimisation of the predictive loss functions of these models also leads to optimal phoneme-like representations. The present study investigates the behaviour of two predictive coding models, Autoregressive Predictive Coding and Contrastive Predictive Coding, in a phoneme discrimination task (ABX task) for two languages with different dataset sizes. Our experiments show a strong correlation between the autoregressive loss and the phoneme discrimination scores with the two datasets. However, to our surprise, the CPC model shows rapid convergence already after one pass over the training data, and, on average, its representations outperform those of APC on both languages.
## 1. Introduction
According to a number of influential neurocognitive hypotheses, the human brain uses predictive mechanisms for perception of and learning from sensory data (Friston, 2005; 2010; Cope et al., 2017). Similar ideas have been adapted to unsupervised neural network models, one of them being the so-called Predictive Coding (PC) framework (see (Spratling, 2017) for a review of PC algorithms). Previously, PC has been used in image processing (Hénaff et al., 2019) and speech processing (Oord et al., 2018; Chung et al., 2019; Chung & Glass, 2019; Lian et al., 2019; Schneider et al., 2019).
The PC-based models are of special interest for low-resource speech technology, where access to labelled data is limited, but also for research on early language acquisition, where neurocognitively motivated approaches are of particular interest. In the latter, good models of human language learning should learn linguistic information from speech without any a priori linguistic specification. In both low-resource processing and modelling of human learning, the models should generalise across languages. Low-resource systems should also work with small datasets, whereas high-quality datasets used to study language learning are also often limited in size. One of the resulting challenges is the application of the same models across the different corpora, where a good system would require little if any hyperparameter optimisation across the different use cases. Since hyperparameter optimisation is time-consuming and often not feasible, the use of conventional hyperparameters is common.
In this paper, we examine the performance of PC models applied to learning phonemic representations from speech in the context of two new languages, French and Mandarin, whose corpora are also smaller compared to the original studies. The work contributes to the understanding of these models, and provides support for model selection when applying these models to real low-resource scenarios. We focus on three questions: a) is there a consistent relationship between the model loss functions and phoneme selectivity of the learned representations across different datasets, b) how much is this relationship affected by the dataset type and size, and c) how does learning in these models compare as a function of the amount of training data available?
## 2. Predictive Coding Models
In this section, we will explain the two selected PC models, APC (Chung et al., 2019) and CPC (Oord et al., 2018). The fundamental difference between the two is the optimisation problem that each model tries to solve. More specifically, APC uses an autoregressive loss that tries to predict future input features accurately, while CPC uses a contrastive loss that focuses on distinguishing real future latent representations from false ones. The authors of APC argue that there is evidence that a low contrastive loss implies the existence of a classifier with a low unimodal loss (Chung et al., 2019). In contrast, in CPC, the authors claim that unimodal losses are not convenient when we want the model to excel in capturing the relationships between the data and its context in high-dimensional data such as the time-frequency structure of speech (Oord et al., 2018).
---
${}^{1}$ Unit of Computing Sciences, Tampere University, Finland ${}^{2}$ Dept. Signal Processing and Acoustics, Aalto University, Finland. Correspondence to: María Andrea Cruz Blandón <maria.cruzblandon@tuni.fi>, Okko Räsänen <okko.rasanen@tuni.fi>.
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
---
### 2.1. Contrastive Predictive Coding
The underlying motivation for CPC is to extract information from the temporal context that serves to describe the data more effectively. To achieve this, CPC's authors propose a model that aims to maximise the mutual information between the data and its future context.
The architecture comprises two blocks. In the first block, a non-linear encoder processes the input features (the raw audio waveform in the original paper). The outputs of this block are called the latent representations, ${\mathbf{z}}_{t}$. This block is followed by an autoregressive block that produces so-called context latent representations ${\mathbf{c}}_{t}$ using the history of previous latent representations ${\mathbf{z}}_{ \leq t}$. Using ${\mathbf{c}}_{t}$, the model predicts latent representations $k$ time steps ahead as ${\mathbf{z}}^{\prime }{}_{t + k} = {\mathbf{W}}_{k}{\mathbf{c}}_{t}$, which corresponds to the predictive coding part.
To maximise the mutual information between input features and context representations, the authors introduce the InfoNCE loss. This loss is based on Noise-Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010). Assuming there is a noise distribution close to the data distribution, the model can learn by comparison. The model reaches this aim by discriminating between samples taken from the data distribution and samples taken from the noise distribution, the latter being called negative samples. In CPC, the negative samples are randomly taken from the data distribution as in (Bengio & Senécal, 2008). The InfoNCE loss corresponds to the categorical cross-entropy loss (see Eq. (1)), where a density ratio gives the score of the sample classification. The model does not need to learn the data distribution directly; instead, it uses a log-bilinear model for the density ratio, ${f}_{k}\left( {{\mathbf{x}}_{t + k},{\mathbf{c}}_{t}}\right) = \exp \left( {{\mathbf{z}}_{t + k}^{T}{\mathbf{W}}_{k}{\mathbf{c}}_{t}}\right)$.
$$
{L}_{\text{InfoNCE}} = - \log \frac{{f}_{k}\left( {{\mathbf{x}}_{t + k},{\mathbf{c}}_{t}}\right) }{\mathop{\sum }\limits_{{{\mathbf{x}}_{j} \in \mathbf{X}}}{f}_{k}\left( {{\mathbf{x}}_{j},{\mathbf{c}}_{t}}\right) } \tag{1}
$$
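A minimal PyTorch sketch of this objective for a single prediction step is shown below; tensor names, shapes, and the way negatives are provided are illustrative rather than taken from the original implementations.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_future, c_t, W_k, z_negatives):
    """InfoNCE for one prediction step k (sketch).

    z_future:    (B, D)    true latent at t+k
    c_t:         (B, D)    context representation at t
    W_k:         (D, D)    step-specific projection
    z_negatives: (B, N, D) negative latents sampled from the data
    """
    pred = c_t @ W_k                                              # prediction of z_{t+k}, (B, D)
    pos = (z_future * pred).sum(dim=-1, keepdim=True)             # positive logits z^T W_k c, (B, 1)
    neg = torch.bmm(z_negatives, pred.unsqueeze(-1)).squeeze(-1)  # negative logits, (B, N)
    logits = torch.cat([pos, neg], dim=1)                         # the positive sample is class 0
    targets = torch.zeros(logits.size(0), dtype=torch.long)
    # Cross-entropy over the logits equals -log f_k(pos) / sum_j f_k(j), i.e. Eq. (1).
    return F.cross_entropy(logits, targets)
```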
### 2.2. Autoregressive Predictive Coding
Based on the hypothesis that a low contrastive loss implies the existence of a linear classifier with a low unimodal loss (Chung et al., 2019), the authors of APC propose an autoregressive model for PC. APC is similar to autoencoder architectures, in which the target features are the same as the input features, except that in APC the target features are the input features occurring in future time steps.
The APC architecture consists of a 'PreNet' block that maps the input features (80-dim log Mel spectrograms in the original paper) to a new vector space, an autoregressive model, and a 'PostNet' block implementing the PC part. The 'PostNet' block predicts the features ${\mathbf{x}}_{t + k}$, $k$ steps in the future, using the latent representation $\left( {\mathbf{z}}_{t}\right)$ output by the autoregressive model. As a result, the model learns the probability distribution of future features. APC uses the Mean Absolute Error (MAE) as the loss function to optimise the training (see Eq. (2)), where ${\mathbf{y}}_{t + k}$ is the prediction for the signal ${\mathbf{x}}_{t + k}$. Therefore, the latent representations should encode information that helps the model to reconstruct the input features $k$ steps in the future.
$$
{L}_{\mathrm{MAE}} = \frac{\mathop{\sum }\limits_{{t = 1}}^{{N - k}}\left| {{\mathbf{x}}_{t + k} - {\mathbf{y}}_{t + k}}\right| }{N - k} \tag{2}
$$
## 3. Experimental Setup
In this section, we describe the corpora, model architectures, and the experimental setup we used to analyse the relationship between APC and CPC validation losses and their performance in a phoneme discrimination task.
### 3.1. Datasets and phoneme discrimination tasks
We tested the APC and CPC models on a subset of track 1 of the Zero Resource Speech Challenge 2020 datasets (Dunbar et al., 2017), which focuses on learning phoneme-sensitive features in an unsupervised manner. The subset contains $24\mathrm{\;h}$ of French and ${2.5}\mathrm{\;h}$ of Mandarin conversational speech for model training, and 47,096 and 21,247 one-second utterances for testing in the two languages, respectively. The training datasets are composed of a few speakers with more speech (approx. ${20}\mathrm{\;{min}}$ for Mandarin and $2\mathrm{\;h}$ for French), and several speakers with short recordings (about ${10}\mathrm{\;{min}}$ each). We tried to maximise speaker diversity (unique speakers) in the training data while maintaining a train/validation split ratio of ${80}\% /{20}\%$ as closely as possible.
In the context of the challenge, the task consists of learning speech representations that are convenient for phoneme discrimination, for which the challenge incorporates a minimal pair ABX-task (Schatz et al., 2013; 2014). The task measures the phonemic discriminability of the learned representations (Versteegh et al., 2015; Dunbar et al., 2017). In our experiments, the evaluation tool provided by the challenge was used to calculate the ABX scores. ABX scores are reported separately for within-speaker (minimal pair tokens always from the same talker) and across-speaker conditions (tokens from different speakers), where the latter better reflects speaker-independent phonemic categorisation.
### 3.2. Implementation of the PC models
As input features, 39 MFCC coefficients ($13\text{ static} + \Delta + {\Delta \Delta }$) were extracted using a window length of ${25}\mathrm{\;{ms}}$ and a window shift of ${10}\mathrm{\;{ms}}$. The data was split into $2\mathrm{\;s}$ samples. For each epoch, the order of the input data was randomised. All models were trained in a monolingual setup.
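Such features can be reproduced with standard tooling; the librosa-based sketch below is our illustration of the front-end (the sampling rate and the choice of library are assumptions, not a description of the toolchain used in the study).

```python
import numpy as np
import librosa

def extract_features(wav_path, sr=16000):
    """39-dimensional MFCC features (13 static + deltas + delta-deltas), 25 ms / 10 ms framing."""
    y, sr = librosa.load(wav_path, sr=sr)        # sr=16000 is an assumed sampling rate
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=int(0.025 * sr),       # 25 ms window
                                hop_length=int(0.010 * sr))  # 10 ms shift
    d1 = librosa.feature.delta(mfcc)
    d2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, d1, d2]).T           # shape: (frames, 39)
```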
For APC, we followed the implementation published by (Chung et al., 2019). The network consists of three fully connected layers with 128 units and ReLU activations for the 'PreNet' with ${20}\%$ dropout, three GRU layers with 512 units and residual connections (Wu et al., 2016) for the predictive part, and one convolutional layer with a kernel size of one for the 'PostNet'. We used an initial learning rate of ${10}^{-4}$ unless otherwise specified. The prediction was carried out five frames (50 ms) ahead.
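The following condensed PyTorch sketch captures the APC configuration just described (three-layer ReLU PreNet with dropout, three GRU layers with residual connections, a kernel-size-one convolutional PostNet, and an MAE loss on features five frames ahead); it is our paraphrase of the published recipe, and details such as the exact residual wiring are simplified.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class APC(nn.Module):
    def __init__(self, feat_dim=39, prenet_dim=128, rnn_dim=512):
        super().__init__()
        self.prenet = nn.Sequential(
            nn.Linear(feat_dim, prenet_dim), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(prenet_dim, prenet_dim), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(prenet_dim, prenet_dim), nn.ReLU(), nn.Dropout(0.2))
        self.rnns = nn.ModuleList([nn.GRU(prenet_dim if i == 0 else rnn_dim,
                                          rnn_dim, batch_first=True) for i in range(3)])
        self.postnet = nn.Conv1d(rnn_dim, feat_dim, kernel_size=1)

    def forward(self, x):                      # x: (B, T, feat_dim)
        h = self.prenet(x)
        for i, rnn in enumerate(self.rnns):
            out, _ = rnn(h)
            h = out + h if i > 0 else out      # residual connections between the GRU layers
        return self.postnet(h.transpose(1, 2)).transpose(1, 2)  # predicted features, (B, T, feat_dim)

def apc_loss(model, x, k=5):
    """MAE between the predictions and the input features shifted k frames ahead."""
    pred = model(x[:, :-k])                    # predict from the first T-k frames
    target = x[:, k:]                          # features k frames (50 ms) in the future
    return F.l1_loss(pred, target)
```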
For CPC, we followed the implementation provided by (Schneider et al., 2019) for the contrastive loss calculation. We also followed the adaptation of the encoder proposed by (Chung et al., 2019) for using acoustic features as input features. The architecture consists of three fully connected layers with 512 units and ReLU activations for the encoder, and one GRU layer with 256 units for the autoregressive model, with both blocks trained with a dropout of ${20}\%$. As in (Schneider et al., 2019), we used ten negative samples taken from the batch and predicted 12 frames ahead, that is, ${120}\mathrm{\;{ms}}$. We used an initial learning rate of ${10}^{-3}$.
We trained all models using a batch size of 32 and the Adam optimiser (Kingma & Ba, 2015). For all models, we used PCA to reduce the dimension of the latent vectors used for the ABX task (maintaining 95% of the variance), as the original dimensionality was too high for the ABX-scoring tool to handle. The reported ABX-scores correspond to the extracted latent representations ${\mathbf{z}}_{t}$ for both models. Although the context latent representations ${\mathbf{c}}_{t}$ were also analysed for the CPC model, we only report the latent representations, as there were no notable differences between ${\mathbf{c}}_{t}$ and ${\mathbf{z}}_{t}$.
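The dimensionality reduction step can be performed as in the scikit-learn sketch below, retaining the number of principal components needed to explain 95% of the variance; the array and file names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA

# latents: (num_frames, latent_dim) array of z_t vectors pooled over the test utterances.
latents = np.load("latents.npy")                       # hypothetical file name
pca = PCA(n_components=0.95, svd_solver="full")        # keep 95% of the variance
latents_reduced = pca.fit_transform(latents)
print(latents_reduced.shape, pca.n_components_)        # reduced shape and number of components kept
```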
### 3.3. Experiments
To assess the correlation between the validation loss and the ABX scores, the APC and CPC models were trained for 100 epochs, saving the models every ten epochs for ABX-scoring ('APC-1' and 'CPC-1'). Each model was trained three times with random initialisation to consider the influence of the initial parameters. In the case of CPC, we ran an additional experiment ('CPC-2') to investigate the behaviour of the model during the first ten epochs in more detail, saving after each of the first 10 epochs and then every 10 epochs, and running the experiment twice.
Table 1. Percentage of the French dataset used for training. The number of hours that the percentage represents, and the number of samples for the training set (T.) and for the validation set (V.)
<table><tr><td>PERCENTAGE</td><td>HOURS</td><td>T. SAMPLES</td><td>V. SAMPLES</td></tr><tr><td>100</td><td>25.1</td><td>36,031</td><td>9,182</td></tr><tr><td>75</td><td>18.8</td><td>27,023</td><td>6,886</td></tr><tr><td>50</td><td>12.6</td><td>18,015</td><td>4,591</td></tr><tr><td>25</td><td>6.3</td><td>9,007</td><td>2,295</td></tr></table>
To calculate the correlation, Pearson's correlation coefficient (r) was adopted; however, in cases where a linear correlation was not evident in the scatter plot, we also calculated Spearman's rank correlation coefficient $\left( {r}_{s}\right)$. Additionally, the significance of the correlation coefficients was validated by performing a hypothesis test for $r$ and by using the critical value (Zar, 1972) for ${r}_{s}$, in both cases with a significance level of $\alpha = {0.05}$ (critical values equal to ${r}_{s} = {0.678}$ and $t = {1.86}$). The $t$ test statistic for $r$ was calculated with the formula $t = r\sqrt{n - 2}/\sqrt{1 - {r}^{2}}$, where $n$ is the number of points used for calculating $r$.
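The statistics can be reproduced along these lines with SciPy (a sketch with dummy paired observations; the real analysis uses the checkpoint losses and ABX scores).

```python
import numpy as np
from scipy import stats

# Dummy paired observations; replace with the validation losses and ABX scores per checkpoint.
val_loss = np.array([1.20, 1.05, 0.98, 0.93, 0.90, 0.88, 0.87, 0.86, 0.85, 0.85])
abx = np.array([22.1, 20.5, 19.8, 19.2, 18.9, 18.8, 18.7, 18.7, 18.6, 18.6])

r, _ = stats.pearsonr(val_loss, abx)
rs, _ = stats.spearmanr(val_loss, abx)
n = len(val_loss)
t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)   # t statistic used in the hypothesis test on r
print(f"r={r:.3f}, r_s={rs:.3f}, t={t:.3f}")
```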
Regarding the relationship between the dataset size and the performance of the predictive model, we trained four models, varying the percentage of samples used as training data from 100% to 25% in decrements of 25%. For this analysis, the French dataset was employed; see Table 1.
## 4. Results
Fig. 1(a) shows the validation loss and the ABX-scores of the APC model for the French and Mandarin datasets (APC-1). A striking correlation between the two values can be seen for the two languages, although the slope for the Mandarin data is higher than for the French data. There is also more variability in the French runs ($r = {0.817} \pm {0.076}$ for ABX across-speaker; $r = {0.725} \pm {0.159}$ for ABX within-speaker) than in the Mandarin dataset ($r = {0.991} \pm {0.005}$ for ABX across-speaker; $r = {0.978} \pm {0.009}$ for ABX within-speaker). Since the French training started to overfit already after 20 epochs (with increasing validation loss), we re-ran these experiments for the French dataset using a lower learning rate (${lr} = {10}^{-5}$) (APC-2). As a result, the variability among the runs was reduced ($r = {0.997} \pm {0.001}$ for ABX across-speaker and $r = {0.809} \pm {0.248}$ for ABX within-speaker; see Supplementary Material, Fig. S1, for the scatter plot).
In the first experiment for CPC (CPC-1), there was little relative variation in both the InfoNCE loss and the ABX-scores. A closer analysis revealed that the validation loss was decreasing with more epochs, whereas the ABX-scores were oscillating with small changes (standard deviation for the three runs: ${SD} = {0.217}$ for ABX across-speaker for Mandarin; ${SD} = {0.318}$ for ABX within-speaker for Mandarin; ${SD} = {0.145}$ for ABX across-speaker for French; ${SD} = {0.174}$ for ABX within-speaker for French). This behaviour suggested that the model was converging to phoneme-like representations already in the first ten epochs. To evaluate this hypothesis, we ran a second experiment, also evaluating all the models from the first ten epochs. To our surprise, the CPC model shows a rapid convergence after one pass over the training data (see the ABX-scores for larger values of the validation loss). The oscillation pattern observed in the previous experiments persists in later epochs, and the change in the overall ABX-score is nearly zero in almost all cases, except for the Mandarin ABX across-speaker condition, where a slight improvement is observed with more training. Notably, the CPC ABX performance after one epoch is already comparable to the best APC results.
Figure 1. Scatter plots of the APC-1 and CPC-2 models ABX performance as a function of validation loss, including a detailed picture of the first ten epochs for the CPC model. Symbol markers: (+) First run, (-) Second run, and (*) Third run.
Table 2 lists the correlation coefficients calculated for the averaged performance of the APC model (Mandarin APC-1 and French APC-2). Since the relationship between the InfoNCE loss and the ABX-scores is highly variable for the CPC model across the runs, we calculated the correlation coefficients for each run (CPC-1 (run id)). All APC correlation coefficients were found to be significant with significance criterion of $\alpha = {0.05}(t = {42.260}$ for APC-2 FR ABX across-speaker; $t = {6.637}$ for APC-2 FR ABX within-speaker; $t = {22.923}$ for APC-2 MA ABX across-speaker; $t = {13.461}$ for APC-2 MA ABX within-speaker). The CPC model, on the other hand, shows both positive and negative correlation for the same language (see, e.g., $r$ of ABX across-speaker score for CPC-1 (1,3) MA). This remarkable discrepancy highlights the variability between runs when the model has rapidly converged.
Table 3 shows the correlation coefficients obtained for the CPC model for the first ten epochs (CPC-2) for both languages. The relationship between the validation loss and the ABX across-speaker score shown in Fig. 1(b) is also reflected in the correlation coefficients obtained. Both $r$ and ${r}_{s}$ are significant and exhibit a strong positive correlation throughout the training ($r\left( 8\right) = {0.972}, \rho < {0.05}$ for the first run and $r\left( 8\right) = {0.960}, \rho < {0.05}$ for the second run). The strong correlation for the ABX across-speaker score also points to a feature of the InfoNCE loss that is worth noting, although it was exhibited for some runs only. The selection of the negative samples could have an impact on the information that is favoured in the representations (Oord et al., 2018; Chung et al., 2019). The rationale behind this is that by using the same utterance to extract the negative samples, the information about speaker characteristics will not be relevant for distinguishing true and negative samples, thus encouraging phonemic information. We ran additional experiments to evaluate whether the ratio of change (relative proportion of change between consecutive epochs) of the validation loss was correlated with the ABX-scores, but our results did not provide statistical evidence of such a correlation.
Table 2. Correlation coefficients between the validation loss and the ABX-scores for the French (FR) and Mandarin (MA) datasets. Pearson's (r) and Spearman's rank $\left( {r}_{s}\right)$ correlation coefficients are reported for the ABX-scores. (*) $\rho < {0.05}$. Analysis of APC averaged performance and individual CPC runs.
<table><tr><td rowspan="2">MODEL</td><td colspan="2">ACROSS-SPEAKER</td><td colspan="2">WITHIN-SPEAKER</td></tr><tr><td>$r$</td><td>${r}_{s}$</td><td>$r$</td><td>${r}_{s}$</td></tr><tr><td>APC-2 FR</td><td>${\mathbf{{0.998}}}^{ * }$</td><td>${1.000}^{ * }$</td><td>${\mathbf{{0.920}}}^{ * }$</td><td>${0.879}^{ * }$</td></tr><tr><td>APC-1 MA</td><td>${\mathbf{{0.992}}}^{ * }$</td><td>${0.903}^{ * }$</td><td>${\mathbf{{0.979}}}^{ * }$</td><td>${0.867}^{ * }$</td></tr><tr><td>CPC-1 (1) FR</td><td>-0.202</td><td>-0.115</td><td>$- {0.703}^{ * }$</td><td>$- {0.770}^{ * }$</td></tr><tr><td>CPC-1 (2) FR</td><td>${0.920}^{ * }$</td><td>${0.867}^{ * }$</td><td>${0.836}^{ * }$</td><td>0.588</td></tr><tr><td>CPC-1 (3) FR</td><td>-0.511</td><td>$- {0.661}^{ * }$</td><td>-0.228</td><td>-0.055</td></tr><tr><td>CPC-1 (1) MA</td><td>$- {0.705}^{ * }$</td><td>-0.552</td><td>-0.525</td><td>$- {0.648}^{ * }$</td></tr><tr><td>CPC-1 (2) MA</td><td>0.282</td><td>0.006</td><td>$- {0.759}^{ * }$</td><td>$- {0.782}^{ * }$</td></tr><tr><td>CPC-1 (3) MA</td><td>${0.913}^{ * }$</td><td>${0.782}^{ * }$</td><td>0.310</td><td>0.430</td></tr></table>
As for the dataset size comparison, Table 4 shows the ABX-scores obtained after training the APC model with different dataset sizes for the French language. Unlike earlier, the model was trained with a learning rate of ${10}^{-5}$, as this was found to improve training stability in the earlier experiments. Considering the strong correlation between MAE and the ABX-scores, each model was chosen based on the lowest validation loss. The differences in the ABX-scores are relatively negligible when taking into account that the models were trained for a maximum of 100 epochs (usually with the lowest validation loss value). This implies that the models could still improve their representations with more training. That being said, with only ${25}\%$ of the total data, that is, ${6.3}\mathrm{\;h}$ of the French dataset, the APC model already converged with the hyperparameters defined here. Contrary to the idea that more training data improves performance, this result shows that hyperparameter tuning would be more beneficial in this case than increasing the training data. For CPC, this analysis was problematic because we could not use the validation loss as the selection criterion, and we could not conduct the experiments in time. However, see the supplementary material for an upper bound of the true performance assuming rapid convergence.
Table 3. Correlation coefficients for the first ten epochs of the CPC model. (*) $\rho \geq {0.05}$ .
<table><tr><td rowspan="2">MODEL</td><td colspan="2">ACROSS-SPEAKER</td><td colspan="2">WITHIN-SPEAKER</td></tr><tr><td>$r$</td><td>${r}_{s}$</td><td>$r$</td><td>${r}_{s}$</td></tr><tr><td>CPC-2 (1) FR</td><td>${0.218}^{ * }$</td><td>${0.219}^{ * }$</td><td>-0.869</td><td>-0.851</td></tr><tr><td>CPC-2 (2) FR</td><td>0.795</td><td>${0.255}^{ * }$</td><td>$- {0.323}^{ * }$</td><td>$- {0.608}^{ * }$</td></tr><tr><td>CPC-2 (1) MA</td><td>0.948</td><td>0.988</td><td>$- {0.406}^{ * }$</td><td>${0.285}^{ * }$</td></tr><tr><td>CPC-2 (2) MA</td><td>0.957</td><td>0.964</td><td>$- {0.587}^{ * }$</td><td>$- {0.479}^{ * }$</td></tr></table>
Table 4. Performance of the APC model as a function of the dataset size.
<table><tr><td>PERCENTAGE</td><td>ACROSS-SPEAKER</td><td>WITHIN-SPEAKER</td></tr><tr><td>100</td><td>19.265</td><td>12.790</td></tr><tr><td>75</td><td>19.921</td><td>13.202</td></tr><tr><td>50</td><td>19.878</td><td>12.879</td></tr><tr><td>25</td><td>20.358</td><td>13.074</td></tr><tr><td>$\bar{x} \pm {SD}$</td><td>${19.856} \pm {0.449}$</td><td>${12.986} \pm {0.186}$</td></tr></table>
As a final comparison, Table 5 lists the best ABX-scores obtained for the APC-1 and CPC-1 models, and the training epoch at which the best model was obtained. We also report the CPC-2 model after only one epoch of training to demonstrate its fast learning. MFCC-based ABX-scores are also reported as a baseline. Both PC models improved the ABX-scores in comparison with the baseline, except for the Mandarin ABX within-speaker score. The CPC model outperforms the APC model in both languages and on both ABX-scores.
## 5. Discussion and Conclusions
In this paper, we analysed the behaviour of PC models in the context of phoneme discrimination tasks with relatively small datasets. Our experiments confirmed that APC and CPC models are also suitable for relatively small corpora. In the original papers, the APC and CPC models were trained on 100- and 360-hour subsets of Librispeech (Panayotov et al., 2015), respectively. Our results show that these models also learn phoneme-discriminating representations from much smaller corpora, down to a mere 2.5 hours of speech.
Table 5. Best ABX-scores obtained for the APC and CPC models among all three runs of the first experiment, and ABX-scores of the CPC model at the first epoch of the second experiment. The lowest scores are shown in bold.
<table><tr><td>MODEL</td><td>EPOCH</td><td>ACROSS-S</td><td>WITHIN-S</td></tr><tr><td>APC-1 FR</td><td>10</td><td>18.698</td><td>11.740</td></tr><tr><td>APC-1 MA</td><td>100</td><td>12.624</td><td>10.197</td></tr><tr><td>CPC-1 FR</td><td>10</td><td>17.500</td><td>9.791</td></tr><tr><td>CPC-1 MA</td><td>20</td><td>11.837</td><td>9.185</td></tr><tr><td>CPC-2 FR</td><td>1</td><td>17.463</td><td>9.854</td></tr><tr><td>CPC-2 MA</td><td>1</td><td>13.058</td><td>9.202</td></tr><tr><td>MFCC FR</td><td>-</td><td>21.050</td><td>10.150</td></tr><tr><td>MFCC MA</td><td>-</td><td>14.584</td><td>9.140</td></tr></table>
A very high and consistent correlation $\left( {r \approx {0.97}}\right)$ between the MAE loss and ABX scores was found for the APC model across the two datasets. However, this correlation was affected by the sampling of epochs for the ABX evaluation, where a large proportion of the scores were obtained after the model had already saturated in performance. Despite this effect, which could easily be avoided by using early stopping, the APC behaves similarly for both datasets.
In contrast, there was no significant correlation between the validation loss and the ABX scores for the CPC model. In fact, our results suggest that the CPC model was rapidly converging to effective phoneme-sensitive representations already during the first ten epochs. After this, the model continues learning representations that improve the predictive loss, but this is not reflected in better phonemic representations. The latter requires further experiments to understand the underpinnings of this behaviour. Interestingly, the very good CPC performance already after one pass over the training data resembles the conditions of human language acquisition, where a child never has access to the same input twice.
Finally, APC results are especially important as they could be interpreted as evidence of adaptability to different dataset sizes and robustness to different languages; the validation loss can be employed for selecting the model when extracting phonemic features for different datasets. On the other hand, although the CPC model obtained the best ABX scores in early iterations, its validation loss is less directly linked with the phonemic nature of the learned representations in the case of small datasets.
## Acknowledgements
This study was funded by Academy of Finland grants no. 314602 and 320053.
## S1. Supplementary Material
### S1.1. Code and statistical data
Our implementation of the APC and CPC models, together with all the data points and statistical metrics, can be found at https://github.com/SPEECHCOG/pc_models_analysis
### S1.2. Scatter plots
Figure S1 shows the APC-2 experiment for the French dataset. Figure S2 illustrates the CPC-1 experiment, with three runs for each language and 100 epochs per run, and Figure S3 shows a detailed view of the ABX across-speaker scores over the epochs for the three runs on the French dataset.
Figure S1. Scatter plot of the French APC model ABX performance as a function of the validation loss (APC-2). Model trained with ${lr} = {10}^{-5}$. Symbol markers: (+) First run, (-) Second run, and (*) Third run.
#### S1.3. CPC dataset size experiment
In the case of the CPC model, there was not a significant correlation between the validation loss and the ABX-scores. As a consequence, it was less accurate to use the validation loss as the selection criterion for the model than it was for the APC model. To offer an upper bound of the real performance of the CPC model, we ran the dataset size experiment (see subsection 3.3) assuming rapid convergence. For this experiment, we used the same architecture as explained in subsection 3.2. Table S1 shows the ABX-scores obtained after training the model for ten epochs with different dataset sizes for the French language. As with APC, by using roughly six hours of the French dataset (25%), the model obtained ABX-scores comparable to those obtained with the full dataset.
Figure S2. Scatter plot of the CPC model ABX performance as a function of the validation loss (CPC-1). Symbol markers: (+) First run, $\left( \cdot \right)$ Second run, and $\left( *\right)$ Third run.
Figure S3. ABX across-speaker scores as a function of the epoch for the three runs of the French CPC model (CPC-1). Symbol markers: (+) First run, (-) Second run, and (*) Third run.
On the other hand, unlike for the APC model, the ABX across-speaker score shows a slight improvement as the dataset size increases. The InfoNCE loss benefits from more data for the comparison of negative and true samples, resulting in more speaker-independent phoneme representations. However, notice that the differences in the ABX within-speaker scores are relatively negligible. This behaviour is comparable to the results for the Mandarin CPC-2 models, where the ABX across-speaker score was improving over time, whereas the ABX within-speaker score was oscillating around the same value (see Figure 1(b)). Further experiments are necessary to understand this behaviour.
Table S1. Performance of the CPC model as a function of the dataset size. Assuming a rapid convergence in 10 epochs.
<table><tr><td>PERCENTAGE</td><td>ACROSS-SPEAKER</td><td>WITHIN-SPEAKER</td></tr><tr><td>100</td><td>16.872</td><td>10.325</td></tr><tr><td>75</td><td>17.535</td><td>11.166</td></tr><tr><td>50</td><td>17.778</td><td>10.361</td></tr><tr><td>25</td><td>18.406</td><td>10.478</td></tr><tr><td>$\bar{x} \pm {SD}$</td><td>${17.648} \pm {0.634}$</td><td>${10.583} \pm {0.394}$</td></tr></table>
#### S1.4. APC with Mean Square Error loss
To explore the behaviour of the APC model with a different unimodal loss, we ran an extra experiment utilising the Mean Square Error (MSE) loss for training the model. Similar to previous experiments, we ran the model three times for 100 epochs and evaluated the performance on the ABX task every ten epochs for the Mandarin dataset.
Figure S4 shows the APC model ABX performance as a function of the MSE loss. The behaviour is comparable to APC with the MAE loss. The Pearson's correlation coefficients are $r = {0.953} \pm {0.014}, \rho < {0.05}$ for the ABX across-speaker score and $r = {0.908} \pm {0.005}, \rho < {0.05}$ for the ABX within-speaker score. These results expose a high correlation between the ABX-scores and the MSE loss. In order to compare the correlation coefficients of the two APC models (with MAE loss and with MSE loss), we performed a Z-test. We set the level of significance to $\alpha = {0.05}$, indicating a critical value of $\pm {1.96}$, and employed Fisher's transformation for the correlation coefficients of the averaged performance (APC (MSE): $r = {0.956}$ for ABX across-speaker and $r = {0.915}$ for ABX within-speaker; APC (MAE): $r = {0.992}$ for ABX across-speaker and $r = {0.979}$ for ABX within-speaker; all coefficients with $\rho < {0.05}$). The observed Z values are ${Z}_{\text{obs}} = -{1.672}$ for ABX across-speaker and ${Z}_{\text{obs}} = -{1.326}$ for ABX within-speaker. We did not find sufficient evidence to conclude a significant difference between the correlation coefficients of the APC (MAE) model and the APC (MSE) model.
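The comparison of two independent correlation coefficients via Fisher's transformation follows the standard recipe sketched below; the values in the usage line are dummy numbers, not the sample sizes behind the reported $Z$ statistics.

```python
import numpy as np

def fisher_z_compare(r1, n1, r2, n2):
    """Z statistic for the difference between two independent Pearson correlations."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher transformation of each coefficient
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of z1 - z2
    return (z1 - z2) / se

# Usage with dummy coefficients and sample sizes.
print(fisher_z_compare(0.90, 10, 0.95, 10))
```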
Figure S4. Scatter plot of the Mandarin APC model ABX performance as a function of the Mean Square Error loss. Model trained with ${lr} = {10}^{-4}$. Symbol markers: (+) First run, (-) Second run, and (*) Third run.
## References
Bengio, Y. and Senécal, J. S. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4): 713-722, 2008. ISSN 10459227. doi: 10.1109/TNN.2007.912312.
Chung, Y.-A. and Glass, J. Generative Pre-Training for Speech with Autoregressive Predictive Coding. ArXiv, abs/1910.1, 2019.
Chung, Y. A., Hsu, W. N., Tang, H., and Glass, J. An unsupervised autoregressive model for speech representation learning. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 146-150, 2019. ISSN 19909772. doi: 10.21437/Interspeech.2019-1473.
Cope, T. E., Sohoglu, E., Sedley, W., Patterson, K., Jones, P. S., Wiggins, J., Dawson, C., Grube, M., Carlyon, R. P., Griffiths, T. D., Davis, M. H., and Rowe, J. B. Evidence for causal top-down frontal contributions to predictive processes in speech perception. Nature Communications, 8(1), 2017. ISSN 20411723. doi: 10.1038/s41467-017-01958-7.
Dunbar, E., Cao, X. N., Benjumea, J., Karadayi, J., Bernard, M., Besacier, L., Anguera, X., and Dupoux, E. The zero resource speech challenge 2017. 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pp. 323-330, 2017. doi: 10.1109/ASRU.2017.8268953.
Friston, K. A theory of cortical responses. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 360(1456):815-836, 4 2005. ISSN 09628436. doi: 10.1098/rstb.2005.1622.
Friston, K. The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11:127-138, 2010. ISSN 1471003X. doi: 10.1038/nrn2787.
Gutmann, M. and Hyvärinen, A. Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 9:297-304, 2010. ISSN 15324435.
Hénaff, O. J., Srinivas, A., De Fauw, J., Razavi, A., Doersch, C., Eslami, S. M. A., and Oord, A. v. d. Data-Efficient Image Recognition with Contrastive Predictive Coding. CoRR, abs/1905.0, 2019.
Kingma, D. P. and Ba, J. Adam: A Method for Stochastic Optimization. International Conference on Learning Representations, 2015.
Lian, Z., Tao, J., Liu, B., and Huang, J. Unsupervised representation learning with future observation prediction for speech emotion recognition. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 3840-3844, 2019. ISSN 19909772. doi: 10.21437/Interspeech.2019-1582.
Oord, A. v. d., Li, Y., and Vinyals, O. Representation Learning with Contrastive Predictive Coding. CoRR, abs/1807.0, 2018.
Panayotov, V., Chen, G., Povey, D., and Khudanpur, S. Librispeech: An ASR corpus based on public domain audio books. 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5206-5210, 2015. ISSN 15206149. doi: 10.1109/ICASSP.2015.7178964.
Schatz, T., Peddinti, V., Bach, F., Jansen, A., Hermansky, H., and Dupoux, E. Evaluating speech features with the minimal-pair ABX task: Analysis of the classical MFC/PLP pipeline. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 1781-1785, 2013. ISSN 19909772.
Schatz, T., Peddinti, V., Cao, X. N., Bach, F., Hermansky, H., and Dupoux, E. Evaluating speech features with the Minimal-Pair ABX task (II): Resistance to noise. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 915-919, 2014. ISSN 19909772.
Schneider, S., Baevski, A., Collobert, R., and Auli, M. wav2vec: Unsupervised Pre-training for Speech Recognition. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 3465-3469, 2019.
Spratling, M. W. A review of predictive coding algorithms. Brain and Cognition, 112:92-97, 2017. ISSN 10902147. doi: 10.1016/j.bandc.2015.11.003.
Versteegh, M., Thiollière, R., Schatz, T., Cao, X. N., Anguera, X., Jansen, A., and Dupoux, E. The zero resource speech challenge 2015. Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH, pp. 3169-3173, 2015. ISSN 19909772.
Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., Klingner, J., Shah, A., Johnson, M., Liu, X., Kaiser, Ł., Gouws, S., Kato, Y., Kudo, T., Kazawa, H., Stevens, K., Kurian, G., Patil, N., Wang, W., Young, C., Smith, J., Riesa, J., Rudnick, A., Vinyals, O., Corrado, G., Hughes, M., and Dean, J. Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. CoRR, abs/1609.0, 2016.
Zar, J. H. Significance testing of the spearman rank correlation coefficient. Journal of the American Statistical Association, 67(339):578-580, 1972. ISSN 1537274X. doi: 10.1080/01621459.1972.10481251.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/cnLz5ckGs1y/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
| 1 |
+
§ ANALYSIS OF PREDICTIVE CODING MODELS FOR PHONEMIC REPRESENTATION LEARNING IN SMALL DATASETS
|
| 2 |
+
|
| 3 |
+
María Andrea Cruz Blandón ${}^{1}$ Okko Räsänen ${}^{1}{}^{2}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
Neural network models using predictive coding are interesting from the viewpoint of computational modelling of human language acquisition, where the objective is to understand how linguistic units could be learned from speech without any labels. Even though several promising predictive coding -based learning algorithms have been proposed in the literature, it is currently unclear how well they generalise to different languages and training dataset sizes. In addition, despite that such models have shown to be effective phonemic feature learners, it is unclear whether minimisation of the predictive loss functions of these models also leads to optimal phoneme-like representations. The present study investigates the behaviour of two predictive coding models, Autoregressive Predictive Coding and Contrastive Predictive Coding, in a phoneme discrimination task (ABX task) for two languages with different dataset sizes. Our experiments show a strong correlation between the autoregressive loss and the phoneme discrimination scores with the two datasets. However, to our surprise, the CPC model shows rapid convergence already after one pass over the training data, and, on average, its representations outperform those of APC on both languages.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
According to a number of influential neurocognitive hypotheses, the human brain uses predictive mechanisms for perception of and learning from sensory data (Friston, 2005; 2010; Cope et al., 2017). Similar ideas have been adapted to unsupervised neural network models, one of them the so-called Predictive Coding (PC) framework (see (Spratling, 2017) for a review of PC algorithms). Previously, PC has been used in image processing (Hénaff et al., 2019) and speech processing (Oord et al., 2018; Chung et al., 2019; Chung & Glass, 2019; Lian et al., 2019; Schneider et al., 2019).
|
| 12 |
+
|
| 13 |
+
The PC-based models are of special interest for low-resource speech technology, where access to labelled data is limited, but also for research on early language acquisition, where neurocognitively motivated approaches are of particular interest. In the latter, good models of human language learning should learn linguistic information from speech without any a priori linguistic specification. In both low-resource processing and modelling of human learning, the models should generalise across languages. Low-resource systems should also work with small datasets, whereas high-quality datasets used to study language learning are also often limited in size. One of the resulting challenges is the application of the same models across the different corpora, where a good system would require little if any hyperparameter optimi-sation across the different use cases. Since hyperparameter optimisation is time-consuming and often not feasible, the use of conventional hyperparameters is common.
|
| 14 |
+
|
| 15 |
+
In this paper, we examine the performance of PC models applied to learn of phonemic representations from speech in the context of two new languages, French and Mandarin, whose corpora are also smaller compared to the original studies. The work contributes to the understanding of these models, and provides support for model selection when applying these models to real low-resource scenarios. We focus on three questions: a) is there a consistent relationship between the model loss functions and phoneme selectivity of the learned representations across different datasets, b) how much is this relationship affected by the dataset type and size, and c) how does learning in these models compare a function of the amount of training data available?
|
| 16 |
+
|
| 17 |
+
§ 2. PREDICTIVE CODING MODELS
|
| 18 |
+
|
| 19 |
+
In this section, we will explain the two selected PC models, APC (Chung et al., 2019) and CPC (Oord et al., 2018). The fundamental difference between the two is the optimisation problem that each model tries to solve. More specifically, APC uses an autoregressive loss trying to predict future input features accurately while CPC uses a contrastive loss that focuses on distinguishing real future latent representations from false future. The authors of APC argue that there is evidence that a low contrastive loss implies the existence of a classifier with a low unimodal loss (Chung et al., 2019). In contrast, in CPC, the authors claim that unimodal losses are not convenient when we want the model to excel in capturing the relationships between the data and its context in high dimensional data such as time-frequency structure of speech (Oord et al., 2018).
|
| 20 |
+
|
| 21 |
+
${}^{1}$ Unit of Computing Sciences, Tampere University, Finland ${}^{2}$ Dept. Signal Processing and Acoustics, Aalto University, Finland. Correspondence to: María Andrea Cruz Blandón <maria.cruzblandon@tuni.fi>, Okko Räsänen <okko.rasanen@tuni.fi>.
|
| 22 |
+
|
| 23 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 24 |
+
|
| 25 |
+
§ 2.1. CONTRASTIVE PREDICTIVE CODING
|
| 26 |
+
|
| 27 |
+
The underlying motivation for CPC is to extract information from the temporal context that serves to describe the data more effectively. To achieve this, CPC's authors propose a model that aims to maximise the mutual information between the data and its future context.
|
| 28 |
+
|
| 29 |
+
The architecture comprises two blocks. In the first block, a non-linear encoder processes the input features (raw audio waveform in the original paper). The outputs of this block are called the latent representations, ${\mathbf{z}}_{t}$ . This block is followed by an autoregressive block that produces so-called context latent representations ${\mathbf{c}}_{t}$ using the history of previous latent representations ${\mathbf{z}}_{ \leq t}$ . Using ${\mathbf{c}}_{t}$ , the model predicts latent representations $k$ time steps ahead using ${\mathbf{z}}^{\prime }{}_{t + k} = {\mathbf{W}}_{k}{\mathbf{c}}_{t}$ , which correspond to the predictive coding part.
|
| 30 |
+
|
| 31 |
+
To maximise the mutual information between input features and context representations, the authors introduce InfoNCE loss. This loss is based on Noise-Contrastive Estimation (NCE) (Gutmann & Hyvärinen, 2010). Assuming there is a noise distribution close to the data distribution, the model can learn by comparison. The model reaches this aim by discriminating the samples taken from the data distribution and the ones taken from the noise distribution, which are called negative samples. In CPC, the negative samples are randomly taken from the data distribution as in (Bengio & Senécal, 2008). The InfoNCE loss corresponds to the categorical cross-entropy loss (see Eq. (1)), where a density ratio gives the score of the sample classification. The model does not require to learn the probabilistic data distribution directly, instead uses a log-bilinear model for the density ratio, ${f}_{k}\left( {{\mathbf{x}}_{t + k},{\mathbf{c}}_{t}}\right) = \exp \left( {{\mathbf{z}}_{t + k}^{T}{\mathbf{W}}_{k}{\mathbf{c}}_{t}}\right)$ .
|
| 32 |
+
|
| 33 |
+
$$
L_{\text{InfoNCE}} = - \log \frac{f_{k}\left( \mathbf{x}_{t+k},\mathbf{c}_{t}\right)}{\sum_{\mathbf{x}_{j} \in \mathbf{X}} f_{k}\left( \mathbf{x}_{j},\mathbf{c}_{t}\right)} \tag{1}
$$
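The following is a minimal sketch of how Eq. (1) can be computed with the log-bilinear score described above; it is not the authors' implementation, and the tensor names, shapes, and the choice of drawing negatives from the same batch are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_future, c_t, W_k):
    """InfoNCE for one prediction step k (sketch).

    z_future: (batch, dim) true latent representations z_{t+k}
    c_t:      (batch, dim) context representations c_t
    W_k:      (dim, dim)   log-bilinear projection for step k

    Every other item in the batch serves as a negative sample,
    i.e. negatives are drawn from the data distribution.
    """
    pred = c_t @ W_k.T                      # predicted latents z'_{t+k} = W_k c_t
    scores = z_future @ pred.T              # scores[i, j] = z_{t+k,i}^T W_k c_{t,j}
    targets = torch.arange(scores.size(0))  # the positive pair lies on the diagonal
    # Cross-entropy over the candidate set realises the -log softmax of Eq. (1)
    return F.cross_entropy(scores, targets)

# Illustrative usage with random tensors
loss = info_nce_loss(torch.randn(10, 256), torch.randn(10, 256), torch.randn(256, 256))
```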
|
| 36 |
+
|
| 37 |
+
§ 2.2. AUTOREGRESSIVE PREDICTIVE CODING
|
| 38 |
+
|
| 39 |
+
Based on the hypothesis that a low contrastive loss implies the existence of a linear classifier with a low unimodal loss (Chung et al., 2019), the authors of APC propose an autoregressive model for the PC. APC is similar to autoencoder architectures, in which the target features are the same as the input features, except that in APC the targets are the input features occurring at future time steps.
|
| 40 |
+
|
| 41 |
+
The APC architecture consists of a 'PreNet' block that maps the input features (80-dim log Mel spectrograms in the original paper) to a new vector space, an autoregressive model, and a 'PostNet' block implementing the PC part. The 'PostNet' block predicts the input features $k$ steps ahead, $\mathbf{x}_{t+k}$, using the latent representation $\mathbf{z}_{t}$ output by the autoregressive model. As a result, the model learns the probability distribution of future features. APC uses the Mean Absolute Error (MAE) as the training loss (see Eq. (2)), where $\mathbf{y}_{t+k}$ is the prediction for the signal $\mathbf{x}_{t+k}$. The latent representations should therefore encode information that helps the model to reconstruct the input features $k$ steps in the future.
|
| 42 |
+
|
| 43 |
+
$$
L_{\mathrm{MAE}} = \frac{\sum_{t = 1}^{N - k}\left| \mathbf{x}_{t+k} - \mathbf{y}_{t+k}\right|}{N - k} \tag{2}
$$
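As a minimal sketch of the APC objective under Eq. (2): shifting the input sequence by $k$ frames yields the regression targets. The module sizes and names below are illustrative, not the released code, and the loss is averaged over feature dimensions as well as time.

```python
import torch
import torch.nn as nn

class TinyAPC(nn.Module):
    """Toy autoregressive predictive model: GRU encoder + linear 'PostNet'."""
    def __init__(self, feat_dim=39, hidden=512):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.postnet = nn.Linear(hidden, feat_dim)

    def forward(self, x):
        z, _ = self.rnn(x)          # latent representations z_t
        return self.postnet(z), z   # predictions y, latents z

def apc_loss(model, x, k=5):
    """Mean absolute error between predictions and the features k steps ahead."""
    y, _ = model(x)
    # The prediction made at time t targets the input at time t + k
    return torch.mean(torch.abs(x[:, k:, :] - y[:, :-k, :]))

x = torch.randn(8, 200, 39)        # a batch of 2 s MFCC sequences (illustrative)
loss = apc_loss(TinyAPC(), x, k=5)
```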
|
| 46 |
+
|
| 47 |
+
§ 3. EXPERIMENTAL SETUP
|
| 48 |
+
|
| 49 |
+
In this section, we describe the corpora, model architectures, and the experimental setup we used to analyse the relationship between APC and CPC validation losses and their performance in a phoneme discrimination task.
|
| 50 |
+
|
| 51 |
+
§ 3.1. DATASETS AND PHONEME DISCRIMINATION TASKS
|
| 52 |
+
|
| 53 |
+
We tested the APC and CPC models on a subset of track 1 of the Zero Resource Speech Challenge 2020 datasets (Dunbar et al., 2017), which focuses on learning phoneme-sensitive features in an unsupervised manner. The subset contains 24 h of French and 2.5 h of Mandarin conversational speech for model training, and 47,096 and 21,247 one-second utterances for testing in the two languages, respectively. The training datasets are composed of a few speakers with more speech (approx. 20 min for Mandarin and 2 h for French) and several speakers with short recordings (about 10 min each). We tried to maximise speaker diversity (unique speakers) in the training set while maintaining a train/validation split ratio of 80%/20% as closely as possible.
|
| 54 |
+
|
| 55 |
+
In the context of the challenge, the task consists of learning speech representations that are convenient for phoneme discrimination, for which the challenge incorporates a minimal pair ABX-task (Schatz et al., 2013; 2014). The task measures the phonemic discriminability of the learned representations (Versteegh et al., 2015; Dunbar et al., 2017). In our experiments, the evaluation tool provided by the challenge was used to calculate the ABX scores. ABX scores are reported separately for within-speaker (minimal pair tokens always from the same talker) and across-speaker conditions (tokens from different speakers), where the latter better reflects speaker-independent phonemic categorisation.
|
| 56 |
+
|
| 57 |
+
§ 3.2. IMPLEMENTATION OF THE PC MODELS
|
| 58 |
+
|
| 59 |
+
As input features, 39 MFCC coefficients (13 static + Δ + ΔΔ) were extracted using a window length of 25 ms and a window shift of 10 ms. The data was split into 2 s samples. For each epoch, the order of the input data was randomised. All models were trained in a monolingual setup.
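Such features can be extracted as sketched below, for instance with librosa; the toolkit and chunking details are assumptions for illustration, since the paper does not state the extraction pipeline.

```python
import librosa
import numpy as np

def extract_features(wav_path):
    """39-dim MFCCs (13 static + deltas + delta-deltas), 25 ms window, 10 ms shift."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=int(0.025 * sr),
                                hop_length=int(0.010 * sr))
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    feats = np.vstack([mfcc, delta, delta2]).T   # (frames, 39)
    # Split into non-overlapping 2 s chunks (200 frames at a 10 ms shift)
    n = feats.shape[0] // 200
    return feats[:n * 200].reshape(n, 200, 39)
```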
|
| 60 |
+
|
| 61 |
+
For APC, we followed the implementation published by (Chung et al., 2019). The network consists of three fully connected layers with 128 units and ReLU activations for the 'PreNet' with 20% dropout, three GRU layers with 512 units and residual connections (Wu et al., 2016) for the predictive part, and one convolutional layer with a kernel size of one for the 'PostNet'. We used an initial learning rate of $10^{-4}$ unless otherwise specified. The prediction was carried out five frames (50 ms) ahead.
|
| 62 |
+
|
| 63 |
+
For CPC, we followed the implementation provided by (Schneider et al., 2019) for the contrastive loss calculation. We also followed the adaptation of the encoder proposed by (Chung et al., 2019) for using acoustic features as input. The architecture consists of three fully connected layers with 512 units and ReLU activations for the encoder, and one GRU layer with 256 units for the autoregressive model; both blocks were trained with a dropout of 20%. As in (Schneider et al., 2019), we used ten negative samples taken from the batch and predicted 12 frames ahead, that is, 120 ms. We used an initial learning rate of $10^{-3}$.
|
| 64 |
+
|
| 65 |
+
We trained all models using a batch size of 32 and the Adam optimiser (Kingma & Ba, 2015). For all models, we used PCA to reduce the dimension of the latent vectors used for the ABX task (maintaining 95% of the variance), as the original dimensionality was too high for the ABX-scoring tool to handle. The reported ABX-scores correspond to the extracted latent representations $\mathbf{z}_{t}$ for both models. Although the context latent representations $\mathbf{c}_{t}$ were also analysed for the CPC model, we only report latent representations as there were no notable differences between $\mathbf{c}_{t}$ and $\mathbf{z}_{t}$.
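The dimensionality reduction step can be sketched as follows with scikit-learn (the exact tooling used in the released code may differ; the pooled latent matrix is illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

latents = np.random.randn(5000, 512)   # z_t vectors pooled over utterances (illustrative)

# Keep enough principal components to explain 95% of the variance
pca = PCA(n_components=0.95)
reduced = pca.fit_transform(latents)
print(reduced.shape[1], "dimensions retained")
```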
|
| 66 |
+
|
| 67 |
+
§ 3.3. EXPERIMENTS
|
| 68 |
+
|
| 69 |
+
To assess the correlation between the validation loss and the ABX scores, the APC and CPC models were trained for 100 epochs, saving the models every ten epochs for ABX-scoring ('APC-1' and 'CPC-1'). Each model was trained three times with random initialisation to account for the influence of the initial parameters. In the case of CPC, we ran an additional experiment ('CPC-2') to investigate the behaviour of the model during the first ten epochs in more detail, saving after each of the first ten epochs and then every ten epochs, and running the experiment twice.
|
| 70 |
+
|
| 71 |
+
Table 1. Percentage of the French dataset used for training, the number of hours that the percentage represents, and the number of samples in the training (T.) and validation (V.) sets.
|
| 72 |
+
|
| 73 |
+
| PERCENTAGE | HOURS | T. SAMPLES | V. SAMPLES |
|---|---|---|---|
| 100 | 25.1 | 36,031 | 9,182 |
| 75 | 18.8 | 27,023 | 6,886 |
| 50 | 12.6 | 18,015 | 4,591 |
| 25 | 6.3 | 9,007 | 2,295 |
To calculate the correlation, Pearson's correlation coefficient (r) was adopted; however, in cases where the linear correlation was not evident in the scatter plot, we also calculated Spearman's rank correlation coefficient ($r_{s}$). Additionally, the significance of the correlation coefficients was assessed by performing a hypothesis test for $r$ and using the critical value (Zar, 1972) for $r_{s}$, in both cases with a significance level of $\alpha = 0.05$ (critical values $r_{s} = 0.678$ and $t = 1.86$). The $t$ test statistic for $r$ was calculated with the formula $t = r\sqrt{n - 2}/\sqrt{1 - r^{2}}$, where $n$ is the number of points used for calculating $r$.
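A sketch of this procedure with SciPy is shown below; the loss and ABX values are illustrative placeholders, not data from the paper.

```python
import numpy as np
from scipy import stats

# Illustrative data: validation loss and ABX error at ten saved checkpoints
val_loss = np.array([1.30, 1.21, 1.15, 1.11, 1.08, 1.06, 1.05, 1.04, 1.03, 1.03])
abx_err = np.array([21.4, 20.8, 20.1, 19.9, 19.6, 19.5, 19.4, 19.3, 19.3, 19.2])

r, p_r = stats.pearsonr(val_loss, abx_err)      # linear correlation + two-sided p-value
rs, p_rs = stats.spearmanr(val_loss, abx_err)   # rank correlation

# t statistic for r, as in the text: t = r * sqrt(n - 2) / sqrt(1 - r^2)
n = len(val_loss)
t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
print(f"r={r:.3f} (p={p_r:.3f}), r_s={rs:.3f}, t={t:.2f}")
```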
|
| 92 |
+
|
| 93 |
+
Regarding the relationship between the dataset size and the performance of the predictive model, we trained four models, varying the percentage of samples used for training from 100% down to 25% in steps of 25%. For this analysis, the French dataset was employed (see Table 1).
|
| 94 |
+
|
| 95 |
+
§ 4. RESULTS
|
| 96 |
+
|
| 97 |
+
Fig. 1(a) shows the validation loss and the ABX-scores of the APC model for the French and Mandarin datasets (APC-1). A striking correlation between the two values can be seen for both languages, although the slope for the Mandarin data is higher than for the French data. There is also more variability in the French runs ($r = 0.817 \pm 0.076$ for ABX across-speaker; $r = 0.725 \pm 0.159$ for ABX within-speaker) than in the Mandarin dataset ($r = 0.991 \pm 0.005$ for ABX across-speaker; $r = 0.978 \pm 0.009$ for ABX within-speaker). Since the French training started to overfit already after 20 epochs (with increasing validation loss), we re-ran these experiments for the French dataset using a lower learning rate ($lr = 10^{-5}$) (APC-2). As a result, the variability among the runs was reduced ($r = 0.997 \pm 0.001$ for ABX across-speaker and $r = 0.809 \pm 0.248$ for ABX within-speaker; see Supplementary Material, Fig. S1 for the scatter plot).
|
| 98 |
+
|
| 99 |
+
In the first experiment for CPC (CPC-1), there was little relative variation in either the InfoNCE loss or the ABX-scores. A closer analysis revealed that the validation loss kept decreasing with more epochs, whereas the ABX-scores oscillated with small changes (standard deviation over the three runs: SD = 0.217 for ABX across-speaker for Mandarin; SD = 0.318 for ABX within-speaker for Mandarin; SD = 0.145 for ABX across-speaker for French; SD = 0.174 for ABX within-speaker for French). This behaviour suggested that the model was converging to phoneme-like representations already within the first ten epochs. To evaluate this hypothesis, we ran a second experiment that also evaluated all the models from the first ten epochs. To our surprise, the CPC model shows rapid convergence after a single pass over the training data (see the ABX-scores for larger values of the validation loss). The oscillation pattern observed in the previous experiments persists in later epochs, and the change in overall ABX-score is nearly zero in almost all cases, except for the Mandarin ABX across-speaker condition, where a slight improvement is observed with more training. Notably, the CPC ABX performance after one epoch is already comparable to the best APC results.
|
| 100 |
+
|
| 101 |
+
|
| 102 |
+
|
| 103 |
+
Figure 1. Scatter plots of the APC-1 and CPC-2 models' ABX performance as a function of validation loss, including a detailed view of the first ten epochs for the CPC model. Symbol markers: (+) First run, (-) Second run, and (*) Third run.
|
| 104 |
+
|
| 105 |
+
Table 2 lists the correlation coefficients calculated for the averaged performance of the APC model (Mandarin APC-1 and French APC-2). Since the relationship between the InfoNCE loss and the ABX-scores is highly variable across runs for the CPC model, we calculated the correlation coefficients for each run (CPC-1 (run id)). All APC correlation coefficients were found to be significant with a significance criterion of $\alpha = 0.05$ ($t = 42.260$ for APC-2 FR ABX across-speaker; $t = 6.637$ for APC-2 FR ABX within-speaker; $t = 22.923$ for APC-1 MA ABX across-speaker; $t = 13.461$ for APC-1 MA ABX within-speaker). The CPC model, on the other hand, shows both positive and negative correlations for the same language (see, e.g., $r$ of the ABX across-speaker score for CPC-1 (1, 3) MA). This remarkable discrepancy highlights the variability between runs when the model has rapidly converged.
|
| 106 |
+
|
| 107 |
+
Table 3 shows the correlation coefficients obtained for the CPC model over the first ten epochs (CPC-2) for both languages. The relationship between the validation loss and the ABX across-speaker score shown in Fig. 1(b) is also reflected in the correlation coefficients. Both $r$ and $r_{s}$ are significant and exhibit a strong positive correlation throughout training ($r(8) = 0.972$, $p < 0.05$ for the first and $r(8) = 0.960$, $p < 0.05$ for the second run). The strong correlation for the ABX across-speaker score also reveals a property of the InfoNCE loss that is worth noting, although it was exhibited in some runs only. The selection of the negative samples can have an impact on the information that is favoured in the representations (Oord et al., 2018; Chung et al., 2019). The rationale is that, by using the same utterance to extract the negative samples, information about speaker features will not be relevant for distinguishing true from negative samples, thus encouraging phonemic information. We ran additional experiments to evaluate whether the rate of change (relative change between consecutive epochs) of the validation loss was correlated with the ABX-scores, but our results did not provide statistical evidence of such a correlation.
|
| 108 |
+
|
| 109 |
+
Table 2. Correlation coefficients between the validation loss and the ABX-scores for the French (FR) and Mandarin (MA) datasets. Pearson's (r) and Spearman's rank ($r_{s}$) correlation coefficients are reported for the ABX-scores. (*) $p < 0.05$. Analysis of APC averaged performance and individual CPC runs.
|
| 110 |
+
|
| 111 |
+
| MODEL | ACROSS-SPEAKER $r$ | ACROSS-SPEAKER $r_{s}$ | WITHIN-SPEAKER $r$ | WITHIN-SPEAKER $r_{s}$ |
|---|---|---|---|---|
| APC-2 FR | **0.998*** | 1.000* | **0.920*** | 0.879* |
| APC-1 MA | **0.992*** | 0.903* | **0.979*** | 0.867* |
| CPC-1 (1) FR | -0.202 | -0.115 | -0.703* | -0.770* |
| CPC-1 (2) FR | 0.920* | 0.867* | 0.836* | 0.588 |
| CPC-1 (3) FR | -0.511 | -0.661* | -0.228 | -0.055 |
| CPC-1 (1) MA | -0.705* | -0.552 | -0.525 | -0.648* |
| CPC-1 (2) MA | 0.282 | 0.006 | -0.759* | -0.782* |
| CPC-1 (3) MA | 0.913* | 0.782* | 0.310 | 0.430 |
As for the dataset size comparison, Table 4 shows the ABX-scores obtained after training the APC model with different dataset sizes for French. Unlike earlier, the model was trained with a learning rate of $10^{-5}$, as this was found to improve training stability in the earlier experiments. Considering the strong correlation between MAE and the ABX-scores, each model was chosen based on the lowest validation loss. The differences in the ABX-scores are relatively negligible, taking into account that the models were trained for a maximum of 100 epochs (usually reaching the lowest validation loss at the end), which implies that the models could still improve their representations with more training. That being said, with only 25% of the total data, that is, 6.3 h of the French dataset, the APC model already converged with the hyperparameters defined here. Contrary to the idea that more training data improves performance, this result shows that hyperparameter tuning would be more beneficial in this case than increasing the training data. For CPC, this analysis was problematic because we could not use the validation loss as the selection criterion, and we could not conduct the experiments in time. However, see the supplementary material for an upper bound of the true performance assuming rapid convergence.
|
| 145 |
+
|
| 146 |
+
Table 3. Correlation coefficients for the first ten epochs of the CPC model (CPC-2). (*) $p \geq 0.05$.
|
| 147 |
+
|
| 148 |
+
| MODEL | ACROSS-SPEAKER $r$ | ACROSS-SPEAKER $r_{s}$ | WITHIN-SPEAKER $r$ | WITHIN-SPEAKER $r_{s}$ |
|---|---|---|---|---|
| CPC-2 (1) FR | 0.218* | 0.219* | -0.869 | -0.851 |
| CPC-2 (2) FR | 0.795 | 0.255* | -0.323* | -0.608* |
| CPC-2 (1) MA | 0.948 | 0.988 | -0.406* | 0.285* |
| CPC-2 (2) MA | 0.957 | 0.964 | -0.587* | -0.479* |
Table 4. Performance of the APC model as a function of the dataset size.
|
| 170 |
+
|
| 171 |
+
| PERCENTAGE | ACROSS-SPEAKER | WITHIN-SPEAKER |
|---|---|---|
| 100 | 19.265 | 12.790 |
| 75 | 19.921 | 13.202 |
| 50 | 19.878 | 12.879 |
| 25 | 20.358 | 13.074 |
| $\bar{x} \pm SD$ | $19.856 \pm 0.449$ | $12.986 \pm 0.186$ |
As a final comparison, Table 5 lists the best ABX-scores obtained for the APC-1 and CPC-1 models, together with the training epoch at which the best model was obtained. We also report the CPC-2 model after only one epoch of training to demonstrate its fast learning. MFCC-based ABX-scores are reported as a baseline. Both PC models improved the ABX-scores in comparison with the baseline, except for the Mandarin ABX within-speaker score. The CPC model outperforms the APC model in both languages and both ABX conditions.
|
| 193 |
+
|
| 194 |
+
§ 5. DISCUSSION AND CONCLUSIONS
|
| 195 |
+
|
| 196 |
+
In this paper, we analysed the behaviour of PC models in the context of phoneme discrimination tasks with relatively small datasets. Our experiments confirmed that APC and CPC models are also suitable for relatively small corpora. In the original papers, the CPC and APC models were trained on 100- and 360-hour subsets of Librispeech (Panayotov et al., 2015), respectively. Our results show that these models also learn phoneme-discriminating representations from much smaller corpora, down to a mere 2.5 hours of speech.
|
| 197 |
+
|
| 198 |
+
Table 5. Best ABX-scores obtained for the APC and CPC models across the three runs of the first experiment, and ABX-scores of the CPC model after the first epoch of the second experiment. The lowest scores are in bold.
|
| 199 |
+
|
| 200 |
+
| MODEL | EPOCH | ACROSS-S | WITHIN-S |
|---|---|---|---|
| APC-1 FR | 10 | 18.698 | 11.740 |
| APC-1 MA | 100 | 12.624 | 10.197 |
| CPC-1 FR | 10 | 17.500 | **9.791** |
| CPC-1 MA | 20 | **11.837** | 9.185 |
| CPC-2 FR | 1 | **17.463** | 9.854 |
| CPC-2 MA | 1 | 13.058 | 9.202 |
| MFCC FR | - | 21.050 | 10.150 |
| MFCC MA | - | 14.584 | **9.140** |
A very high and consistent correlation ($r \approx 0.97$) between the MAE loss and the ABX scores was found for the APC model across the two datasets. However, this correlation was affected by the sampling of epochs for the ABX evaluation, since a large proportion of the scores were obtained after the model had already saturated in performance. Despite this effect, which could easily be avoided by using early stopping, the APC model behaves similarly for both datasets.
|
| 231 |
+
|
| 232 |
+
On the contrary, there was no significant correlation between the validation loss and the ABX scores for the CPC model. In fact, our results suggest that the CPC model converged rapidly to effective phoneme-sensitive representations already during the first ten epochs. After this, the model continues learning representations that improve the predictive loss, but this is not reflected in better phonemic representations; further experiments are required to understand the underpinnings of this behaviour. Interestingly, the very good CPC performance already after one pass over the training data resembles the conditions of human language acquisition, where a child never has access to the same input twice.
|
| 233 |
+
|
| 234 |
+
Finally, APC results are especially important as they could be interpreted as evidence of adaptability to different dataset sizes and robustness to different languages; the validation loss can be employed for selecting the model when extracting phonemic features for different datasets. On the other hand, although the CPC model obtained the best ABX scores in early iterations, its validation loss is less directly linked with the phonemic nature of the learned representations in the case of small datasets.
|
| 235 |
+
|
| 236 |
+
§ ACKNOWLEDGEMENTS
|
| 237 |
+
|
| 238 |
+
This study was funded by Academy of Finland grants no. 314602 and 320053.
|
| 239 |
+
|
| 240 |
+
§ S1. SUPPLEMENTARY MATERIAL
|
| 241 |
+
|
| 242 |
+
§ S1.1. CODE AND STATISTICAL DATA
|
| 243 |
+
|
| 244 |
+
Our implementation of the APC and CPC models, together with all data points and statistical metrics, can be found at https://github.com/SPEECHCOG/pc_models_analysis
|
| 245 |
+
|
| 246 |
+
§ S1.2. SCATTER PLOTS
|
| 247 |
+
|
| 248 |
+
Figure S1 shows the APC-2 experiment for the French dataset. Figure S2 illustrates the CPC-1 experiment, with three runs for each language and 100 epochs per run, and Figure S3 gives a detailed view of the ABX across-speaker scores over epochs for the three runs on the French dataset.
|
| 249 |
+
|
| 250 |
+
|
| 251 |
+
|
| 252 |
+
Figure S1. Scatter plot of the French APC model ABX performance as a function of the validation loss (APC-2). Model trained with $lr = 10^{-5}$. Symbol markers: (+) First run, (-) Second run, and (*) Third run.
|
| 253 |
+
|
| 254 |
+
§ S1.3. CPC DATASET SIZE EXPERIMENT
|
| 255 |
+
|
| 256 |
+
In the case of the CPC model, there was no significant correlation between the validation loss and the ABX-scores. As a consequence, using the validation loss as the model selection criterion was less reliable than for the APC model. To offer an upper bound on the real performance of the CPC model, we ran the dataset size experiment (see Subsection 3.3) assuming rapid convergence. For this experiment, we used the same architecture as explained in Subsection 3.2. Table S1 shows the ABX-scores obtained after training the model for ten epochs with different dataset sizes for French. As with APC, using roughly six hours of the French dataset (25%), the model obtained ABX-scores comparable to those obtained with the full dataset.
|
| 257 |
+
|
| 258 |
+
|
| 259 |
+
|
| 260 |
+
Figure S2. Scatter plot of the CPC model ABX performance as a function of the validation loss (CPC-1). Symbol markers: (+) First run, $\left( \cdot \right)$ Second run, and $\left( *\right)$ Third run.
|
| 261 |
+
|
| 262 |
+
|
| 263 |
+
|
| 264 |
+
Figure S3. ABX across-speaker scores as a function of the epoch for the three runs of the French CPC model (CPC-1). Symbol markers: (+) First run, (-) Second run, and (*) Third run.
|
| 265 |
+
|
| 266 |
+
On the other hand, unlike the APC model, the ABX across-speaker score shows a slight improvement as the dataset size increases. The InfoNCE loss benefits from more data for the comparison of negative and true samples, resulting in more speaker-independent phoneme representations. However, notice that the differences in the ABX within-speaker scores are relatively negligible. This behaviour is comparable to the results for the Mandarin CPC-2 models, where the ABX across-speaker score was improving over time, whereas the ABX within-speaker score was oscillating around the same value (see Fig. 1(b)). Further experiments are necessary to understand this behaviour.
|
| 267 |
+
|
| 268 |
+
Table S1. Performance of the CPC model as a function of the dataset size. Assuming a rapid convergence in 10 epochs.
|
| 269 |
+
|
| 270 |
+
| PERCENTAGE | ACROSS-SPEAKER | WITHIN-SPEAKER |
|---|---|---|
| 100 | 16.872 | 10.325 |
| 75 | 17.535 | 11.166 |
| 50 | 17.778 | 10.361 |
| 25 | 18.406 | 10.478 |
| $\bar{x} \pm SD$ | $17.648 \pm 0.634$ | $10.583 \pm 0.394$ |
§ S1.4. APC WITH MEAN SQUARE ERROR LOSS
|
| 292 |
+
|
| 293 |
+
To explore the behaviour of the APC model with a different unimodal loss, we ran an extra experiment utilising the Mean Square Error (MSE) loss for training the model. Similar to previous experiments, we ran the model three times for 100 epochs and evaluated the performance on the ABX task every ten epochs for the Mandarin dataset.
|
| 294 |
+
|
| 295 |
+
Figure S4 shows the APC model ABX performance as a function of the MSE loss. The behaviour is comparable to APC with the MAE loss. The Pearson's correlation coefficients are $r = 0.953 \pm 0.014$, $p < 0.05$ for the ABX across-speaker score and $r = 0.908 \pm 0.005$, $p < 0.05$ for the ABX within-speaker score. These results show a high correlation between the ABX-scores and the MSE loss. In order to compare the correlation coefficients of the two APC models (with MAE loss and with MSE loss), we performed a Z-test. We set the level of significance to $\alpha = 0.05$, giving a critical value of $\pm 1.96$, and employed Fisher's transformation for the correlation coefficients of the averaged performance (APC (MSE): $r = 0.956$ for ABX across-speaker and $r = 0.915$ for ABX within-speaker; APC (MAE): $r = 0.992$ for ABX across-speaker and $r = 0.979$ for ABX within-speaker; all coefficients with $p < 0.05$). The observed Z values are $Z_{\text{obs}} = -1.672$ for ABX across-speaker and $Z_{\text{obs}} = -1.326$ for ABX within-speaker. We did not find sufficient evidence to conclude a significant difference between the correlation coefficients of the APC (MAE) model and the APC (MSE) model.
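The comparison of two correlation coefficients via Fisher's transformation can be sketched as follows; the sample size n behind each coefficient is an illustrative assumption, not a value reported in the text.

```python
import numpy as np

def compare_correlations(r1, n1, r2, n2):
    """Two-sample Z-test for independent correlation coefficients."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)           # Fisher transformation
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))     # standard error of z1 - z2
    return (z1 - z2) / se

# e.g. APC with MSE vs. MAE loss, ABX across-speaker (n = 10 is illustrative)
z_obs = compare_correlations(0.956, 10, 0.992, 10)
# |z_obs| < 1.96 -> no significant difference at alpha = 0.05
```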
|
| 296 |
+
|
| 297 |
+
|
| 298 |
+
|
| 299 |
+
Figure S4. Scatter plot of the Mandarin APC model ABX performance as a function of the Mean Square Error loss. Model trained with $lr = 10^{-4}$. Symbol markers: (+) First run, (-) Second run, and (*) Third run.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/gp8Hkp9y0bw/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,182 @@
|
|
| 1 |
+
# Learning Speech Representations from Raw Audio by Joint Audiovisual Self-Supervision
|
| 2 |
+
|
| 3 |
+
Abhinav Shukla ${}^{1}$ Stavros Petridis ${}^{12}$ Maja Pantic ${}^{13}$
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
The intuitive interaction between the audio and visual modalities is valuable for cross-modal self-supervised learning. This concept has been demonstrated for generic audiovisual tasks like video action recognition and acoustic scene classification. However, self-supervision remains under-explored for audiovisual speech. We propose a method to learn self-supervised speech representations from the raw audio waveform. We train a raw audio encoder by combining audio-only self-supervision (by predicting informative audio attributes) with visual self-supervision (by generating talking faces from audio). The visual pretext task drives the audio representations to capture information related to lip movements. This enriches the audio encoder with visual information and the encoder can be used for evaluation without the visual modality. Our method attains competitive performance with respect to existing self-supervised audio features on established isolated word classification benchmarks, and significantly outperforms other methods at learning from fewer labels. Notably, our method also outperforms fully supervised training, thus providing a strong initialization for speech related tasks. Our results demonstrate the potential of multimodal self-supervision in audiovisual speech for learning good audio representations.
|
| 8 |
+
|
| 9 |
+
## 1. Introduction
|
| 10 |
+
|
| 11 |
+
Self-supervised learning of representations from large unlabeled datasets is a popular contemporary trend in machine learning. After being widely adopted in areas like natural language processing and computer vision, self-supervision is now rapidly developing as a noteworthy topic in audio and speech processing. Self-supervision aims to capture the most informative properties from the underlying structure of unlabeled data to learn generalized representations. This is extremely promising in problem settings involving a large amount of unlabeled data but limited labeled data. In the context of audio and speech processing, this is relevant to low resource languages, emotion recognition, cross-cultural speech recognition and other such problems with small-sized datasets. Even though there has been recent research interest in self-supervised learning for speech data, most works focus only on the audio modality alone. Audiovisual speech data offers interesting possibilities for cross-modal self-supervision, which is something relatively lesser explored. In this work, we present a method for self-supervised representation learning of audio features that leverages both the audio and visual modalities. We demonstrate how generating a talking lip video from a single frame and the corresponding audio can be used as a pretext task for visual self-supervision to train a raw audio encoder. We combine this with audio-only self-supervision based on predicting informative audio attributes, similar to (Pascual et al., 2019). This results in an audio encoder trained by joint audiovisual self-supervision. We evaluate the method on spoken word classification and achieve competitive results when comparing with existing self-supervised methods. Our method also results in significantly better performance when learning with limited data ( ${10}\%$ of training set) for the downstream tasks. Importantly, our method also outperforms fully supervised training (directly training the encoder on the downstream task). Our observations motivate the utility of self-supervised pretraining for audio related tasks. We demonstrate that cross-modal supervision in audiovisual speech can learn better representations compared to unimodal audio-only or visual-only self-supervision.
|
| 12 |
+
|
| 13 |
+
### 1.1. Related work
|
| 14 |
+
|
| 15 |
+
Self-supervised learning has been very influential in recent advances in natural language processing (BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019) etc.) and computer vision (CPC (Oord et al., 2018), MoCo (He et al., 2020), PIRL (Misra & van der Maaten, 2019) etc.). It is also beginning to mature as a relevant topic in audio and speech processing. CPC (Contrastive Predictive Coding) (Oord et al., 2018) was a seminal work in self-supervised learning which also demonstrated the applicability of contrastive self-supervised learning to audio. Wav2vec (Schneider et al., 2019) refines the idea from CPC specifically for speech. CPC based self-supervision has also been shown to generalize well to multiple languages (Rivière et al., 2020). APC (Autoregressive Predictive Coding) (Chung et al., 2019) is a similar approach that predicts the next token of a speech segment from the history. Another very relevant recent work is PASE (Problem Agnostic Speech Encoder) (Pascual et al., 2019), which aims to learn multi-task speech representations from raw audio by predicting a number of handcrafted features such as MFCCs, prosody and waveform. Teacher-student models have also been explored for audio self-supervision, where the trained model from a previous epoch acts as the teacher model for the next epoch (Kumar & Ithapu, 2020). All of the works discussed so far are unimodal audio-only self-supervised methods. There are also a few other works that utilize both audio and visual information. There are multiple ways to capture this cross-modal interaction, including audiovisual synchronization (Owens et al., 2018), cross-modal transition modeling (Pham et al., 2019), cross-modal pseudolabel based clustering (Alwassel et al., 2019), contrastive learning (Tian et al., 2019; Patrick et al., 2020), and audiovisual instance discrimination (Morgado et al., 2020). However, most of these works present cross-modal self-supervision in the context of generic audiovisual data, with application to tasks like video action recognition and acoustic scene classification. There is limited work that explores self-supervision specifically in the context of audiovisual speech. We have explored this concept in recent related work (Shukla et al., 2020c;b;a). This work extends the idea from our prior work. Specifically, we move to learning speech representations directly from raw audio instead of from mel features. We also adopt a different and more refined approach for audio-only self-supervision (described in Section 2.3).
|
| 16 |
+
|
| 17 |
+
---
|
| 18 |
+
|
| 19 |
+
Abhinav Shukla's work was supported by a PhD scholarship by Samsung Electronics, UK. ${}^{1}$ Imperial College London, UK ${}^{2}$ Samsung AI Centre, Cambridge, UK ${}^{3}$ Facebook London, UK. Correspondence to: Abhinav Shukla <a.shukla@imperial.ac.uk>.
|
| 20 |
+
|
| 21 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 22 |
+
|
| 23 |
+
---
|
| 24 |
+
|
| 25 |
+
## 2. Method
|
| 26 |
+
|
| 27 |
+
### 2.1. Audio encoder architecture
|
| 28 |
+
|
| 29 |
+
We use a 1D Resnet 18 (He et al., 2016) encoder as the backbone for all of our proposed methods (detailed architecture in appendix). The encoder ${f}_{a}$ (see Fig. 2 and 3) takes as input a ${16}\mathrm{{kHz}}$ raw audio waveform and converts it into a 512-D audio feature vector for every timestep. The output sample rate is 25 audio feature vectors per second, which matches that of 25 FPS video in the LRW dataset. This allows us to have a one-to-one mapping between the two modalities, which helps in cross-modal learning and allows us to avoid oversampling or undersampling either modality. Other contemporary self-supervised methods (Alwassel et al., 2019; Patrick et al., 2020) use a 2D Resnet18 audio encoder operating on mel features (operating similar to image based CNNs). However, we wanted our audio encoder to directly operate on the raw audio waveform and perform end-to-end self-supervised representation learning without starting from an intermediate feature like MFCCs or log mel spectrograms, which is why we chose a 1D Resnet18.
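The key constraint here is that the encoder's total temporal stride maps 16,000 audio samples per second to 25 feature vectors per second (a stride of 640 samples). Below is a minimal sketch of such a strided 1D front end; it is not the 1D ResNet-18 used in the paper, and all kernel sizes and channel counts are illustrative.

```python
import torch
import torch.nn as nn

# Total stride 4 * 4 * 5 * 8 = 640 samples -> 25 feature frames per second at 16 kHz
encoder = nn.Sequential(
    nn.Conv1d(1, 64, kernel_size=80, stride=4, padding=38), nn.ReLU(),
    nn.Conv1d(64, 128, kernel_size=11, stride=4, padding=5), nn.ReLU(),
    nn.Conv1d(128, 256, kernel_size=11, stride=5, padding=5), nn.ReLU(),
    nn.Conv1d(256, 512, kernel_size=11, stride=8, padding=5), nn.ReLU(),
)

wav = torch.randn(2, 1, 16000)   # 1 s of 16 kHz raw audio
feats = encoder(wav)             # (2, 512, 25): one 512-D vector per 40 ms step
```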
|
| 30 |
+
|
| 31 |
+
### 2.2. Visual Self-Supervision
|
| 32 |
+
|
| 33 |
+
For visual self-supervision, we generate a talking lip video from a still image and the corresponding audio (see Fig. 1 and Fig. 2). The model is comprised of three components: (i) the audio encoder ${f}_{a}$ (1D Resnet18),(ii) the identity encoder ${f}_{id}$ , and (iii) the frame decoder ${f}_{d}$ . The model operates on 1 second long segments from an audiovisual speech dataset. The audio encoder ${f}_{a}$ (Fig. 2 bottom-left) converts the 1 second audio sample $x$ into a 512 dimensional embedding with 25 timesteps $\left( {z}_{\text{aud }}\right)$ . The identity encoder ${f}_{id}$ (Fig. 2 top-left) is a 6 layer CNN that converts the mouth region of the first video frame ${x}_{im}$ (a ${64} \times {64}$ image) into a 64 dimensional identity embedding $\left( {z}_{id}\right)$ . This embedding is replicated 25 times to match the timesteps of the audio embedding. The latent representation $z$ is the concatenation of ${z}_{\text{aud }}$ and ${z}_{id}$ (as shown in Fig. 2). This then goes through the frame decoder ${f}_{d}$ (see Fig. 2 top-right), which is a CNN that uses strided transposed convolutions to generate the video frames of the lip movements. The skip connections between the identity encoder and frame decoder help in preserving subject identity in the generated frames. An L1 reconstruction loss between frames from the generated video $\left( {{f}_{d}\left( z\right) }\right)$ and those from the real video $\left( {y}_{\text{video }}\right)$ is used to train the network. We use the L1 loss as opposed to the L2 loss to get relatively sharper reconstructions. Our model aims to predict lip movements given only audio and speaker identity information from the first frame. In this process, the audio encoder is driven to produce useful speech features that correlate with lip movements (because accurate lip movement reconstruction will reduce the loss). The audio features obtained by reconstructing lip movements are likely to contain information about the speech content. Our proposed method is related to our prior work on visual self-supervision to learn audio features (Shukla et al., 2020c;b;a). In this work, the key difference is that we use a raw audio encoder for end-to-end learning as opposed to the log mel spectrogram encoder we used in (Shukla et al., 2020b;a). Also, instead of reconstructing the full face, we focus on the mouth region which contains visual information about the speech content, which we hypothesized would lead to better representations for speech recognition.
|
| 34 |
+
|
| 35 |
+
$$
|
| 36 |
+
z\left( {x,{x}_{im}}\right) = \operatorname{cat}\left( {{f}_{a}\left( x\right) ,{f}_{id}\left( {x}_{im}\right) }\right) \tag{1}
|
| 37 |
+
$$
|
| 38 |
+
|
| 39 |
+
$$
|
| 40 |
+
{L}_{\text{video }}\left( {x,{x}_{im}}\right) = \left| {{f}_{d}\left( {z\left( {x,{x}_{im}}\right) }\right) - {y}_{\text{video }}}\right| \tag{2}
|
| 41 |
+
$$
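A minimal sketch of how Eqs. (1)-(2) tie the encoders and the frame decoder together during training is given below; the encoder and decoder modules are treated as placeholders, and only the wiring and the L1 loss follow the description above.

```python
import torch

def video_loss(f_a, f_id, f_d, wav, first_frame, target_video):
    """L1 reconstruction loss between generated and real lip frames (sketch).

    wav:          (B, 1, 16000)       1 s of raw audio
    first_frame:  (B, 3, 64, 64)      mouth region of the first video frame
    target_video: (B, 25, 3, 64, 64)  ground-truth frames of the 1 s clip
    """
    z_aud = f_a(wav)                                        # (B, 25, 512) audio embedding
    z_id = f_id(first_frame)                                # (B, 64) identity embedding
    z_id = z_id.unsqueeze(1).expand(-1, z_aud.size(1), -1)  # replicate over 25 timesteps
    z = torch.cat([z_aud, z_id], dim=-1)                    # (B, 25, 576) joint latent
    generated = f_d(z)                                      # (B, 25, 3, 64, 64) predicted frames
    return torch.mean(torch.abs(generated - target_video))  # L1 reconstruction loss
```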
|
| 42 |
+
|
| 43 |
+
Training : Audio-visual Self-Supervision
|
| 44 |
+
|
| 45 |
+

|
| 46 |
+
|
| 47 |
+
Figure 1. An illustration of the encoder-decoder model we use for joint audiovisual self-supervision. From an unlabeled sample of audiovisual speech, we use the raw audio waveform and the first video frame to generate a talking lip video. Lip movement reconstruction offers visual self-supervision. We also use decoders to reconstruct salient audio attributes (MFCCs, log mel, waveform) for audio-only self-supervision. By jointly optimizing the reconstruction losses for both modalities, we get joint audiovisual self-supervision. The trained audio encoder can then be used for audio-only downstream tasks.
|
| 48 |
+
|
| 49 |
+
Table 1. Results for spoken word classification (Accuracy in %) on the Speech Commands (SPC, 30 classes) (Warden, 2018) and the Lip Reading in the Wild (LRW, 500 classes) (Chung & Zisserman, 2016) datasets. For evaluation, a 2 layer GRU model is used on the encoder outputs for each pretraining method, before finetuning on the downstream task.
|
| 50 |
+
|
| 51 |
+
<table><tr><td rowspan="2">Pretraining method</td><td rowspan="2">Self-supervision</td><td rowspan="2">Input type</td><td colspan="4">Dataset and %of Labels used</td></tr><tr><td>SPC 100%</td><td>SPC 10%</td><td>LRW 100%</td><td>LRW 10%</td></tr><tr><td>MFCC</td><td>-</td><td>-</td><td>94.33</td><td>87.08</td><td>90.16</td><td>37.56</td></tr><tr><td>PASE (Pascual et al., 2019)</td><td>Audio</td><td>Raw audio</td><td>95.61</td><td>83.81</td><td>93.40</td><td>1.88</td></tr><tr><td>APC (Chung et al., 2019)</td><td>Audio</td><td>Mel features</td><td>94.87</td><td>89.91</td><td>93.97</td><td>57.41</td></tr><tr><td>wav2vec (Schneider et al., 2019)</td><td>Audio</td><td>Raw audio</td><td>96.04</td><td>91.57</td><td>94.60</td><td>19.50</td></tr><tr><td>L1 (Shukla et al., 2020b)</td><td>Visual</td><td>Mel features</td><td>95.11</td><td>86.43</td><td>94.45</td><td>33.43</td></tr><tr><td>L1 + Odd (Shukla et al., 2020b)</td><td>Audiovisual</td><td>Mel features</td><td>95.77</td><td>90.16</td><td>94.72</td><td>67.98</td></tr><tr><td>Ours (A)</td><td>Audio</td><td>Raw audio</td><td>95.06</td><td>90.56</td><td>94.14</td><td>69.70</td></tr><tr><td>Ours (V)</td><td>Visual</td><td>Raw audio</td><td>94.38</td><td>88.31</td><td>92.18</td><td>52.99</td></tr><tr><td>Ours (AV)</td><td>Audiovisual</td><td>Raw audio</td><td>95.21</td><td>90.63</td><td>95.37</td><td>77.13</td></tr><tr><td>Supervised 1D Resnet18</td><td>-</td><td>Raw audio</td><td>93.79</td><td>81.12</td><td>90.34</td><td>13.72</td></tr></table>
|
| 52 |
+
|
| 53 |
+
### 2.3. Audio Self-Supervision
|
| 54 |
+
|
| 55 |
+
In prior work (Shukla et al., 2020b), we employed a temporal order based pretext task for audio-only self-supervision (predicting which of the inputs are jumbled or reversed). We wanted to examine whether it is possible to yield better speech representations using a more refined pretext task. In this work, our methodology for audio-only self-supervision is inspired by PASE (Pascual et al., 2019). We predict three informative audio attributes: (i) MFCCs, (ii) log mel spectrograms, and (iii) the waveform. The key difference of our method from PASE is the fact that we directly train a 1D Resnet18 encoder model on the raw audio waveform. PASE requires intermediate steps like adding speech distortions for data augmentation, SincNet filters, and a penultimate Quasi-RNN layer. We also adopt only 3 of the most informative predicted attributes from PASE for simplicity. Fig. 3 illustrates our method for audio-only self-supervision. The audio encoder $\left( {f}_{a}\right)$ converts 1 second of 16 kHz input audio ($x$) into a 512 dimensional audio embedding $\left( {z}_{\text{aud }}\right)$ with 25 timesteps (exactly the same as in the method for visual self-supervision). The audio representation is then used as input to three separate decoders $\left( {{f}_{mfcc},{f}_{\text{logmel }}\& {f}_{\text{wav }}}\right)$ that reconstruct the desired audio attributes. We keep the decoder architectures as simple as possible in order to incentivize the important information about the audio attributes to be captured by the audio encoder. The MFCC and the log mel spectrogram decoders (Fig. 3 right) are both comprised of a single fully connected layer of 256 units. The waveform decoder (Fig. 3 top-left) is made of a transposed convolution layer followed by a convolution layer that outputs the reconstructed waveform (in an autoencoder-like fashion). We use an L1 loss between each reconstructed attribute and its ground truth $\left( {y}_{\text{attrib }}\right)$ to train the model. The total loss is the sum of the MFCC loss, the log mel loss, and the waveform loss. For attrib $\in \{ {mfcc},{logmel},{wav}\}$, the loss is:
|
| 56 |
+
|
| 57 |
+
$$
|
| 58 |
+
{L}_{\text{audio }}\left( x\right) = \mathop{\sum }\limits_{\text{attrib }}\left| {{f}_{\text{attrib }}\left( {{f}_{a}\left( x\right) }\right) - {y}_{\text{attrib }}}\right| \tag{3}
|
| 59 |
+
$$
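A sketch of the multi-decoder loss in Eq. (3) is shown below, with each decoder reconstructing one attribute from the shared audio embedding. The decoder construction is simplified and illustrative (the waveform decoder is omitted for brevity), and feature dimensions are assumptions.

```python
import torch
import torch.nn as nn

class AttributeDecoders(nn.Module):
    """Lightweight per-timestep decoders for MFCC and log mel targets (sketch)."""
    def __init__(self, emb_dim=512, n_mfcc=13, n_mels=80):
        super().__init__()
        self.mfcc = nn.Linear(emb_dim, n_mfcc)
        self.logmel = nn.Linear(emb_dim, n_mels)

    def forward(self, z_aud):
        return {"mfcc": self.mfcc(z_aud), "logmel": self.logmel(z_aud)}

def audio_loss(decoders, z_aud, targets):
    """Sum of L1 reconstruction losses over the predicted attributes, as in Eq. (3)."""
    preds = decoders(z_aud)
    return sum(torch.mean(torch.abs(preds[k] - targets[k])) for k in preds)
```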
|
| 60 |
+
|
| 61 |
+
### 2.4. Audiovisual Self-Supervision
|
| 62 |
+
|
| 63 |
+
For joint audiovisual self-supervision (see Fig. 1), we simply combine the two proposed methods for visual-only and audio-only self-supervision. Since the same audio encoder architecture has been used in both models, we can simply use the shared audio representation as input to each of the four decoders (frame decoder, MFCC decoder, log mel decoder, waveform decoder). The total loss is the sum of the audio-only and the visual-only losses. The audio encoder $\left( {f}_{a}\right)$ is thus trained end-to-end and is driven to produce features that contain information about each of the predicted attributes from both the audio and the visual modalities.
|
| 64 |
+
|
| 65 |
+
$$
|
| 66 |
+
{L}_{\text{total }}\left( {x,{x}_{im}}\right) = {L}_{\text{video }}\left( {x,{x}_{im}}\right) + {L}_{\text{audio }}\left( x\right) \tag{4}
|
| 67 |
+
$$
|
| 68 |
+
|
| 69 |
+
## 3. Experiments
|
| 70 |
+
|
| 71 |
+
Datasets The LRW dataset (Chung & Zisserman, 2016) is a large, in-the-wild dataset of 500 different isolated words primarily from BBC recordings. It is an audiovisual speech dataset and is thus appropriate for training our methods. We use a subset of LRW that has only nearly frontal videos (with yaw, pitch and roll restricted to a maximum of 10 degrees), in order to have a cleaner supervisory signal from the visual modality. This filtering leaves us with a total of around 40 hours of usable data. We use this subset of the LRW dataset for self-supervised pretraining of our proposed methods. We also use it as a spoken word classification evaluation dataset. The SPC (Speech Commands v0.01) dataset (Warden, 2018) contains 64,727 total utterances of 30 different words by 1,881 speakers. We use SPC also as a spoken word classification evaluation dataset.
|
| 72 |
+
|
| 73 |
+
Baselines We compare our methods against other self-supervised methods for learning speech representations. For all the baselines, we use the code (and pretrained models) provided by the authors. We compare against PASE (Pascual et al., 2019), APC (Chung et al., 2019) and wav2vec (Schneider et al., 2019). We also compare against our prior related work. L1 (Shukla et al., 2020b) is similar to our proposed method for visual-only self-supervision but is based on log mel spectrograms as opposed to raw audio. L1 + Odd (Shukla et al., 2020b) is an audio-visual self-supervised method. We use a more refined audio self-supervision approach in this work. We also compare our methods against two supervised learning baselines for audio. We use 39 dimensional MFCCs (13 coefficients, 13 deltas, and 13 delta-deltas) as the first supervised baseline. The second baseline is a fully supervised 1D Resnet 18 model (same architecture as our pretrained encoders but trained from scratch directly on the evaluation datasets).
|
| 74 |
+
|
| 75 |
+
Experimental setup We evaluate all methods on isolated word classification on the Speech Commands (SPC) (Warden, 2018) and Lip Reading in the Wild (LRW) (Chung & Zisserman, 2016) datasets. We use a 2 layer BiGRU (with 256 units in each layer) on the encoder outputs, followed by a linear layer with as many units as the number of target classes (30 for SPC, 500 for LRW). This acts as the downstream classifier and remains the same for every method. For downstream classification, we finetune the models (as shown at the bottom of Fig. 1) for 50 epochs. The learning rate is 0.0001 for the first 40 epochs and 0.00001 for the last 10 epochs. We use the standard softmax + cross entropy loss for training. We opted to use a BiGRU for simplicity; however, this can be replaced by any model that can classify variable length sequences into discrete categories (such as LSTMs, TCNs, LiGRUs (Ravanelli et al., 2018)). The results can be seen in Table 1.
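The downstream head described above can be sketched as follows: a 2-layer bidirectional GRU over the encoder outputs followed by a linear classifier. Taking the final forward/backward hidden states as the utterance summary is an assumption made for this sketch.

```python
import torch
import torch.nn as nn

class WordClassifier(nn.Module):
    """2-layer BiGRU over pretrained encoder features + linear output layer."""
    def __init__(self, feat_dim=512, hidden=256, n_classes=500):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):
        _, h = self.gru(feats)                    # h: (num_layers * 2, B, hidden)
        last = torch.cat([h[-2], h[-1]], dim=-1)  # final fwd/bwd states of the top layer
        return self.out(last)                     # logits for softmax + cross entropy

clf = WordClassifier(n_classes=30)                # e.g. Speech Commands (30 classes)
logits = clf(torch.randn(4, 25, 512))             # 1 s of encoder features per sample
```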
|
| 76 |
+
|
| 77 |
+
Table 2. Results for spoken word classification (Accuracy in %) under various levels of introduced noise (SNR in dB). Babble noise from the NOISEX database is used to perturb the audio samples in the LRW and SPC datasets.
|
| 78 |
+
|
| 79 |
+
<table><tr><td rowspan="2">Dataset</td><td rowspan="2">Model</td><td colspan="7">Noise level (SNR)</td></tr><tr><td>-5 dB</td><td>0 dB</td><td>5dB</td><td>10 dB</td><td>15 dB</td><td>20 dB</td><td>Clean</td></tr><tr><td rowspan="4">SPC</td><td>MFCC</td><td>76.31</td><td>84.97</td><td>90.56</td><td>91.98</td><td>93.05</td><td>94.19</td><td>94.33</td></tr><tr><td>Ours (A)</td><td>79.35</td><td>88.42</td><td>92.34</td><td>93.41</td><td>94.63</td><td>95.04</td><td>95.06</td></tr><tr><td>Ours (V)</td><td>77.92</td><td>86.92</td><td>91.01</td><td>92.80</td><td>93.47</td><td>93.88</td><td>94.38</td></tr><tr><td>Ours (AV)</td><td>79.79</td><td>88.69</td><td>92.21</td><td>93.57</td><td>94.65</td><td>95.02</td><td>95.21</td></tr><tr><td rowspan="4">LRW</td><td>MFCC</td><td>50.18</td><td>70.75</td><td>81.08</td><td>85.74</td><td>88.41</td><td>90.11</td><td>90.16</td></tr><tr><td>Ours (A)</td><td>58.84</td><td>79.13</td><td>89.14</td><td>91.72</td><td>92.87</td><td>93.84</td><td>94.14</td></tr><tr><td>Ours (V)</td><td>51.40</td><td>73.47</td><td>84.61</td><td>88.11</td><td>90.98</td><td>91.58</td><td>92.18</td></tr><tr><td>Ours (AV)</td><td>64.63</td><td>82.59</td><td>90.08</td><td>92.09</td><td>92.91</td><td>93.87</td><td>95.37</td></tr></table>
|
| 80 |
+
|
| 81 |
+
Results with all labels With 100% of the training dataset used, all self-supervised methods achieve comparable performance and outperform fully supervised training. On the SPC dataset, the best overall performance is attained by wav2vec with an accuracy of ${96.04}\%$ , followed by our prior work at 95.77%, PASE at 95.61% and our proposed method at 95.21%. On LRW, the best performance is by our method with an accuracy of 95.37%.
|
| 82 |
+
|
| 83 |
+
Learning with fewer labels The concept of self-supervision is especially relevant to situations where labeled data is scarce. To compare the methods in such situations, we perform the same word classification experiments on the SPC and LRW datasets but with only ${10}\%$ of the samples being used in the training set (the validation and test sets remain unchanged). Note that we completely omit the remaining ${90}\%$ of the training set (see Tables 6,7,8 for exact split details). This leaves us with around 170 training examples per class for the SPC dataset (30 classes) and only around 20 training examples per class for the LRW dataset (500 classes). This makes the problem significantly more challenging. On SPC, there is a slight degradation in the performance of all methods. Our method attains an accuracy of ${90.63}\%$ which is second to only wav2vec at an accuracy of 91.57%. On LRW, all other methods get severely affected and overfit to the small training set. Our method is the least affected and significantly outperforms all other methods with a best performance of 77.13%.
|
| 84 |
+
|
| 85 |
+
Noisy situations We also compare the performance of the variations of our method under various levels of artificially induced noise. We introduce babble noise from the NOISEX (Varga & Steeneken, 1993) database to create noisy versions of the SPC and LRW datasets. We use six levels of noise, in the range of -5 dB SNR to 20 dB SNR in increments of 5 dB. The results for the noisy datasets can be seen in Table 2. All our methods outperform MFCCs at all noise levels on both datasets. The joint audiovisual method is the best overall.
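Additive noise at a fixed SNR can be created as sketched below; this is a generic recipe rather than the exact perturbation procedure used for the NOISEX experiments.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Add a noise segment to a speech signal at the requested SNR (in dB)."""
    noise = noise[:len(speech)]                 # assume the noise recording is long enough
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # Scale the noise so that 10*log10(p_speech / p_scaled_noise) == snr_db
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise
```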
|
| 86 |
+
|
| 87 |
+
## 4. Discussion
|
| 88 |
+
|
| 89 |
+
There are multiple interesting observations from our obtained results. Audio-only supervision yields better results than visual-only supervision. However, the model trained with joint audiovisual self-supervision performs better than the models trained with unimodal audio-only and visual-only self-supervision in almost all scenarios, including the noisy datasets. This highlights the utility of the complementary information encoded by visual self-supervision and demonstrates the potential of multimodal self-supervision as a useful tool in speech representation learning. Also notably, despite all tested methods being very similar in performance on the full datasets, there is a clear gap when using a small training set, and our method is the best at learning with fewer labels, which is very relevant to low resource domains. This can have significant impact in problems like low resource language ASR, emotion recognition and cross-cultural ASR. Our method also significantly outperforms fully supervised training from scratch, which further motivates the utility of self-supervised pretraining for speech.
|
| 90 |
+
|
| 91 |
+
Future work This is a work in progress and there are many other speech related applications that we can evaluate our model on. In this work, we only focused on the classification of isolated words. We will also test the model on continuous CTC based speech recognition on datasets like Librispeech and TIMIT, and other tasks like speaker identification and speech emotion recognition. An especially relevant application would be low resource language ASR. There are also interesting directions to explore to improve our method. In this work, we exhibit how joint audiovisual information can be used for audio representation learning. In a similar manner, we could also utilize this cross-modal information for visual representation learning (e.g. predicting speech attributes from the visual modality). Another interesting line of work is multimodal contrastive self-supervised learning which has been demonstrated for generic audiovisual data but not for audiovisual speech.
|
| 92 |
+
|
| 93 |
+
## References
|
| 94 |
+
|
| 95 |
+
Alwassel, H., Mahajan, D., Torresani, L., Ghanem, B., and Tran, D. Self-supervised learning by cross-modal audio-video clustering. arXiv preprint arXiv:1911.12667, 2019.
|
| 96 |
+
|
| 97 |
+
Chung, J. and Zisserman, A. Lip reading in the wild. In ACCV, 2016.
|
| 98 |
+
|
| 99 |
+
Chung, Y., Hsu, W., Tang, H., and Glass, J. An unsupervised autoregressive model for speech representation learning. arXiv:1904.03240, 2019.
|
| 100 |
+
|
| 101 |
+
Devlin, J., Chang, M., Lee, K., and Toutanova, K. Bert: Pretraining of deep bidirectional transformers for language understanding. arXiv:1810.04805, 2018.
|
| 102 |
+
|
| 103 |
+
He, K., Zhang, X., Ren, S., and Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
|
| 104 |
+
|
| 105 |
+
He, K., Fan, H., Wu, Y., Xie, S., and Girshick, R. Momentum contrast for unsupervised visual representation learning. CVPR, 2020.
|
| 106 |
+
|
| 107 |
+
Kumar, A. and Ithapu, V. K. Secost: Sequential co-supervision for weakly labeled audio event detection. Proceedings of the International Conference on Acoustics Speech and Signal Processing (ICASSP), 2020.
|
| 108 |
+
|
| 109 |
+
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
|
| 110 |
+
|
| 111 |
+
Misra, I. and van der Maaten, L. Self-supervised learning of pretext-invariant representations. arXiv preprint arXiv:1912.01991, 2019.
|
| 112 |
+
|
| 113 |
+
Morgado, P., Vasconcelos, N., and Misra, I. Audio-visual instance discrimination with cross-modal agreement. arXiv preprint arXiv:2004.12943, 2020.
|
| 114 |
+
|
| 115 |
+
Oord, A., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
|
| 116 |
+
|
| 117 |
+
Owens, A., Wu, J., McDermott, J., Freeman, W., and Torralba, A. Learning sight from sound: Ambient sound provides supervision for visual learning. IJCV, 126(10):1120-1137, 2018.
|
| 118 |
+
|
| 119 |
+
Pascual, S., Ravanelli, M., Serrà, J., Bonafonte, A., and Bengio, Y. Learning problem-agnostic speech representations from multiple self-supervised tasks. Interspeech, 2019.
|
| 120 |
+
|
| 121 |
+
Patrick, M., Asano, Y. M., Fong, R., Henriques, J. F., Zweig, G., and Vedaldi, A. Multi-modal self-supervision from generalized data transformations. arXiv preprint arXiv:2003.04298, 2020.
|
| 122 |
+
|
| 123 |
+
Pham, H., Liang, P., Manzini, T., Morency, L., and Póczos, B. Found in translation: Learning robust joint representations by cyclic translations between modalities. In AAAI, volume 33, pp. 6892-6899, 2019.
|
| 124 |
+
|
| 125 |
+
Ravanelli, M., Brakel, P., Omologo, M., and Bengio, Y. Light gated recurrent units for speech recognition. IEEE Transactions on Emerging Topics in Computational Intelligence, 2(2):92-102, 2018.
|
| 126 |
+
|
| 127 |
+
Rivière, M., Joulin, A., Mazaré, P.-E., and Dupoux, E. Unsupervised pretraining transfers well across languages. arXiv preprint arXiv:2002.02848, 2020.
|
| 128 |
+
|
| 129 |
+
Schneider, S., Baevski, A., Collobert, R., and Auli, M. wav2vec: Unsupervised pre-training for speech recognition. arXiv:1904.05862, 2019.
|
| 130 |
+
|
| 131 |
+
Shukla, A., Petridis, S., and Pantic, M. Visual self-supervision by facial reconstruction for speech representation learning. Sight and Sound Workshop, CVPR, 2020a.
|
| 132 |
+
|
| 133 |
+
Shukla, A., Petridis, S., and Pantic, M. Does visual self-supervision improve the learning of speech representations? arXiv preprint arXiv:2005.01400, 2020b.
|
| 134 |
+
|
| 135 |
+
Shukla, A., Vougioukas, K., Ma, P., Petridis, S., and Pantic, M. Visually guided self supervised learning of speech representations. Proceedings of the International Conference on Acoustics Speech and Signal Processing (ICASSP), 2020c.
|
| 136 |
+
|
| 137 |
+
Tian, Y., Krishnan, D., and Isola, P. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.
|
| 138 |
+
|
| 139 |
+
Varga, A. and Steeneken, H. J. Assessment for automatic speech recognition: Ii. noisex-92: A database and an experiment to study the effect of additive noise on speech recognition systems. Speech communication, 12(3):247- 251, 1993.
|
| 140 |
+
|
| 141 |
+
Warden, P. Speech commands: A dataset for limited-vocabulary speech recognition. arXiv preprint arXiv:1804.03209, 2018.
|
| 142 |
+
|
| 143 |
+
## Appendix

## A. Audio encoders

Table 3. Encoder type and number of trainable parameters in each of the compared methods.

<table><tr><td>Method</td><td>Encoder type</td><td>Parameters</td></tr><tr><td>PASE</td><td>SincNet + CNN + FC</td><td>5,818,020</td></tr><tr><td>APC</td><td>Log mel + GRU</td><td>4,105,296</td></tr><tr><td>wav2vec</td><td>CNN</td><td>32,537,088</td></tr><tr><td>L1 + Odd</td><td>Log mel + GRU</td><td>4,065,282</td></tr><tr><td>Ours</td><td>1D Resnet18</td><td>3,848,576</td></tr></table>

Table 4. Feature dimensionality and sample rate of each of the compared methods.

<table><tr><td>Method</td><td>Dim.</td><td>Hz</td></tr><tr><td>MFCC</td><td>39</td><td>101</td></tr><tr><td>PASE</td><td>100</td><td>100</td></tr><tr><td>APC</td><td>512</td><td>101</td></tr><tr><td>wav2vec</td><td>512</td><td>98</td></tr><tr><td>L1</td><td>512</td><td>101</td></tr><tr><td>L1 + Odd</td><td>512</td><td>101</td></tr><tr><td>Ours</td><td>512</td><td>25</td></tr></table>

Table 5. Pretraining dataset and duration for each method.

<table><tr><td>Method</td><td>Pretraining Dataset</td><td>Duration</td></tr><tr><td>PASE</td><td>Librispeech subset</td><td>10 hours</td></tr><tr><td>APC</td><td>Librispeech train-clean-360</td><td>360 hours</td></tr><tr><td>wav2vec</td><td>Full Librispeech + WSJ</td><td>1000 hours</td></tr><tr><td>L1</td><td>LRW frontal subset</td><td>36 hours</td></tr><tr><td>L1 + Odd</td><td>LRW frontal subset</td><td>36 hours</td></tr><tr><td>Ours</td><td>LRW frontal subset</td><td>36 hours</td></tr></table>

Pretraining datasets for baselines The results in Table 1 for all the baseline methods (PASE, APC, wav2vec) were computed using the public code and pretrained models provided by the authors. These baseline methods (and our method) have been pretrained on varying amounts and types of data. For a completely fair comparison, all methods would need to be pretrained with the same data. We experimented with pretraining all baseline methods on the same 36-hour LRW frontal subset that we use for our method. The results obtained with the baselines under this setting were either equivalent to or worse than those obtained with the public pretrained models. This suggests that our model may be able to learn better representations from the same amount of pretraining data. For the reported results, however, we use the public pretrained models, which aids reproducibility.

## B. Dataset and split details

Table 6. The number of data samples in each split of each dataset.

<table><tr><td rowspan="2">Dataset - % labels</td><td colspan="3">Split size (samples)</td></tr><tr><td>Train</td><td>Val</td><td>Test</td></tr><tr><td>SPC-100%</td><td>51088</td><td>6798</td><td>6835</td></tr><tr><td>SPC-10%</td><td>5097</td><td>6798</td><td>6835</td></tr><tr><td>LRW-100%</td><td>112812</td><td>5878</td><td>5987</td></tr><tr><td>LRW-10%</td><td>11054</td><td>5878</td><td>5987</td></tr></table>

Table 7. The duration (in hours) of each split of each dataset.

<table><tr><td rowspan="2">Dataset - % labels</td><td colspan="3">Split duration (hours)</td></tr><tr><td>Train</td><td>Val</td><td>Test</td></tr><tr><td>SPC-100%</td><td>14.19</td><td>1.89</td><td>1.90</td></tr><tr><td>SPC-10%</td><td>1.41</td><td>1.89</td><td>1.90</td></tr><tr><td>LRW-100%</td><td>36.35</td><td>1.89</td><td>1.92</td></tr><tr><td>LRW-10%</td><td>3.56</td><td>1.89</td><td>1.92</td></tr></table>

Table 8. The average number of samples per class (rounded to the nearest integer) and duration per class (in minutes) in the training set.

<table><tr><td>Dataset</td><td>Classes</td><td>n/class</td><td>t/class (min)</td></tr><tr><td>SPC-100%</td><td>30</td><td>1703</td><td>28.38</td></tr><tr><td>SPC-10%</td><td>30</td><td>170</td><td>2.82</td></tr><tr><td>LRW-100%</td><td>500</td><td>225</td><td>4.36</td></tr><tr><td>LRW-10%</td><td>500</td><td>22</td><td>0.42</td></tr></table>

Figure 2. A detailed illustration of the encoder-decoder model we use for lip video reconstruction. From an unlabeled sample of audiovisual speech, we use the audio and the first frame of the video $(t = 0)$ to generate a video with $t$ frames. The model contains: (1) an identity encoder, which produces a 64-D identity embedding; (2) an audio encoder, which converts the input audio ($t$ frames of 80-dimensional log mel spectrograms) into a 512-D audio embedding; (3) a frame decoder, which generates video from the concatenated latent representation using transposed convolutions.

Figure 3. A detailed illustration of the encoder-decoder model we use for audio-only self-supervised representation learning. From an input waveform of 1 second, we predict three informative attributes: MFCC, log mel spectrogram and the waveform. The decoders are kept as simple as possible to incentivize the audio representations to capture the necessary information about the attributes.
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/gp8Hkp9y0bw/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ LEARNING SPEECH REPRESENTATIONS FROM RAW AUDIO BY JOINT AUDIOVISUAL SELF-SUPERVISION
|
| 2 |
+
|
| 3 |
+
Abhinav Shukla ${}^{1}$ Stavros Petridis ${}^{12}$ Maja Pantic ${}^{13}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
The intuitive interaction between the audio and visual modalities is valuable for cross-modal self-supervised learning. This concept has been demonstrated for generic audiovisual tasks like video action recognition and acoustic scene classification. However, self-supervision remains under-explored for audiovisual speech. We propose a method to learn self-supervised speech representations from the raw audio waveform. We train a raw audio encoder by combining audio-only self-supervision (by predicting informative audio attributes) with visual self-supervision (by generating talking faces from audio). The visual pretext task drives the audio representations to capture information related to lip movements. This enriches the audio encoder with visual information and the encoder can be used for evaluation without the visual modality. Our method attains competitive performance with respect to existing self-supervised audio features on established isolated word classification benchmarks, and significantly outperforms other methods at learning from fewer labels. Notably, our method also outperforms fully supervised training, thus providing a strong initialization for speech related tasks. Our results demonstrate the potential of multimodal self-supervision in audiovisual speech for learning good audio representations.
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Self-supervised learning of representations from large unlabeled datasets is a popular contemporary trend in machine learning. After being widely adopted in areas like natural language processing and computer vision, self-supervision is now rapidly developing as a noteworthy topic in audio and speech processing. Self-supervision aims to capture the most informative properties from the underlying structure of unlabeled data to learn generalized representations. This is extremely promising in problem settings involving a large amount of unlabeled data but limited labeled data. In the context of audio and speech processing, this is relevant to low resource languages, emotion recognition, cross-cultural speech recognition and other such problems with small-sized datasets. Even though there has been recent research interest in self-supervised learning for speech data, most works focus only on the audio modality alone. Audiovisual speech data offers interesting possibilities for cross-modal self-supervision, which is something relatively lesser explored. In this work, we present a method for self-supervised representation learning of audio features that leverages both the audio and visual modalities. We demonstrate how generating a talking lip video from a single frame and the corresponding audio can be used as a pretext task for visual self-supervision to train a raw audio encoder. We combine this with audio-only self-supervision based on predicting informative audio attributes, similar to (Pascual et al., 2019). This results in an audio encoder trained by joint audiovisual self-supervision. We evaluate the method on spoken word classification and achieve competitive results when comparing with existing self-supervised methods. Our method also results in significantly better performance when learning with limited data ( ${10}\%$ of training set) for the downstream tasks. Importantly, our method also outperforms fully supervised training (directly training the encoder on the downstream task). Our observations motivate the utility of self-supervised pretraining for audio related tasks. We demonstrate that cross-modal supervision in audiovisual speech can learn better representations compared to unimodal audio-only or visual-only self-supervision.
|
| 12 |
+
|
| 13 |
+
§ 1.1. RELATED WORK
|
| 14 |
+
|
| 15 |
+
Self-supervised learning has been very influential in recent advances in natural language processing (BERT (Devlin et al., 2018), RoBERTa (Liu et al., 2019) etc.) and computer vision (CPC (Oord et al., 2018), MoCo (He et al., 2020), PIRL (Misra & van der Maaten, 2019) etc.). It is also beginning to mature as a relevant topic in audio and speech processing. CPC (Contrastive Predictive Coding) (Oord et al., 2018) was a seminal work in self-supervised learning which also demonstrated the applicability of contrastive self-supervised learning to audio. Wav2vec (Schneider et al., 2019) refines the idea from CPC specifically for speech. CPC-based self-supervision has also been shown to generalize well to multiple languages (Rivière et al., 2020). APC (Autoregressive Predictive Coding) (Chung et al., 2019) is a similar approach that predicts the next token of a speech segment from the history. Another very relevant recent work is PASE (Problem Agnostic Speech Encoder) (Pascual et al., 2019), which aims to learn multi-task speech representations from raw audio by predicting a number of handcrafted features such as MFCCs, prosody and the waveform. Teacher-student models have also been explored for audio self-supervision, where the trained model from a previous epoch acts as the teacher model for the next epoch (Kumar & Ithapu, 2020). All of the works discussed so far are unimodal audio-only self-supervised methods. There are also a few other works that utilize both audio and visual information. There are multiple ways to capture this cross-modal interaction, including audiovisual synchronization (Owens et al., 2018), cross-modal transition modeling (Pham et al., 2019), cross-modal pseudolabel based clustering (Alwassel et al., 2019), contrastive learning (Tian et al., 2019; Patrick et al., 2020), and audiovisual instance discrimination (Morgado et al., 2020). However, most of these works present cross-modal self-supervision in the context of generic audiovisual data, with application to tasks like video action recognition and acoustic scene classification. There is limited work that explores self-supervision specifically in the context of audiovisual speech. We have explored this concept in recent related work (Shukla et al., 2020c;b;a). This work extends the idea from our prior work. Specifically, we move to learning speech representations directly from raw audio instead of from mel features. We also adopt a different and more refined approach for audio-only self-supervision (described in Section 2.3).

Abhinav Shukla's work was supported by a PhD scholarship by Samsung Electronics, UK. ${}^{1}$ Imperial College London, UK ${}^{2}$ Samsung AI Centre, Cambridge, UK ${}^{3}$ Facebook London, UK. Correspondence to: Abhinav Shukla <a.shukla@imperial.ac.uk>.
|
| 18 |
+
|
| 19 |
+
Published at the workshop on Self-supervision in Audio and Speech at the ${37}^{\text{ th }}$ International Conference on Machine Learning, Vienna, Austria. Copyright 2020 by the author(s).
|
| 20 |
+
|
| 21 |
+
§ 2. METHOD
|
| 22 |
+
|
| 23 |
+
§ 2.1. AUDIO ENCODER ARCHITECTURE
|
| 24 |
+
|
| 25 |
+
We use a 1D Resnet18 (He et al., 2016) encoder as the backbone for all of our proposed methods (detailed architecture in the appendix). The encoder $f_a$ (see Fig. 2 and 3) takes as input a 16 kHz raw audio waveform and converts it into a 512-D audio feature vector for every timestep. The output sample rate is 25 audio feature vectors per second, which matches the 25 FPS video in the LRW dataset. This gives us a one-to-one mapping between the two modalities, which helps in cross-modal learning and avoids oversampling or undersampling either modality. Other contemporary self-supervised methods (Alwassel et al., 2019; Patrick et al., 2020) use a 2D Resnet18 audio encoder operating on mel features (i.e. operating like image-based CNNs). However, we wanted our audio encoder to operate directly on the raw audio waveform and perform end-to-end self-supervised representation learning without starting from an intermediate feature such as MFCCs or log mel spectrograms, which is why we chose a 1D Resnet18.

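
Below is a minimal PyTorch sketch of a 1D ResNet-style raw-audio encoder with the same input/output interface (16 kHz waveform in, 512-D features at 25 Hz out). The exact layer configuration of our encoder is given in the appendix; the channel widths and strides here are illustrative assumptions, chosen only so that the total temporal stride is 640 = 16000 / 25.

```python
# Sketch of a 1D ResNet-style raw-audio encoder (illustrative, not the exact architecture).
import torch
import torch.nn as nn

class BasicBlock1d(nn.Module):
    def __init__(self, in_ch, out_ch, stride):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.down = nn.Conv1d(in_ch, out_ch, 1, stride=stride, bias=False)

    def forward(self, x):
        identity = self.down(x)
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + identity)

class RawAudioEncoder(nn.Module):
    """Total temporal stride 640 = 16000 / 25, so a 1 s clip yields 25 feature vectors."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv1d(1, 64, kernel_size=80, stride=4, padding=38)
        strides = [2, 2, 2, 2, 2, 5]            # 4 * 2^5 * 5 = 640
        chans = [64, 64, 128, 128, 256, 512]
        blocks, in_ch = [], 64
        for ch, s in zip(chans, strides):
            blocks.append(BasicBlock1d(in_ch, ch, s))
            in_ch = ch
        self.blocks = nn.Sequential(*blocks)

    def forward(self, wav):                      # wav: (batch, 1, 16000 * seconds)
        return self.blocks(torch.relu(self.stem(wav)))   # (batch, 512, 25 * seconds)

z = RawAudioEncoder()(torch.randn(2, 1, 16000))
print(z.shape)                                   # torch.Size([2, 512, 25])
```
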
§ 2.2. VISUAL SELF-SUPERVISION
|
| 28 |
+
|
| 29 |
+
For visual self-supervision, we generate a talking lip video from a still image and the corresponding audio (see Fig. 1 and Fig. 2). The model is comprised of three components: (i) the audio encoder ${f}_{a}$ (1D Resnet18),(ii) the identity encoder ${f}_{id}$ , and (iii) the frame decoder ${f}_{d}$ . The model operates on 1 second long segments from an audiovisual speech dataset. The audio encoder ${f}_{a}$ (Fig. 2 bottom-left) converts the 1 second audio sample $x$ into a 512 dimensional embedding with 25 timesteps $\left( {z}_{\text{ aud }}\right)$ . The identity encoder ${f}_{id}$ (Fig. 2 top-left) is a 6 layer CNN that converts the mouth region of the first video frame ${x}_{im}$ (a ${64} \times {64}$ image) into a 64 dimensional identity embedding $\left( {z}_{id}\right)$ . This embedding is replicated 25 times to match the timesteps of the audio embedding. The latent representation $z$ is the concatenation of ${z}_{\text{ aud }}$ and ${z}_{id}$ (as shown in Fig. 2). This then goes through the frame decoder ${f}_{d}$ (see Fig. 2 top-right), which is a CNN that uses strided transposed convolutions to generate the video frames of the lip movements. The skip connections between the identity encoder and frame decoder help in preserving subject identity in the generated frames. An L1 reconstruction loss between frames from the generated video $\left( {{f}_{d}\left( z\right) }\right)$ and those from the real video $\left( {y}_{\text{ video }}\right)$ is used to train the network. We use the L1 loss as opposed to the L2 loss to get relatively sharper reconstructions. Our model aims to predict lip movements given only audio and speaker identity information from the first frame. In this process, the audio encoder is driven to produce useful speech features that correlate with lip movements (because accurate lip movement reconstruction will reduce the loss). The audio features obtained by reconstructing lip movements are likely to contain information about the speech content. Our proposed method is related to our prior work on visual self-supervision to learn audio features (Shukla et al., 2020c;b;a). In this work, the key difference is that we use a raw audio encoder for end-to-end learning as opposed to the log mel spectrogram encoder we used in (Shukla et al., 2020b;a). Also, instead of reconstructing the full face, we focus on the mouth region which contains visual information about the speech content, which we hypothesized would lead to better representations for speech recognition.
|
| 30 |
+
|
| 31 |
+

$$
z\left(x, x_{im}\right) = \operatorname{cat}\left(f_a(x), f_{id}(x_{im})\right) \tag{1}
$$

$$
L_{\text{video}}\left(x, x_{im}\right) = \left| f_d\left(z\left(x, x_{im}\right)\right) - y_{\text{video}} \right| \tag{2}
$$

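
A minimal sketch (with placeholder modules, not our released code) of how Eqs. (1) and (2) combine the audio and identity embeddings and compute the L1 reconstruction loss; `f_a`, `f_id` and `f_d` stand in for the audio encoder, identity encoder and frame decoder described above, and the tensor shapes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def visual_ss_loss(f_a, f_id, f_d, wav, first_frame, real_video):
    """wav: (B, 1, 16000), first_frame: (B, 3, 64, 64), real_video: (B, T, 3, 64, 64)."""
    z_aud = f_a(wav)                                            # (B, 512, T)
    z_id = f_id(first_frame)                                    # (B, 64)
    z_id = z_id.unsqueeze(-1).expand(-1, -1, z_aud.size(-1))    # replicate over T timesteps
    z = torch.cat([z_aud, z_id], dim=1)                         # Eq. (1): (B, 576, T)
    fake_video = f_d(z)                                         # (B, T, 3, 64, 64)
    return F.l1_loss(fake_video, real_video)                    # Eq. (2)
```
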

[Figure 1 graphic: training with audio-visual self-supervision]

Figure 1. An illustration of the encoder-decoder model we use for joint audiovisual self-supervision. From an unlabeled sample of audiovisual speech, we use the raw audio waveform and the first video frame to generate a talking lip video. Lip movement reconstruction offers visual self-supervision. We also use decoders to reconstruct salient audio attributes (MFCCs, log mel, waveform) for audio-only self-supervision. By jointly optimizing the reconstruction losses for both modalities, we get joint audiovisual self-supervision. The trained audio encoder can then be used for audio-only downstream tasks.
|
| 44 |
+
|
| 45 |
+
Table 1. Results for spoken word classification (Accuracy in %) on the Speech Commands (SPC, 30 classes) (Warden, 2018) and the Lip Reading in the Wild (LRW, 500 classes) (Chung & Zisserman, 2016) datasets. For evaluation, a 2 layer GRU model is used on the encoder outputs for each pretraining method, before finetuning on the downstream task.

| Pretraining method | Self-supervision | Input type | SPC 100% | SPC 10% | LRW 100% | LRW 10% |
|---|---|---|---|---|---|---|
| MFCC | - | - | 94.33 | 87.08 | 90.16 | 37.56 |
| PASE (Pascual et al., 2019) | Audio | Raw audio | 95.61 | 83.81 | 93.40 | 1.88 |
| APC (Chung et al., 2019) | Audio | Mel features | 94.87 | 89.91 | 93.97 | 57.41 |
| wav2vec (Schneider et al., 2019) | Audio | Raw audio | 96.04 | 91.57 | 94.60 | 19.50 |
| L1 (Shukla et al., 2020b) | Visual | Mel features | 95.11 | 86.43 | 94.45 | 33.43 |
| L1 + Odd (Shukla et al., 2020b) | Audiovisual | Mel features | 95.77 | 90.16 | 94.72 | 67.98 |
| Ours (A) | Audio | Raw audio | 95.06 | 90.56 | 94.14 | 69.70 |
| Ours (V) | Visual | Raw audio | 94.38 | 88.31 | 92.18 | 52.99 |
| Ours (AV) | Audiovisual | Raw audio | 95.21 | 90.63 | 95.37 | 77.13 |
| Supervised 1D Resnet18 | - | Raw audio | 93.79 | 81.12 | 90.34 | 13.72 |

§ 2.3. AUDIO SELF-SUPERVISION
|
| 87 |
+
|
| 88 |
+
In prior work (Shukla et al., 2020b), we employed a temporal-order-based pretext task for audio-only self-supervision (predicting which of the inputs are jumbled or reversed). We wanted to examine whether it is possible to obtain better speech representations using a more refined pretext task. In this work, our methodology for audio-only self-supervision is inspired by PASE (Pascual et al., 2019). We predict three informative audio attributes: (i) MFCCs, (ii) log mel spectrograms, and (iii) the waveform. The key difference from PASE is that we directly train a 1D Resnet18 encoder on the raw audio waveform. PASE requires intermediate steps like adding speech distortions for data augmentation, SincNet filters, and a penultimate Quasi-RNN layer. We also adopt only 3 of the most informative predicted attributes from PASE for simplicity. Fig. 3 illustrates our method for audio-only self-supervision. The audio encoder $f_a$ converts 1 second of 16 kHz input audio ($x$) into a 512-dimensional audio embedding $z_{\text{aud}}$ with 25 timesteps (exactly the same as in the method for visual self-supervision). The audio representation is then used as input to three separate decoders ($f_{mfcc}$, $f_{logmel}$, and $f_{wav}$) that reconstruct the desired audio attributes. We keep the decoder architectures as simple as possible in order to incentivize the audio encoder to capture the important information about the audio attributes. The MFCC and log mel spectrogram decoders (Fig. 3 right) are each a single fully connected layer of 256 units. The waveform decoder (Fig. 3 top-left) is made of a transposed convolution layer followed by a convolution layer that outputs the reconstructed waveform (in an autoencoder-like fashion). We use an L1 loss between each reconstructed attribute and its ground truth $y_{\text{attrib}}$ to train the model. The total loss is the sum of the MFCC loss, the log mel loss, and the waveform loss. For attrib $\in \{mfcc, logmel, wav\}$, the loss is:


$$
L_{\text{audio}}\left(x\right) = \sum_{\text{attrib}} \left| f_{\text{attrib}}\left(f_a(x)\right) - y_{\text{attrib}} \right| \tag{3}
$$

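
A minimal sketch of the audio-only objective in Eq. (3); the decoder modules and the precomputed MFCC / log mel targets are assumptions passed in from outside, so this only illustrates how the per-attribute L1 terms are summed.

```python
import torch
import torch.nn.functional as F

def audio_ss_loss(f_a, decoders, wav, targets):
    """decoders / targets are dicts keyed by 'mfcc', 'logmel', 'wav'."""
    z_aud = f_a(wav)                                    # (B, 512, T)
    loss = torch.zeros((), device=wav.device)
    for name in ("mfcc", "logmel", "wav"):
        pred = decoders[name](z_aud)                    # shaped like the matching target
        loss = loss + F.l1_loss(pred, targets[name])    # sum of per-attribute L1 terms
    return loss
```
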
|
| 93 |
+
|
| 94 |
+
§ 2.4. AUDIOVISUAL SELF-SUPERVISION
|
| 95 |
+
|
| 96 |
+
For joint audiovisual self-supervision (see Fig. 1), we simply combine the two proposed methods for visual-only and audio-only self-supervision. Since the same audio encoder architecture has been used in both models, we can simply use the shared audio representation as input to each of the four decoders (frame decoder, MFCC decoder, log mel decoder, waveform decoder). The total loss is the sum of the audio-only and the visual-only losses. The audio encoder $\left( {f}_{a}\right)$ is thus trained end-to-end and is driven to produce features that contain information about each of the predicted attributes from both the audio and the visual modalities.
|
| 97 |
+
|
| 98 |
+

$$
L_{\text{total}}\left(x, x_{im}\right) = L_{\text{video}}\left(x, x_{im}\right) + L_{\text{audio}}\left(x\right) \tag{4}
$$

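
A sketch of one joint training step for Eq. (4), building on the two loss sketches above: both pretext losses backpropagate into the shared raw-audio encoder. All module names, the batch layout, and the optimizer are illustrative assumptions.

```python
def joint_ss_step(optimizer, f_a, f_id, f_d, decoders, batch):
    # Eq. (4): the visual and audio pretext losses share the audio encoder f_a.
    loss = (visual_ss_loss(f_a, f_id, f_d, batch["wav"], batch["first_frame"], batch["video"])
            + audio_ss_loss(f_a, decoders, batch["wav"], batch["targets"]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
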
|
| 101 |
+
|
| 102 |
+
§ 3. EXPERIMENTS
|
| 103 |
+
|
| 104 |
+
Datasets The LRW dataset (Chung & Zisserman, 2016) is a large, in-the-wild dataset of 500 different isolated words primarily from BBC recordings. It is an audiovisual speech dataset and is thus appropriate for training our methods. We use a subset of LRW that has only nearly frontal videos (with yaw, pitch and roll restricted to a maximum of 10 degrees), in order to have a cleaner supervisory signal from the visual modality. This filtering leaves us with a total of around 40 hours of usable data. We use this subset of the LRW dataset for self-supervised pretraining of our proposed methods. We also use it as a spoken word classification evaluation dataset. The SPC (Speech Commands v0.01) dataset (Warden, 2018) contains 64,727 total utterances of 30 different words by 1,881 speakers. We use SPC also as a spoken word classification evaluation dataset.
|
| 105 |
+
|
| 106 |
+
Baselines We compare our methods against other self-supervised methods for learning speech representations. For all the baselines, we use the code (and pretrained models) provided by the authors. We compare against PASE (Pascual et al., 2019), APC (Chung et al., 2019) and wav2vec (Schneider et al., 2019). We also compare against our prior related work. L1 (Shukla et al., 2020b) is similar to our proposed method for visual-only self-supervision but is based on log mel spectrograms as opposed to raw audio. L1 + Odd (Shukla et al., 2020b) is an audio-visual self-supervised method. We use a more refined audio self-supervision approach in this work. We also compare our methods against two supervised learning baselines for audio. We use 39 dimensional MFCCs (13 coefficients, 13 deltas, and 13 delta-deltas) as the first supervised baseline. The second baseline is a fully supervised 1D Resnet 18 model (same architecture as our pretrained encoders but trained from scratch directly on the evaluation datasets).
|
| 107 |
+
|
| 108 |
+
Experimental setup We evaluate all methods on isolated word classification on the Speech Commands (SPC) (Warden, 2018) and Lip Reading in the Wild (LRW) (Chung & Zisserman, 2016) datasets. We use a 2-layer BiGRU (with 256 units in each layer) on the encoder outputs, followed by a linear layer with as many units as the number of target classes (30 for SPC, 500 for LRW). This acts as the downstream classifier and remains the same for every method. For downstream classification, we finetune the models (as shown at the bottom of Fig. 1) for 50 epochs. The learning rate is 0.0001 for the first 40 epochs and 0.00001 for the last 10 epochs. We use the standard softmax + cross entropy loss for training. We opted to use a BiGRU for simplicity; however, this can be replaced by any model that can classify variable length sequences into discrete categories (such as LSTMs, TCNs, or LiGRUs (Ravanelli et al., 2018)). The results can be seen in Table 1.

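
A minimal sketch of the downstream classifier head described above: a 2-layer BiGRU over the encoder features followed by a linear layer. Taking the last timestep as the sequence summary is a simplification for illustration.

```python
import torch
import torch.nn as nn

class GRUClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, n_classes=500):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):             # feats: (B, T, feat_dim) encoder outputs
        out, _ = self.gru(feats)
        return self.fc(out[:, -1])        # classify from the last timestep

logits = GRUClassifier(n_classes=30)(torch.randn(4, 25, 512))   # SPC has 30 classes
```
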
Table 2. Results for spoken word classification (Accuracy in %) under various levels of introduced noise (SNR in dB). Babble noise from the NOISEX database is used to perturb the audio samples in the LRW and SPC datasets.

| Dataset | Model | -5 dB | 0 dB | 5 dB | 10 dB | 15 dB | 20 dB | Clean |
|---|---|---|---|---|---|---|---|---|
| SPC | MFCC | 76.31 | 84.97 | 90.56 | 91.98 | 93.05 | 94.19 | 94.33 |
| SPC | Ours (A) | 79.35 | 88.42 | 92.34 | 93.41 | 94.63 | 95.04 | 95.06 |
| SPC | Ours (V) | 77.92 | 86.92 | 91.01 | 92.80 | 93.47 | 93.88 | 94.38 |
| SPC | Ours (AV) | 79.79 | 88.69 | 92.21 | 93.57 | 94.65 | 95.02 | 95.21 |
| LRW | MFCC | 50.18 | 70.75 | 81.08 | 85.74 | 88.41 | 90.11 | 90.16 |
| LRW | Ours (A) | 58.84 | 79.13 | 89.14 | 91.72 | 92.87 | 93.84 | 94.14 |
| LRW | Ours (V) | 51.40 | 73.47 | 84.61 | 88.11 | 90.98 | 91.58 | 92.18 |
| LRW | Ours (AV) | 64.63 | 82.59 | 90.08 | 92.09 | 92.91 | 93.87 | 95.37 |

Results with all labels With 100% of the training dataset used, all self-supervised methods achieve comparable performance and outperform fully supervised training. On the SPC dataset, the best overall performance is attained by wav2vec with an accuracy of 96.04%, followed by our prior work at 95.77%, PASE at 95.61%, and our proposed method at 95.21%. On LRW, the best performance is achieved by our method with an accuracy of 95.37%.

Learning with fewer labels The concept of self-supervision is especially relevant to situations where labeled data is scarce. To compare the methods in such situations, we perform the same word classification experiments on the SPC and LRW datasets but with only 10% of the samples being used in the training set (the validation and test sets remain unchanged). Note that we completely omit the remaining 90% of the training set (see Tables 6, 7, 8 for exact split details). This leaves us with around 170 training examples per class for the SPC dataset (30 classes) and only around 20 training examples per class for the LRW dataset (500 classes), which makes the problem significantly more challenging. On SPC, there is a slight degradation in the performance of all methods. Our method attains an accuracy of 90.63%, which is second only to wav2vec at 91.57%. On LRW, all other methods are severely affected and overfit to the small training set. Our method is the least affected and significantly outperforms all other methods with a best performance of 77.13%.

Noisy situations We also compare the performance of the variations of our method under various levels of artificially induced noise. We introduce babble noise from the NOISEX (Varga & Steeneken, 1993) database to create noisy versions of the SPC and LRW datasets. We use six levels of noise, in the range of -5 dB SNR to 20 dB SNR in increments of 5 dB. The results for the noisy datasets can be seen in Table 2. All our methods outperform MFCCs at all noise levels on both datasets, and the joint audiovisual method performs best.

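
A sketch of how additive noise can be mixed into a clean utterance at a target SNR (the paper uses babble noise from NOISEX; here `noise` is any same-length array, and the scaling is the standard power-ratio computation).

```python
import numpy as np

def add_noise_at_snr(clean, noise, snr_db):
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2) + 1e-12
    target_noise_power = clean_power / (10 ** (snr_db / 10.0))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return clean + scaled_noise

noisy = add_noise_at_snr(np.random.randn(16000), np.random.randn(16000), snr_db=5)
```
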
§ 4. DISCUSSION
|
| 152 |
+
|
| 153 |
+
There are multiple interesting observations from our obtained results. Audio-only supervision yields better results than visual-only supervision. However, the model trained with joint audiovisual self-supervision performs better than the models trained with unimodal audio-only or visual-only self-supervision in almost all scenarios, including the noisy datasets. This highlights the utility of the complementary information encoded by visual self-supervision and demonstrates the potential of multimodal self-supervision as a useful tool in speech representation learning. Also notably, despite all tested methods being very similar in performance on the full datasets, there is a clear gap when using a small training set, and our method is the best at learning with fewer labels, which is very relevant to low resource domains. This can have significant impact in problems like low resource language ASR, emotion recognition, and cross-cultural ASR. Our method also significantly outperforms fully supervised training from scratch, which further motivates the utility of self-supervised pretraining for speech.

Future work This is a work in progress, and there are many other speech-related applications on which we can evaluate our model. In this work, we focused only on the classification of isolated words. We will also test the model on continuous CTC-based speech recognition on datasets such as Librispeech and TIMIT, and on other tasks such as speaker identification and speech emotion recognition. An especially relevant application would be ASR for low-resource languages. There are also interesting directions to explore to improve our method. In this work, we show how joint audiovisual information can be used for audio representation learning. In a similar manner, we could also utilize this cross-modal information for visual representation learning (e.g. predicting speech attributes from the visual modality). Another interesting line of work is multimodal contrastive self-supervised learning, which has been demonstrated for generic audiovisual data but not for audiovisual speech.

papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/gyUMlKhTJZe/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Speaker Diarization as a Fully Online Learning Problem in MiniVox
|
| 2 |
+
|
| 3 |
+
Baihan Lin ${}^{1}$ Xinxin Zhang ${}^{2}$
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
We proposed a novel AI framework to conduct real-time multi-speaker diarization and recognition without prior registration and pretraining in a fully online learning setting. Our contributions are threefold. First, we proposed a new benchmark to evaluate the rarely studied fully online speaker diarization problem. We built upon existing datasets of real world utterances to automatically curate ${MiniVox}$, an experimental environment which generates infinite configurations of continuous multi-speaker speech streams. Secondly, we considered the practical problem of online learning with episodically revealed rewards and introduced a solution based on semi-supervised and self-supervised learning methods. Lastly, we provided a workable web-based recognition system which interactively handles the cold-start problem of adding new users by transferring representations of old arms to new ones with an extendable contextual bandit. We demonstrated that our proposed method obtained robust performance in the online MiniVox framework. ${}^{1}$

## 1. Introduction
|
| 10 |
+
|
| 11 |
+
Speaker recognition involves two essential steps: registration and identification (Tirumala et al., 2017). In laboratory setting, the state-of-the-art approaches usually emphasize the registration step with deep networks (Snyder et al., 2018) trained on large-scale speaker profile dataset (Nagrani et al., 2017). However, in real life, requiring all users to complete voiceprint registration before a multi-speaker teleconference is hardly a preferable way of system deployment. Dealing with this challenge, speaker diarization is the task to partition an audio stream into homogeneous segments according to the speaker identity (Anguera et al., 2012). Recent advancements have enabled (1) contrastive audio embedding extractions such as Mel Frequency Cepstral Coefficients (MFCC) (Hasan et al., 2004), i-vectors (Shum et al., 2013) and d-vectors (Wang et al., 2018); (2) effective clustering modules such as Gaussian mixture models (GMM) (Zajíc et al., 2017), mean shift (Senoussaoui et al., 2013), Kmeans and spectral clustering (Wang et al., 2018) and supervised Bayesian non-parametric methods (Fox et al., 2011; Zhang et al., 2019); and (3) reasonable resegmentation modules such as Viterbi and factor analysis subspace (Sell & Garcia-Romero, 2015). In this work, we proposed a new paradigm to consider the speaker diarization as a fully online learning problem of the speaker recognition task: it combines the embedding extraction, clustering and resegmentation into the same problem as an online decision making problem.
|
| 12 |
+
|
| 13 |
+
Why is this online learning problem different? The state-of-the-art speaker diarization systems usually require large datasets to train their audio extraction embeddings and clustering modules, especially the ones with deep neural networks and Bayesian nonparametric models. In many real-world applications in developing countries, however, the training set can be limited and hard to collect. Since these modules are pretrained, applying them to out-of-distribution environments can be problematic. For instance, an intelligent system trained on data from elderly American speakers might find it hard to generalize to a diarization task involving Japanese children, because both the acoustic and contrastive features are different. To tackle this problem, we want the system to learn continually. To push this problem to the extreme, we are interested in a fully online learning setting, where not only are the examples available one by one, but the agent also receives no pretraining from any training set before deployment and learns to detect speaker identity on the fly through reward feedback. To the best of our knowledge, this work is the first to consider diarization as a fully online learning problem. Through this work, we aim to understand the extent to which diarization can be solved as merely an online learning problem and whether traditional online learning algorithms (e.g. contextual bandits) can be beneficial in providing a practical solution.

---
|
| 16 |
+
|
| 17 |
+
${}^{1}$ Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA ${}^{2}$ Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA. Correspondence to: Baihan Lin <bai-han.lin@columbia.edu>.
|
| 18 |
+
|
| 19 |
+
The ${37}^{\text{th }}$ International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).
|
| 20 |
+
|
| 21 |
+
${}^{1}$ The web-based application of a real-time system can be accessed at https://www.baihan.nyc/viz/VoiceID/. The code for benchmark evaluation can be accessed at https://github.com/doerlbh/MiniVox

---
|
| 24 |
+
|
| 25 |
+
What is a preferable online speaker diarization system? A preferable AI engine for such a realistic speaker recognition and diarization system should (1) not require user registration, (2) allow new users to be registered into the system in real time, (3) transfer voiceprint information from old users to new ones, and (4) be up and running without pretraining on a large amount of data in advance. While attractive, assumption (4) introduces an additional caveat: the labeling of the user profiles happens purely on the fly, trading models pretrained on big data for a user who directly interacts with the system and provides labels by correcting the agent. To tackle these challenges, we formulated this problem as an interactive learning model with cold-start arms and episodically revealed rewards (users can either reveal no feedback, approve by not intervening, or correct the agent).

Why do we need a new benchmark? Traditional datasets for the speaker diarization task are limited: CALLHOME American English (Canavan et al., 1997) and NIST RT-03 English CTS (Martin & Przybocki, 2000) contain a limited number of utterances recorded under controlled conditions. For online learning experiments, a learn-from-scratch agent usually needs a long data stream to reach a comparable result. Large scale speaker recognition datasets like VoxCeleb (Nagrani et al., 2017; 2019) and Speakers in the Wild (SITW) (McLaren et al., 2016) contain thousands of speaker utterances recorded in various challenging multi-speaker acoustic environments, but they are usually only used to pretrain diarization embeddings. In this work, we proposed a new benchmark called MiniVox, which can transform any large scale speaker identification dataset into infinitely long audio streams with various configurations.

We built upon LinUCB (Li et al., 2010) and proposed a semi-supervised learning variant to account for the fact that the rewards are entirely missing in many episodes. For each episode without feedbacks, we applied a self-supervision process to assign a pseudo-action upon which the reward mapping is updated. Finally, we generated new arms by transferring learned arm parameters to similar profiles given user feedbacks.
|
| 30 |
+
|
| 31 |
+
### 2. The Fully Online Learning Problem

Algorithm 1 presents at a high level our problem setting, where $c(t) \in \mathbb{R}^d$ is a vector describing the context at time $t$, $r_a(t) \in [0,1]$ is the reward of action $a$ at time $t$, and $r(t) \in [0,1]^K$ denotes a vector of rewards for all arms at time $t$. $\mathbb{P}_{c,r}$ denotes a joint probability distribution over $(c, r)$, and $\pi : C \rightarrow A$ denotes a policy. Unlike the traditional setting, in step 5 the rewards are revealed in an episodic fashion (i.e. sometimes there is feedback of the reward being 0 or 1, and sometimes there is no feedback of any kind). We consider our setting an online semi-supervised learning problem (Yver, 2009; Ororbia et al., 2015), where the agent learns from both labeled and unlabeled data in an online setting.


|
| 36 |
+
|
| 37 |
+
Figure 1. Arm expansion process of the bandit agents.
|
| 38 |
+
|
| 39 |
+
Algorithm 1 Online Learning with Episodic Rewards

---

for $t = 1, 2, 3, \cdots, T$ do

&nbsp;&nbsp;$(c(t), r(t))$ is drawn according to $\mathbb{P}_{c,r}$

&nbsp;&nbsp;$c(t)$ is revealed to the player

&nbsp;&nbsp;Player chooses an action $i = \pi_t(c(t))$

&nbsp;&nbsp;Feedbacks $r_a(t)$ for all arms are episodically revealed

&nbsp;&nbsp;Player updates its policy $\pi_t$

end for

---

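
A schematic of the interaction loop in Algorithm 1 (not the authors' released code): contexts arrive one by one and reward feedback is only revealed episodically. The `env` and `policy` interfaces are placeholder assumptions.

```python
import numpy as np

def run_online(env, policy, T, reveal_prob=0.1, rng=np.random.default_rng(0)):
    total_reward = 0.0
    for t in range(T):
        context, rewards = env.sample()       # (c(t), r(t)) drawn from P_{c,r}
        arm = policy.choose(context)          # player acts on the revealed context
        total_reward += rewards[arm]          # counted whether or not it is observed
        if rng.random() < reveal_prob:        # episodic feedback from the background
            policy.update(context, arm, rewards[arm])
    return total_reward
```
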
## 3. Proposed Online Learning Solution
|
| 60 |
+
|
| 61 |
+
### 3.1. Contextual Bandits with Extendable Arms
|
| 62 |
+
|
| 63 |
+
In an ideal online learning scenario without oracle, we start with a single arm, and when new labels arrive new arms are then generated accordingly. This problem is loosely modelled by the bandits with infinitely many arms (Berry et al., 1997). For our specific application of speaker registration process, we applied the arm expansion process outlined in Figure 1: starting from a single arm (for the "new" action), if a feedback confirms a new addition, a new arm is initialized and appended to the arm list.
|
| 64 |
+
|
| 65 |
+
### 3.2. Episodically Rewarded LinUCB
|
| 66 |
+
|
| 67 |
+
We proposed Background Episodically Rewarded LinUCB (BerlinUCB), a semi-supervised and self-supervised online contextual bandit which updates the context representations and reward mapping separately given the state of the feedbacks being present or missing (Algorithm 2). We assume that (1) when there are feedbacks available, the feedbacks are genuine, assigned by the oracle, and (2) when the feedbacks are missing (not revealed by the background), it is either due to the fact that the action is preferred (no intervention required by the oracle, i.e. with an implied default rewards), or that the oracle didn't have a chance to respond or intervene (i.e. with unknown rewards). Especially in the Step 15, when there is no feedbacks, we assign the context ${\mathbf{x}}_{t}$ to a class ${a}^{\prime }$ (an action arm) with the self-supervision given the previous labelled context history. Since we don't have the actual label for this context, we only update reward mapping parameter ${\mathbf{b}}_{{a}^{\prime }}$ and leave the covariance matrix ${\mathbf{A}}_{{a}^{\prime }}$ untouched. The additional usage of unlabelled data (or unrevealed feedback) is especially important in our model.
|
| 68 |
+
|
| 69 |
+

|
| 70 |
+
|
| 71 |
+
Figure 2. (A) The flowchart of the Online Learning problem and (B) the MiniVox Benchmark.
|
| 72 |
+
|
| 73 |
+
Algorithm 2 BerlinUCB

---

Initialize $c_t \in \mathbb{R}_+$, $\mathbf{A}_a \leftarrow \mathbf{I}_d$, $\mathbf{b}_a \leftarrow \mathbf{0}_{d \times 1}$ for all $a \in \mathcal{A}_t$

for $t = 1, 2, 3, \cdots, T$ do

&nbsp;&nbsp;Observe features $\mathbf{x}_t \in \mathbb{R}^d$

&nbsp;&nbsp;for all $a \in \mathcal{A}_t$ do

&nbsp;&nbsp;&nbsp;&nbsp;$\hat{\theta}_a \leftarrow \mathbf{A}_a^{-1} \mathbf{b}_a$

&nbsp;&nbsp;&nbsp;&nbsp;$p_{t,a} \leftarrow \hat{\theta}_a^{\top} \mathbf{x}_t + c_t \sqrt{\mathbf{x}_t^{\top} \mathbf{A}_a^{-1} \mathbf{x}_t}$

&nbsp;&nbsp;end for

&nbsp;&nbsp;Choose arm $a_t = \arg\max_{a \in \mathcal{A}_t} p_{t,a}$

&nbsp;&nbsp;if the background revealed the feedbacks then

&nbsp;&nbsp;&nbsp;&nbsp;Observe feedback $r_{a_t,t}$

&nbsp;&nbsp;&nbsp;&nbsp;$\mathbf{A}_{a_t} \leftarrow \mathbf{A}_{a_t} + \mathbf{x}_t \mathbf{x}_t^{\top}$

&nbsp;&nbsp;&nbsp;&nbsp;$\mathbf{b}_{a_t} \leftarrow \mathbf{b}_{a_t} + r_{a_t,t} \mathbf{x}_t$

&nbsp;&nbsp;else if the background revealed NO feedbacks then

&nbsp;&nbsp;&nbsp;&nbsp;if using self-supervision feedback then

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$r' = [a_t == \operatorname{predict}(\mathbf{x}_t)]$ % clustering modules

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\mathbf{b}_{a_t} \leftarrow \mathbf{b}_{a_t} + r' \mathbf{x}_t$

&nbsp;&nbsp;&nbsp;&nbsp;else % ignore self-supervision signals

&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\mathbf{A}_{a_t} \leftarrow \mathbf{A}_{a_t} + \mathbf{x}_t \mathbf{x}_t^{\top}$

&nbsp;&nbsp;&nbsp;&nbsp;end if

&nbsp;&nbsp;end if

end for

---

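
A compact numpy sketch of the BerlinUCB updates in Algorithm 2. The self-supervision module is a placeholder `predict(x)` callable (e.g. a clustering rule fit on previously labelled contexts); the class name and method names are illustrative, not the authors' code.

```python
import numpy as np

class BerlinUCB:
    def __init__(self, d, n_arms, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(d) for _ in range(n_arms)]      # per-arm covariance A_a
        self.b = [np.zeros(d) for _ in range(n_arms)]    # per-arm reward vector b_a

    def choose(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))                    # argmax over p_{t,a}

    def update_with_feedback(self, arm, x, r):           # feedback revealed by the background
        self.A[arm] += np.outer(x, x)
        self.b[arm] += r * x

    def update_without_feedback(self, arm, x, predict=None):
        if predict is not None:                          # self-supervised pseudo-reward
            r_pseudo = float(predict(x) == arm)
            self.b[arm] += r_pseudo * x                  # only the reward mapping moves
        else:                                            # ignore self-supervision signals
            self.A[arm] += np.outer(x, x)                # only the context statistics move
```
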
### 3.3. Self-Supervision and Semi-Supervision Modules
|
| 122 |
+
|
| 123 |
+
We construct our self-supervision modules based on the cluster assumption of the semi-supervised learning problem: points within the same cluster are more likely to share a label. As shown in much work on modern speaker diarization, clustering algorithms like GMM (Zajíc et al., 2017), mean shift (Senoussaoui et al., 2013) and spectral clustering (Wang et al., 2018) are powerful unsupervised modules, especially in their offline versions. Their online variants, however, often perform poorly (Zhang et al., 2019). Since in this work we focus on the completely online setting, we chose three popular clustering algorithms as self-supervision modules: GMM, Kmeans and K-nearest neighbors.

### 3.4. Complete Engine for Online Speaker Diarization
|
| 126 |
+
|
| 127 |
+
To adapt our BerlinUCB algorithm to the specific application of speaker recognition, we first define our actions. There are three major classes of actions: an arm "New" to denote that a new speaker is detected, an arm "No Speaker" to denote that no one is speaking, and $N$ different arms "User n" to denote that user $n$ is speaking. Table 1 presents the reward assignment given the four types of feedback. Note that we assume that when the agent correctly identifies the speaker (or the absence of a speaker), the user (as the feedback dispenser) sends no feedback to the system by doing nothing. In other words, in an ideal scenario where the agent does a perfect job by correctly identifying the speaker all the time, we no longer need to be around to correct it (i.e. it is truly feedback free). As we pointed out earlier, this can be a challenge early on, because other than implicitly approving the agent's choice, receiving no feedback could also mean the feedback was not revealed properly (e.g. the human oracle took a break). Furthermore, we note that when the "No Speaker" and "User n" arms are correctly identified, there is no feedback from the human oracle (meaning that these arms would never learn from a single positive reward if we did not use the "None" feedback iterations at all). The semi-supervision by self-supervision step is tailored exactly for this scenario, where the lack of a revealed positive reward for the "No Speaker" and "User n" arms is compensated by the additional training of the reward mapping $\mathbf{b}_{a_t}$ when the context $\mathbf{x}_t$ is correctly assigned.

To tackle the cold start problem, the agent grows its arms in the following fashion: the agent starts with two arms, "No Speaker" and "New"; if a new speaker is actually speaking, we have the following three conditions: (1) if "New" is chosen, the user approves this arm by giving it a positive reward (i.e. clicking on it) and the agent initializes a new arm called "User $N$" and updates $N = N + 1$ (where $N$ is the number of registered speakers at the moment); (2)

<table><tr><td>Feedback types</td><td>(+, +)</td><td>(+, -)</td><td>(-, +)</td><td>None</td></tr><tr><td>New</td><td>$r = 1$</td><td>$r = 0$</td><td>-</td><td rowspan="3">Alg. 2 Step 13</td></tr><tr><td>No Speaker</td><td>-</td><td>$r = 0$</td><td>$r = 0$</td></tr><tr><td>User n</td><td>-</td><td>$r = 0$</td><td>$r = 0$</td></tr></table>

Table 1. Routes given either no feedback, or a feedback telling the agent that the correct label is $a^*$. (+, +) means that the agent guessed right by choosing the correct arm; (+, -) means that the agent chose this arm incorrectly, since the correct one is another arm; (-, +) means that the agent did not choose this arm, while it turned out to be the correct one. "-" means NA.

if "No Speaker" is chosen, the user disapproves this arm by giving it a zero reward and clicking on the "New" instead), while the agent initializes a new arm; (3) if one of the user arms is chosen (e.g. "User 5" is chosen while in fact a new person is speaking), the agent copies the wrong user arm's parameters to initialize the new arm, since the voiceprint of the mistaken one might be beneficial to initialize the new user profile. In this way, we can transfer what has been learned for a similar context representations to the new arm.
|
| 136 |
+
|
| 137 |
+
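
A sketch of this arm-expansion step, building on the BerlinUCB sketch above (which stores `A` and `b` as Python lists): when a new speaker is confirmed, a new arm is appended, optionally initialized from the arm that was mistaken for it. The function name is illustrative.

```python
import numpy as np

def add_arm(bandit, copy_from=None):
    """Append a new arm; optionally transfer statistics from a mistaken arm (case 3)."""
    d = bandit.A[0].shape[0]
    if copy_from is None:                          # fresh arm (cases 1 and 2)
        bandit.A.append(np.eye(d))
        bandit.b.append(np.zeros(d))
    else:                                          # copy the mistaken arm's voiceprint statistics
        bandit.A.append(bandit.A[copy_from].copy())
        bandit.b.append(bandit.b[copy_from].copy())
```
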
## 4. Benchmark Description: MiniVox
|
| 138 |
+
|
| 139 |
+
MiniVox is an automatic framework to transform any speaker-labelled dataset into a continuous speech data stream with episodically revealed label feedback. Since our online learning problem setting assumes learning the voiceprints without any previous training data at all, MiniVox's flexibility in length and configuration is especially important. As outlined in Figure 2, MiniVox has a straightforward data stream generation pipeline: given a pool of single-speaker-annotated utterances, randomly concatenate multiple pieces with a chosen number of speakers and a desired length. The reward stream is then sparsified with a parameter $p$, the percentage of time a feedback is revealed.

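
A sketch of this generation pipeline under simple assumptions (utterances stored as same-rate 1-D arrays, per-sample labels): utterances from a speaker-labelled pool are concatenated at random, and the label stream is sparsified so that feedback is only revealed with probability $p$.

```python
import numpy as np

def make_minivox(pool, n_speakers, n_segments, p_reveal, rng=np.random.default_rng(0)):
    """pool: dict speaker_id -> list of 1-D waveforms (same sample rate)."""
    speakers = rng.choice(list(pool.keys()), size=n_speakers, replace=False)
    audio, labels = [], []
    for _ in range(n_segments):
        spk = rng.choice(speakers)
        utt = pool[spk][rng.integers(len(pool[spk]))]
        audio.append(utt)
        labels.append(np.full(len(utt), spk))
    audio, labels = np.concatenate(audio), np.concatenate(labels)
    revealed = rng.random(len(labels)) < p_reveal      # episodic feedback mask (probability p)
    return audio, labels, revealed
```
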
There are two scenarios that we can evaluate in MiniVox: if we assume there is an oracle, the online learning model is given the fixed number of the speakers in the stream; if we assume there is no oracle, the online learning model will start from zero speaker and then gradually discover and register new speakers for future identification and diarization.
|
| 142 |
+
|
| 143 |
+
## 5. Empirical Evaluation
|
| 144 |
+
|
| 145 |
+
### 5.1. Experimental Setup and Metrics
|
| 146 |
+
|
| 147 |
+
We applied MiniVox on VoxCeleb (Nagrani et al., 2017) to generate three data streams with 5, 10 and 20 speakers to simulate real-world conversations. We extracted two types of features (more details in section 5.2) and evaluated them in two scenarios (with or without oracle). The reward streams are sparsified given a revealing probability of 0.5, 0.1, 0.01 or 0.001. In summary, we evaluated our models in a combinatorial total of 3 speaker counts $\times$ 4 reward revealing probabilities $\times$ 2 feature types $\times$ 2 test scenarios $=$ 48 online learning environments. The online learning timescales range from $\sim$12,000 to $\sim$60,000 timeframes. As notation for a specific MiniVox, in this paper we denote by "MiniVox C5-MFCC-60k" a MiniVox environment with 5 speakers spanning 60k time frames and using MFCC as features.

To evaluate the performance, we reported Diarization Error Rates (DER) in the above MiniVox environments. In addition, as a common metric in online learning literature, we also recorded the cumulative reward: at each frame, if the agent correctly predicts a given speaker, the reward is counted as +1 (no matter if the agent observes the reward).
|
| 150 |
+
|
| 151 |
+
We compared 9 agents: LinUCB is the contextual bandit with extendable arms proposed in section 3.1. BerlinUCB is the standard contextual bandit model designed for sparse feedbacks without the self-supervision modules. We have four baseline models: Kmeans, KNN (with K=5), GMM and a random agent ${}^{2}$ . To test the effect of self-supervision, we introduced three clustering modules in BerlinUCB (alg 2, Step 15) denoted: B-Kmeans, B-KNN, and B-GMM.
|
| 152 |
+
|
| 153 |
+
### 5.2. Feature Embeddings: MFCC and CNN
|
| 154 |
+
|
| 155 |
+
We utilized two feature embeddings for our evaluation: MFCC (Hasan et al., 2004) and a Convolutional Neural Network (CNN) embedding. We utilized the same CNN architecture as the VGG-M (Chatfield et al., 2014) used in the VoxCeleb evaluation (Nagrani et al., 2017). It takes the spectrogram of an utterance as the input and generates a 1024-dimensional feature vector at layer fc8 (for more details of this CNN, please refer to table 4 in (Nagrani et al., 2017)).

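
A sketch of a standard MFCC front end of the kind used here, purely for illustration (the exact parameters in the paper may differ); librosa is used as the feature extractor, and the delta/delta-delta stacking is one common choice rather than a requirement of the method.

```python
import librosa
import numpy as np

def mfcc_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    feats = np.vstack([mfcc,
                       librosa.feature.delta(mfcc),
                       librosa.feature.delta(mfcc, order=2)])
    return feats.T                      # (frames, 3 * n_mfcc), e.g. (frames, 39)
```
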
Why don't we use more complicated embeddings? Although more complicated embedding extraction modules such as i-vectors (Shum et al., 2013) or d-vectors (Wang et al., 2018) can improve diarization, they require extensive pretraining on big datasets, which is contradictory to our problem setting and beyond our research scope.
|
| 158 |
+
|
| 159 |
+
Why do we still include this CNN? The CNN model was trained for speaker verification task in VoxCeleb and we are curious about the relationship between a learned representation and our online learning agents. Despite this note, we are most interested in the performance given MFCC features, because we aim to push the system fully online, to the limit of not having pretraining of any type before deployment.
|
| 160 |
+
|
| 161 |
+
### 5.3. Results
|
| 162 |
+
|
| 163 |
+
Given MFCC features without pretraining, our online learning agents demonstrated relatively robust performance. As shown in Figure 3(a, b, c, d), in many conditions the proposed contextual bandits significantly outperformed the baselines when the revealing probability was very low ($\mathrm{p} = 0.01$ or $0.1$).
|
| 164 |
+
|
| 165 |
+
---
|
| 166 |
+
|
| 167 |
+
${}^{2}$ In the oracle-free case, the random agent randomly selects from the "new" arm and the registered user arms, which can lead to registering infinitely (and incorrectly) many profiles.
|
| 168 |
+
|
| 169 |
+
---
|
| 170 |
+
|
| 171 |
+

|
| 172 |
+
|
| 173 |
+
Figure 3. Example reward curve. Positive: (a) C10-MFCC, p=0.01; (b) C20-MFCC, p=0.01; (c) C5-MFCC, p=0.01; (d) C5-MFCC, $\mathrm{p} = {0.5}$ ; (e) C20-MFCC, p=0.01, oracle. Negative: (f) C10-MFCC, p=0.01, oracle; (g) C10-MFCC, p=0.1, oracle; (h) C5-CNN, p=0.5.
|
| 174 |
+
|
| 175 |
+
Table 2. Diarization Error Rate (%) in MiniVox without Oracle
|
| 176 |
+
|
| 177 |
+
<table><tr><td rowspan="2"/><td colspan="3">MiniVox C5-MFCC-60k</td><td colspan="3">MiniVox C5-CNN-12k</td></tr><tr><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td></tr><tr><td>BerlinUCB</td><td>71.81</td><td>80.03</td><td>82.38</td><td>17.42</td><td>32.03</td><td>65.16</td></tr><tr><td>LinUCB</td><td>74.74</td><td>78.71</td><td>79.30</td><td>17.81</td><td>32.73</td><td>58.98</td></tr><tr><td>B-Kmeans</td><td>82.82</td><td>79.15</td><td>77.39</td><td>28.83</td><td>63.67</td><td>82.58</td></tr><tr><td>B-KNN</td><td>78.71</td><td>80.62</td><td>77.39</td><td>28.36</td><td>82.58</td><td>82.58</td></tr><tr><td>B-GMM</td><td>85.32</td><td>83.41</td><td>87.67</td><td>99.61</td><td>99.61</td><td>99.69</td></tr><tr><td>Kmeans</td><td>86.20</td><td>85.76</td><td>82.67</td><td>5.47</td><td>8.91</td><td>40.23</td></tr><tr><td>KNN</td><td>70.34</td><td>72.98</td><td>78.12</td><td>6.09</td><td>13.75</td><td>53.75</td></tr><tr><td>GMM</td><td>99.27</td><td>99.27</td><td>99.27</td><td>99.61</td><td>99.61</td><td>99.69</td></tr><tr><td>random</td><td>83.41</td><td>81.50</td><td>82.97</td><td>77.89</td><td>78.98</td><td>77.66</td></tr></table>
|
| 178 |
+
|
| 179 |
+
<table><tr><td rowspan="2"/><td colspan="3">MiniVox C10-MFCC-60k</td><td colspan="3">MiniVox C10-CNN-12k</td></tr><tr><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td></tr><tr><td>BerlinUCB</td><td>82.46</td><td>85.31</td><td>89.26</td><td>42.77</td><td>57.41</td><td>74.02</td></tr><tr><td>LinUCB</td><td>84.36</td><td>86.73</td><td>93.36</td><td>49.55</td><td>68.57</td><td>81.16</td></tr><tr><td>B-Kmeans</td><td>91.15</td><td>92.58</td><td>96.68</td><td>60.89</td><td>70.89</td><td>99.55</td></tr><tr><td>B-KNN</td><td>89.73</td><td>90.05</td><td>96.68</td><td>60.89</td><td>82.05</td><td>99.55</td></tr><tr><td>B-GMM</td><td>90.21</td><td>94.63</td><td>98.42</td><td>99.20</td><td>93.57</td><td>99.64</td></tr><tr><td>Kmeans</td><td>92.26</td><td>94.15</td><td>98.10</td><td>10.36</td><td>18.75</td><td>47.86</td></tr><tr><td>KNN</td><td>79.78</td><td>84.52</td><td>97.47</td><td>9.29</td><td>31.25</td><td>70.27</td></tr><tr><td>GMM</td><td>98.42</td><td>98.42</td><td>99.21</td><td>99.20</td><td>99.20</td><td>99.37</td></tr><tr><td>random</td><td>90.21</td><td>88.78</td><td>92.89</td><td>79.29</td><td>81.34</td><td>83.75</td></tr></table>
|
| 180 |
+
|
| 181 |
+
<table><tr><td rowspan="2"/><td colspan="3">MiniVox C20-MFCC-60k</td><td colspan="3">MiniVox C20-CNN-12k</td></tr><tr><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td></tr><tr><td>BerlinUCB</td><td>88.62</td><td>87.02</td><td>92.79</td><td>41.72</td><td>59.06</td><td>83.28</td></tr><tr><td>LinUCB</td><td>91.35</td><td>88.94</td><td>88.46</td><td>51.56</td><td>83.52</td><td>74.84</td></tr><tr><td>B-Kmeans</td><td>95.19</td><td>95.99</td><td>96.96</td><td>72.03</td><td>75.31</td><td>99.53</td></tr><tr><td>B-KNN</td><td>93.43</td><td>95.99</td><td>96.79</td><td>72.03</td><td>74.06</td><td>99.53</td></tr><tr><td>B-GMM</td><td>92.79</td><td>96.31</td><td>97.76</td><td>87.73</td><td>81.09</td><td>83.28</td></tr><tr><td>Kmeans</td><td>90.54</td><td>93.43</td><td>95.51</td><td>6.02</td><td>12.81</td><td>54.77</td></tr><tr><td>KNN</td><td>86.38</td><td>89.26</td><td>95.99</td><td>8.67</td><td>32.66</td><td>75.08</td></tr><tr><td>GMM</td><td>96.96</td><td>97.44</td><td>98.88</td><td>98.98</td><td>98.98</td><td>99.37</td></tr><tr><td>random</td><td>93.59</td><td>94.07</td><td>95.35</td><td>87.03</td><td>87.73</td><td>89.69</td></tr></table>
|
| 182 |
+
|
| 183 |
+
Learning without Oracle. Table 2 reports DER in MiniVox without Oracle. In MFCC environments, we observed that in high-difficulty scenarios (such as C20), the proposed BerlinUCB variants outperformed all the baselines even when the reward revealing probability was as low as 0.01. In low-difficulty scenarios, traditional clustering methods like KNN performed best, and this advantage was inherited by B-KNN and B-Kmeans when feedback was sparse ($\mathrm{p} = 0.01$). In the CNN cases, we observed that Kmeans performed best. This is expected because the CNN model was trained with a contrastive loss for high verification accuracy (Nagrani et al., 2017). While the clustering modules merely classify the CNN features by proximity, our online learning models need to learn the reward mapping from scratch while maintaining a good balance between exploitation and exploration.
|
| 184 |
+
|
| 185 |
+
Learning with Oracle. Given the number of speakers, traditional clustering agents performed better (Table 3). However, the behaviors vary: GMM performed the poorest in the oracle-free environments but the best in the environments with an oracle; and despite being the best model in many oracle-free environments, Kmeans performed poorly in the MFCC environments with an oracle. Another winning algorithm, KNN, requires the model to store all historical data points and search through the entire memory, which can be computationally prohibitive in real-world applications. Our online learning models maintain relatively robust performance, staying among the top 3 algorithms in most cases with and without an oracle.
|
| 186 |
+
|
| 187 |
+
Table 3. Diarization Error Rate (%) in MiniVox with Oracle
|
| 188 |
+
|
| 189 |
+
<table><tr><td rowspan="2"/><td colspan="3">MiniVox C5-MFCC-60k</td><td colspan="3">MiniVox C5-CNN-12k</td></tr><tr><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td></tr><tr><td>BerlinUCB</td><td>74.89</td><td>77.24</td><td>86.93</td><td>17.27</td><td>22.19</td><td>66.02</td></tr><tr><td>LinUCB</td><td>72.83</td><td>78.12</td><td>76.80</td><td>17.73</td><td>32.73</td><td>58.98</td></tr><tr><td>B-Kmeans</td><td>75.33</td><td>78.27</td><td>83.11</td><td>20.55</td><td>40.70</td><td>58.98</td></tr><tr><td>B-KNN</td><td>77.39</td><td>77.97</td><td>83.99</td><td>20.47</td><td>41.33</td><td>58.98</td></tr><tr><td>B-GMM</td><td>74.16</td><td>76.21</td><td>77.24</td><td>52.58</td><td>81.02</td><td>58.98</td></tr><tr><td>Kmeans</td><td>78.41</td><td>82.82</td><td>83.11</td><td>4.06</td><td>7.42</td><td>39.53</td></tr><tr><td>KNN</td><td>70.63</td><td>73.27</td><td>80.47</td><td>6.64</td><td>13.75</td><td>53.52</td></tr><tr><td>GMM</td><td>70.34</td><td>72.54</td><td>74.74</td><td>54.38</td><td>81.02</td><td>58.98</td></tr><tr><td>random</td><td>79.59</td><td>80.76</td><td>85.9</td><td>79.92</td><td>80.39</td><td>85.55</td></tr></table>
|
| 190 |
+
|
| 191 |
+
<table><tr><td rowspan="2"/><td colspan="3">MiniVox C10-MFCC-60k</td><td colspan="3">MiniVox C10-CNN-12k</td></tr><tr><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td></tr><tr><td>BerlinUCB</td><td>88.31</td><td>90.21</td><td>95.89</td><td>45.18</td><td>65.27</td><td>79.38</td></tr><tr><td>LinUCB</td><td>84.99</td><td>91.63</td><td>97.00</td><td>50.00</td><td>72.14</td><td>65.18</td></tr><tr><td>B-Kmeans</td><td>87.84</td><td>91.47</td><td>91.94</td><td>50.27</td><td>72.50</td><td>72.32</td></tr><tr><td>B-KNN</td><td>86.73</td><td>85.78</td><td>92.58</td><td>49.64</td><td>72.14</td><td>77.77</td></tr><tr><td>B-GMM</td><td>88.94</td><td>84.52</td><td>92.58</td><td>76.52</td><td>71.88</td><td>69.46</td></tr><tr><td>Kmeans</td><td>89.42</td><td>89.57</td><td>98.74</td><td>11.16</td><td>20.27</td><td>49.49</td></tr><tr><td>KNN</td><td>80.25</td><td>84.68</td><td>97.79</td><td>9.55</td><td>31.25</td><td>70.45</td></tr><tr><td>GMM</td><td>90.36</td><td>79.62</td><td>91.63</td><td>76.52</td><td>78.30</td><td>77.77</td></tr><tr><td>random</td><td>87.99</td><td>92.26</td><td>97.16</td><td>90.00</td><td>90.89</td><td>92.32</td></tr></table>
|
| 192 |
+
|
| 193 |
+
<table><tr><td rowspan="2"/><td colspan="3">MiniVox C20-MFCC-60k</td><td colspan="3">MiniVox C20-CNN-12k</td></tr><tr><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td><td>$p = {0.5}$</td><td>$p = {0.1}$</td><td>$p = {0.01}$</td></tr><tr><td>BerlinUCB</td><td>92.31</td><td>94.55</td><td>96.31</td><td>58.75</td><td>68.98</td><td>88.83</td></tr><tr><td>LinUCB</td><td>89.10</td><td>93.43</td><td>95.67</td><td>53.44</td><td>70.47</td><td>83.44</td></tr><tr><td>B-Kmeans</td><td>92.95</td><td>95.67</td><td>96.96</td><td>55.16</td><td>70.86</td><td>94.06</td></tr><tr><td>B-KNN</td><td>91.83</td><td>92.47</td><td>97.44</td><td>54.30</td><td>89.84</td><td>96.72</td></tr><tr><td>B-GMM</td><td>95.19</td><td>91.99</td><td>97.44</td><td>86.48</td><td>77.97</td><td>96.64</td></tr><tr><td>Kmeans</td><td>91.67</td><td>94.23</td><td>98.08</td><td>7.66</td><td>13.75</td><td>55.63</td></tr><tr><td>KNN</td><td>86.86</td><td>89.26</td><td>98.08</td><td>9.690</td><td>32.73</td><td>75.08</td></tr><tr><td>GMM</td><td>98.08</td><td>94.87</td><td>98.88</td><td>93.52</td><td>95.08</td><td>97.11</td></tr><tr><td>random</td><td>94.71</td><td>94.71</td><td>98.88</td><td>95.55</td><td>95.86</td><td>97.03</td></tr></table>
|
| 194 |
+
|
| 195 |
+
Is self-supervision useful? To our surprise, our benchmark results suggested that the proposed self-supervision modules did not consistently improve upon either the baseline models or our proposed contextual bandit models. Only in specific conditions (e.g., MiniVox C5-MFCC-60k, p=0.01) did the self-supervised contextual bandits outperform both the standard BerlinUCB and all the baselines. Further investigation into the reward curves revealed more complicated interactions between the self-supervision modules and the online learning module (the contextual bandit): as shown in Figure 3(f, g, h), B-GMM and B-KNN built upon the effective reward mapping from their BerlinUCB backbone and benefited from the unlabelled data points to perform fairly well.
|
| 196 |
+
|
| 197 |
+
## References
|
| 198 |
+
|
| 199 |
+
Anguera, X., Bozonnet, S., Evans, N., Fredouille, C., Friedland, G., and Vinyals, O. Speaker diarization: A review of recent research. IEEE Transactions on Audio, Speech, and Language Processing, 20(2):356-370, 2012.
|
| 200 |
+
|
| 201 |
+
Berry, D. A., Chen, R. W., Zame, A., Heath, D. C., and Shepp, L. A. Bandit problems with infinitely many arms. The Annals of Statistics, pp. 2103-2116, 1997.
|
| 202 |
+
|
| 203 |
+
Canavan, A., Graff, D., and Zipperlen, G. Callhome american english speech ldc97s42. web download. Philadelphia, PA, USA: Linguistic Data Consortium, University of Pennsylvania, 1997.
|
| 204 |
+
|
| 205 |
+
Chatfield, K., Simonyan, K., Vedaldi, A., and Zisserman, A. Return of the devil in the details: Delving deep into convolutional nets. In British Machine Vision Conference, 2014.
|
| 206 |
+
|
| 207 |
+
Fox, E. B., Sudderth, E. B., Jordan, M. I., and Willsky, A. S. A sticky hdp-hmm with application to speaker diarization. The Annals of Applied Statistics, pp. 1020-1056, 2011.
|
| 208 |
+
|
| 209 |
+
Hasan, M. R., Jamil, M., Rahman, M., et al. Speaker identification using mel frequency cepstral coefficients. variations, 1(4), 2004.
|
| 210 |
+
|
| 211 |
+
Li, L., Chu, W., Langford, J., and Schapire, R. E. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th international conference on World wide web, pp. 661-670, 2010.
|
| 212 |
+
|
| 213 |
+
Martin, A. and Przybocki, M. The nist 1999 speaker recognition evaluation-an overview. Digital signal processing, 10(1-3):1-18, 2000.
|
| 214 |
+
|
| 215 |
+
McLaren, M., Ferrer, L., Castan, D., and Lawson, A. The speakers in the wild (sitw) speaker recognition database. In Interspeech, pp. 818-822, 2016.
|
| 216 |
+
|
| 217 |
+
Nagrani, A., Chung, J. S., and Zisserman, A. Voxceleb: a large-scale speaker identification dataset. In INTERSPEECH, 2017.
|
| 218 |
+
|
| 219 |
+
Nagrani, A., Chung, J. S., Xie, W., and Zisserman, A. Voxceleb: Large-scale speaker verification in the wild. Computer Speech and Language, 2019.
|
| 220 |
+
|
| 221 |
+
Ororbia, I., Alexander, G., Giles, C. L., and Reitter, D. Online semi-supervised learning with deep hybrid boltzmann machines and denoising autoencoders. arXiv preprint arXiv:1511.06964, 2015.
|
| 222 |
+
|
| 223 |
+
Sell, G. and Garcia-Romero, D. Diarization resegmentation in the factor analysis subspace. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 4794-4798. IEEE, 2015.
|
| 224 |
+
|
| 225 |
+
Senoussaoui, M., Kenny, P., Stafylakis, T., and Dumouchel, P. A study of the cosine distance-based mean shift for telephone speech diarization. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(1):217-227, 2013.
|
| 228 |
+
|
| 229 |
+
Shum, S. H., Dehak, N., Dehak, R., and Glass, J. R. Unsupervised methods for speaker diarization: An integrated and iterative approach. IEEE Transactions on Audio, Speech, and Language Processing, 21(10):2015-2028, 2013.
|
| 230 |
+
|
| 231 |
+
Snyder, D., Garcia-Romero, D., Sell, G., Povey, D., and Khudanpur, S. X-vectors: Robust dnn embeddings for speaker recognition. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5329-5333. IEEE, 2018.
|
| 232 |
+
|
| 233 |
+
Tirumala, S. S., Shahamiri, S. R., Garhwal, A. S., and Wang, R. Speaker identification features extraction methods: A systematic review. Expert Systems with Applications, 90: 250-271, 2017.
|
| 234 |
+
|
| 235 |
+
Wang, Q., Downey, C., Wan, L., Mansfield, P. A., and Moreno, I. L. Speaker diarization with lstm. In 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5239-5243. IEEE, 2018.
|
| 236 |
+
|
| 237 |
+
Yver, B. Online semi-supervised learning: Application to dynamic learning from radar data. In 2009 International Radar Conference "Surveillance for a Safer World" (RADAR 2009), pp. 1-6, Oct 2009.
|
| 238 |
+
|
| 239 |
+
Zajíc, Z., Hrúz, M., and Müller, L. Speaker diarization using convolutional neural network for statistics accumulation refinement. In INTERSPEECH, pp. 3562-3566, 2017.
|
| 240 |
+
|
| 241 |
+
Zhang, A., Wang, Q., Zhu, Z., Paisley, J., and Wang, C. Fully supervised speaker diarization. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6301-6305. IEEE, 2019.
|
papers/ICML/ICML 2020/ICML 2020 Workshop/ICML 2020 Workshop SAS/gyUMlKhTJZe/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,396 @@
|
| 1 |
+
§ SPEAKER DIARIZATION AS A FULLY ONLINE LEARNING PROBLEM IN MINIVOX
|
| 2 |
+
|
| 3 |
+
Baihan Lin ${}^{1}$ Xinxin Zhang ${}^{2}$
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
We proposed a novel AI framework to conduct real-time multi-speaker diarization and recognition without prior registration and pretraining in a fully online learning setting. Our contributions are three-fold. First, we proposed a new benchmark to evaluate the rarely studied fully online speaker diarization problem. We built upon existing datasets of real-world utterances to automatically curate ${MiniVox}$, an experimental environment which generates infinite configurations of continuous multi-speaker speech streams. Second, we considered the practical problem of online learning with episodically revealed rewards and introduced a solution based on semi-supervised and self-supervised learning methods. Lastly, we provided a workable web-based recognition system which interactively handles the cold start problem of new user addition by transferring representations of old arms to new ones with an extendable contextual bandit. We demonstrated that our proposed method obtained robust performance in the online MiniVox framework. ${}^{1}$
|
| 8 |
+
|
| 9 |
+
§ 1. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Speaker recognition involves two essential steps: registration and identification (Tirumala et al., 2017). In laboratory setting, the state-of-the-art approaches usually emphasize the registration step with deep networks (Snyder et al., 2018) trained on large-scale speaker profile dataset (Nagrani et al., 2017). However, in real life, requiring all users to complete voiceprint registration before a multi-speaker teleconference is hardly a preferable way of system deployment. Dealing with this challenge, speaker diarization is the task to partition an audio stream into homogeneous segments according to the speaker identity (Anguera et al., 2012). Recent advancements have enabled (1) contrastive audio embedding extractions such as Mel Frequency Cepstral Coefficients (MFCC) (Hasan et al., 2004), i-vectors (Shum et al., 2013) and d-vectors (Wang et al., 2018); (2) effective clustering modules such as Gaussian mixture models (GMM) (Zajíc et al., 2017), mean shift (Senoussaoui et al., 2013), Kmeans and spectral clustering (Wang et al., 2018) and supervised Bayesian non-parametric methods (Fox et al., 2011; Zhang et al., 2019); and (3) reasonable resegmentation modules such as Viterbi and factor analysis subspace (Sell & Garcia-Romero, 2015). In this work, we proposed a new paradigm to consider the speaker diarization as a fully online learning problem of the speaker recognition task: it combines the embedding extraction, clustering and resegmentation into the same problem as an online decision making problem.
|
| 12 |
+
|
| 13 |
+
Why is this online learning problem different? The state-of-the-art speaker diarization systems usually require large datasets to train their audio extraction embeddings and clustering modules, especially the ones with deep neural networks and Bayesian nonparametric models. In many real-world applications in developing countries, however, the training set can be limited and hard to collect. Since these modules are pretrained, applying them to out-of-distribution environments can be problematic. For instance, an intelligent system trained with American elder speaker data might find it hard to generalize to a Japanese children diarization task because both the acoustic and contrastive features are different. To tackle this problem, we want the system to learn continually. To push this problem to the extreme, we are interested in a fully online learning setting, where not only the examples are available one by one, the agent receives no pretraining from any training set before deployment, and learns to detect speaker identity on the fly through reward feedbacks. To the best of our knowledge, this work is the first to consider diarization as a fully online learning problem. Through this work, we aim to understand the extent to which diarization can be solved as merely an online learning problem and whether traditional online learning algorithms (e.g. contextual bandits) can be beneficial to provide a practical solution.
|
| 14 |
+
|
| 15 |
+
${}^{1}$ Department of Applied Mathematics, University of Washington, Seattle, WA 98195, USA ${}^{2}$ Department of Electrical and Computer Engineering, University of Washington, Seattle, WA 98195, USA. Correspondence to: Baihan Lin <baihan.lin@columbia.edu>.
|
| 16 |
+
|
| 17 |
+
The 37th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s).
|
| 18 |
+
|
| 19 |
+
${}^{1}$ The web-based application of a real-time system can be accessed at https://www.baihan.nyc/viz/VoiceID/. The code for benchmark evaluation can be accessed at https://github.com/doerlbh/MiniVox
|
| 20 |
+
|
| 21 |
+
What is a preferable online speaker diarization system? A preferable AI engine for such a realistic speaker recognition and diarization system should (1) not require user registrations, (2) allow new users to be registered into the system in real time, (3) transfer voiceprint information from old users to new ones, (4) be up and running without pretraining on a large amount of data in advance. While attractive, assumption (4) introduces an additional caveat: the labeling of the user profiles happens purely on the fly, trading models pretrained on big data for a user who directly interacts with the system and whose corrections serve as labels. To tackle these challenges, we formulated this problem as an interactive learning model with cold-start arms and episodically revealed rewards (users can either reveal no feedback, approve by not intervening, or correct the agent).
|
| 22 |
+
|
| 23 |
+
Why do we need a new benchmark? Traditional datasets for the speaker diarization task are limited: CALLHOME American English (Canavan et al., 1997) and NIST RT-03 English CTS (Martin & Przybocki, 2000) contain a limited number of utterances recorded under controlled conditions. For online learning experiments, a learn-from-scratch agent usually needs a long data stream to reach a comparable result. Large-scale speaker recognition datasets like VoxCeleb (Nagrani et al., 2017; 2019) and Speakers in the Wild (SITW) (McLaren et al., 2016) contain thousands of speaker utterances recorded in various challenging multi-speaker acoustic environments, but they are usually only used to pretrain diarization embeddings. In this work, we proposed a new benchmark called MiniVox, which can transform any large-scale speaker identification dataset into infinitely long audio streams with various configurations.
|
| 24 |
+
|
| 25 |
+
We built upon LinUCB (Li et al., 2010) and proposed a semi-supervised learning variant to account for the fact that the rewards are entirely missing in many episodes. For each episode without feedbacks, we applied a self-supervision process to assign a pseudo-action upon which the reward mapping is updated. Finally, we generated new arms by transferring learned arm parameters to similar profiles given user feedbacks.
|
| 26 |
+
|
| 27 |
+
§ 2. THE FULLY ONLINE LEARNING PROBLEM
|
| 28 |
+
|
| 29 |
+
Algorithm 1 presents our problem setting at a high level, where $c\left( t\right) \in {\mathbb{R}}^{d}$ is a vector describing the context at time $t$, ${r}_{a}\left( t\right) \in \left\lbrack {0,1}\right\rbrack$ is the reward of action $a$ at time $t$, and $r\left( t\right) \in {\left\lbrack 0,1\right\rbrack }^{K}$ denotes a vector of rewards for all arms at time $t$. ${\mathbb{P}}_{c,r}$ denotes a joint probability distribution over $(c, r)$, and $\pi : C \rightarrow A$ denotes a policy. Unlike the traditional setting, in Step 5 the rewards are revealed in an episodic fashion (i.e., sometimes there is a feedback of the reward being 0 or 1, and sometimes there is no feedback of any kind). We consider our setting an online semi-supervised learning problem (Yver, 2009; Ororbia et al., 2015), where the agent learns from both labeled and unlabeled data in an online fashion.
|
| 30 |
+
|
| 31 |
+
< g r a p h i c s >
|
| 32 |
+
|
| 33 |
+
Figure 1. Arm expansion process of the bandit agents.
|
| 34 |
+
|
| 35 |
+
Algorithm 1 Online Learning with Episodic Rewards
|
| 36 |
+
|
| 37 |
+
for $\mathrm{t} = 1,2,3,\cdots ,\mathrm{T}$ do
|
| 38 |
+
|
| 39 |
+
$\left( {c\left( t\right) ,r\left( t\right) }\right)$ is drawn according to ${\mathbb{P}}_{c,r}$
|
| 40 |
+
|
| 41 |
+
$c\left( t\right)$ is revealed to the player
|
| 42 |
+
|
| 43 |
+
Player chooses an action $i = {\pi }_{t}\left( {c\left( t\right) }\right)$
|
| 44 |
+
|
| 45 |
+
Feedbacks ${r}_{a}\left( t\right)$ for all arms are episodically revealed
|
| 46 |
+
|
| 47 |
+
Player updates its policy ${\pi }_{t}$
|
| 48 |
+
|
| 49 |
+
end for
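A minimal Python rendering of this interaction loop, assuming a hypothetical policy object with choose/update methods and a stream of (context, episodically revealed label) pairs:

```python
def run_online(policy, stream):
    """Algorithm 1: the player acts on every context; feedback is only
    sometimes revealed, and the policy is updated accordingly."""
    observed_reward = 0.0
    for x_t, label_t in stream:                 # c(t) is revealed to the player
        arm = policy.choose(x_t)                # player chooses an action
        if label_t is not None:                 # feedback episodically revealed
            reward = 1.0 if arm == label_t else 0.0
            policy.update(x_t, arm, reward)
            observed_reward += reward
        else:
            policy.update(x_t, arm, None)       # no feedback this round
    return observed_reward
```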
|
| 50 |
+
|
| 51 |
+
§ 3. PROPOSED ONLINE LEARNING SOLUTION
|
| 52 |
+
|
| 53 |
+
§ 3.1. CONTEXTUAL BANDITS WITH EXTENDABLE ARMS
|
| 54 |
+
|
| 55 |
+
In an ideal online learning scenario without oracle, we start with a single arm, and when new labels arrive new arms are then generated accordingly. This problem is loosely modelled by the bandits with infinitely many arms (Berry et al., 1997). For our specific application of speaker registration process, we applied the arm expansion process outlined in Figure 1: starting from a single arm (for the "new" action), if a feedback confirms a new addition, a new arm is initialized and appended to the arm list.
|
| 56 |
+
|
| 57 |
+
§ 3.2. EPISODICALLY REWARDED LINUCB
|
| 58 |
+
|
| 59 |
+
We proposed Background Episodically Rewarded LinUCB (BerlinUCB), a semi-supervised and self-supervised online contextual bandit which updates the context representations and reward mapping separately given the state of the feedbacks being present or missing (Algorithm 2). We assume that (1) when there are feedbacks available, the feedbacks are genuine, assigned by the oracle, and (2) when the feedbacks are missing (not revealed by the background), it is either due to the fact that the action is preferred (no intervention required by the oracle, i.e. with an implied default rewards), or that the oracle didn't have a chance to respond or intervene (i.e. with unknown rewards). Especially in the Step 15, when there is no feedbacks, we assign the context ${\mathbf{x}}_{t}$ to a class ${a}^{\prime }$ (an action arm) with the self-supervision given the previous labelled context history. Since we don't have the actual label for this context, we only update reward mapping parameter ${\mathbf{b}}_{{a}^{\prime }}$ and leave the covariance matrix ${\mathbf{A}}_{{a}^{\prime }}$ untouched. The additional usage of unlabelled data (or unrevealed feedback) is especially important in our model.
|
| 60 |
+
|
| 61 |
+
< g r a p h i c s >
|
| 62 |
+
|
| 63 |
+
Figure 2. (A) The flowchart of the Online Learning problem and (B) the MiniVox Benchmark.
|
| 64 |
+
|
| 65 |
+
Algorithm 2 BerlinUCB
|
| 66 |
+
|
| 67 |
+
Initialize ${c}_{t} \in {\mathbb{R}}_{ + },{\mathbf{A}}_{a} \leftarrow {\mathbf{I}}_{d},{\mathbf{b}}_{a} \leftarrow {\mathbf{0}}_{d \times 1}\forall a \in {\mathcal{A}}_{t}$
|
| 68 |
+
|
| 69 |
+
for $\mathrm{t} = 1,2,3,\cdots ,\mathrm{T}$ do
|
| 70 |
+
|
| 71 |
+
Observe features ${\mathbf{x}}_{t} \in {\mathbb{R}}^{d}$
|
| 72 |
+
|
| 73 |
+
for all $a \in {\mathcal{A}}_{t}$ do
|
| 74 |
+
|
| 75 |
+
${\widehat{\theta }}_{a} \leftarrow {\mathbf{A}}_{a}^{-1}{\mathbf{b}}_{a}$
|
| 76 |
+
|
| 77 |
+
${p}_{t,a} \leftarrow {\widehat{\theta }}_{a}^{\top }{\mathbf{x}}_{t} + {c}_{t}\sqrt{{\mathbf{x}}_{t}^{\top }{\mathbf{A}}_{a}^{-1}{\mathbf{x}}_{t}}$
|
| 78 |
+
|
| 79 |
+
end for
|
| 80 |
+
|
| 81 |
+
Choose arm ${a}_{t} = \arg\max_{a \in {\mathcal{A}}_{t}} {p}_{t,a}$
|
| 82 |
+
|
| 83 |
+
if the background revealed the feedbacks then
|
| 84 |
+
|
| 85 |
+
Observe feedback ${r}_{{a}_{t},t}$
|
| 86 |
+
|
| 87 |
+
${\mathbf{A}}_{{a}_{t}} \leftarrow {\mathbf{A}}_{{a}_{t}} + {\mathbf{x}}_{t}{\mathbf{x}}_{t}^{\top }$
|
| 88 |
+
|
| 89 |
+
${\mathbf{b}}_{{a}_{t}} \leftarrow {\mathbf{b}}_{{a}_{t}} + {r}_{{a}_{t},t}{\mathbf{x}}_{t}$
|
| 90 |
+
|
| 91 |
+
elif the background revealed NO feedbacks then
|
| 92 |
+
|
| 93 |
+
if use self-supervision feedback
|
| 94 |
+
|
| 95 |
+
${r}^{\prime } = \left\lbrack {{a}_{t} = = \operatorname{predict}\left( {\mathbf{x}}_{t}\right) }\right\rbrack \%$ clustering modules
|
| 96 |
+
|
| 97 |
+
${\mathbf{b}}_{{a}_{t}} \leftarrow {\mathbf{b}}_{{a}_{t}} + {r}^{\prime }{\mathbf{x}}_{t}$
|
| 98 |
+
|
| 99 |
+
elif % ignore self-supervision signals
|
| 100 |
+
|
| 101 |
+
${\mathbf{A}}_{{a}_{t}} \leftarrow {\mathbf{A}}_{{a}_{t}} + {\mathbf{x}}_{t}{\mathbf{x}}_{t}^{\top }$
|
| 102 |
+
|
| 103 |
+
end if
|
| 104 |
+
|
| 105 |
+
end if
|
| 106 |
+
|
| 107 |
+
end for
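A compact NumPy sketch of the BerlinUCB updates above for a fixed arm set; `pseudo_label` stands in for the predict($\mathbf{x}_t$) call of a clustering module, and the class is an illustration of the algorithm's structure rather than the authors' released implementation:

```python
import numpy as np

class BerlinUCBSketch:
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]      # A_a
        self.b = [np.zeros(dim) for _ in range(n_arms)]    # b_a

    def choose(self, x):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                               # theta_a = A_a^{-1} b_a
            scores.append(theta @ x + self.alpha * np.sqrt(x @ A_inv @ x))
        return int(np.argmax(scores))

    def update(self, x, arm, reward, pseudo_label=None):
        if reward is not None:                              # feedback revealed
            self.A[arm] += np.outer(x, x)
            self.b[arm] += reward * x
        elif pseudo_label is not None:                      # self-supervision branch
            r_prime = 1.0 if arm == pseudo_label else 0.0
            self.b[arm] += r_prime * x                      # update b only, keep A
        else:                                               # ignore self-supervision
            self.A[arm] += np.outer(x, x)
```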
|
| 108 |
+
|
| 109 |
+
§ 3.3. SELF-SUPERVISION AND SEMI-SUPERVISION MODULES
|
| 110 |
+
|
| 111 |
+
We construct our self-supervision modules given the cluster assumption of the semi-supervision problem: the points within the same cluster are more likely to share a label. As shown in many work in modern speaker diarization, clustering algorithms like GMM (Zajíc et al., 2017), mean shift (Senoussaoui et al., 2013) and spectral clustering (Wang et al., 2018) are especially powerful unsupervised modules, especially in their offline versions. Their online variants, however, often performs poorly (Zhang et al., 2019). As in this work, we focus on the completely online setting, we chose three popular clustering algorithms as self-supervision modules: GMM, Kmeans and K-nearest neighbors.
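A nearest-centroid stand-in for such a clustering module (a deliberate simplification of the Kmeans/KNN/GMM choices, kept fully online by storing only a running mean per arm):

```python
import numpy as np

class CentroidPseudoLabeler:
    """Keeps a running mean per registered arm from labelled frames and
    pseudo-labels an unlabelled frame with the closest centroid."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def add(self, x, arm):                       # called on labelled frames
        self.sums[arm] = self.sums.get(arm, np.zeros_like(x)) + x
        self.counts[arm] = self.counts.get(arm, 0) + 1

    def predict(self, x):                        # pseudo-label for unlabelled x
        if not self.sums:
            return None
        dists = {arm: np.linalg.norm(x - s / self.counts[arm])
                 for arm, s in self.sums.items()}
        return min(dists, key=dists.get)
```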
|
| 112 |
+
|
| 113 |
+
§ 3.4. COMPLETE ENGINE FOR ONLINE SPEAKER DIARIZATION
|
| 114 |
+
|
| 115 |
+
To adapt our BerlinUCB algorithm to the specific application of speaker recognition, we first define our actions. There are three major classes of actions: an arm "New" to denote that a new speaker is detected, an arm "No Speaker" to denote that no one is speaking, and $\mathrm{N}$ different arms "User n" to denote that user $\mathrm{n}$ is speaking. Table 1 presents the reward assignment given four types of feedback. Note that we assume that when the agent correctly identifies the speaker (or no speaker), the user (as the feedback dispenser) sends no feedback to the system by doing nothing. In other words, in an ideal scenario where the agent does a perfect job by correctly identifying the speaker all the time, we no longer need to be around to correct it (i.e., truly feedback-free). As we pointed out earlier, this could be a challenge early on, because other than implicitly approving the agent's choice, receiving no feedback could also mean the feedback was not revealed properly (e.g., the human oracle took a break). Furthermore, we note that when the "No Speaker" and "User n" arms are correctly identified, there is no feedback from us, the human oracle (meaning these arms would never learn from a single positive reward if we did not use the "None" feedback iterations at all!). The semi-supervision by self-supervision step is tailored exactly for such a scenario, where the lack of revealed positive reward for the "No Speaker" and "User n" arms is compensated by additional training of the reward mapping ${\mathbf{b}}_{{a}_{t}}$ whenever context ${\mathbf{x}}_{t}$ is correctly assigned.
|
| 116 |
+
|
| 117 |
+
To tackle the cold start problem, the agent grows its arms in the following fashion: the agent starts with two arms, "No Speaker" and "New"; if it is actually a new speaker speaking, we have the following three conditions: (1) if "New" is chosen, the user approves this arm by giving it a positive reward (i.e. clicking on it) and the agent initializes a new arm called "User $N$" and updates $N = N + 1$ (where $N$ is the number of registered speakers at the moment); (2)
|
| 118 |
+
|
| 119 |
+
Feedback types   (+, +)   (+, -)   (-, +)   None
New              r = 1    r = 0    X        Alg. 2 Step 13
No Speaker       -        r = 0    r = 0    Alg. 2 Step 13
User n           -        r = 0    r = 0    Alg. 2 Step 13

Table 1. Routes given either no feedback, or a feedback telling the agent that the correct label is $a*$. (+, +) means that the agent guessed it right by choosing the right arm; (+, -) means that the agent chose this arm incorrectly, since the correct one is another arm; (-, +) means that the agent did not choose this arm, while it turned out to be the correct one. "-" means NA.
|
| 135 |
+
|
| 136 |
+
if "No Speaker" is chosen, the user disapproves this arm by giving it a zero reward and clicking on the "New" instead), while the agent initializes a new arm; (3) if one of the user arms is chosen (e.g. "User 5" is chosen while in fact a new person is speaking), the agent copies the wrong user arm's parameters to initialize the new arm, since the voiceprint of the mistaken one might be beneficial to initialize the new user profile. In this way, we can transfer what has been learned for a similar context representations to the new arm.
|
| 137 |
+
|
| 138 |
+
§ 4. BENCHMARK DESCRIPTION: MINIVOX
|
| 139 |
+
|
| 140 |
+
MiniVox is an automatic framework to transform any speaker-labelled dataset into continuous speech datastream with episodically revealed label feedbacks. Since our online learning problem setting assumes learning the voiceprints without any previous training data at all, MiniVox's flexibility in length and configuration is especially important. As outlined in Figure 2, MiniVox has a straightforward data stream generation pipeline: given a pool of single-speaker-annotated utterances, randomly concatenate multiple pieces with a chosen number of speakers and a desired length. The reward stream is then sparsified with a parameter $p$ as the percentage of time a feedback is revealed.
|
| 141 |
+
|
| 142 |
+
There are two scenarios that we can evaluate in MiniVox: if we assume there is an oracle, the online learning model is given the fixed number of the speakers in the stream; if we assume there is no oracle, the online learning model will start from zero speaker and then gradually discover and register new speakers for future identification and diarization.
|
| 143 |
+
|
| 144 |
+
§ 5. EMPIRICAL EVALUATION
|
| 145 |
+
|
| 146 |
+
§ 5.1. EXPERIMENTAL SETUP AND METRICS
|
| 147 |
+
|
| 148 |
+
We applied MiniVox on VoxCeleb (Nagrani et al., 2017) to generate three data streams with 5, 10 and 20 speakers to simulate real-world conversations. We extracted two types of features (more details in section 5.2) and evaluated them in two scenarios (with or without oracle). The reward streams are sparsified given a revealing probability of 0.5, 0.1, 0.01 and 0.001. In summary, we evaluated our models in a combinatorial total of 3 speaker numbers $\times 4$ reward revealing probabilities $\times 2$ feature types $\times 2$ test scenarios $= {48}$ online learning environments. The online learning timescales range from $\sim 12{,}000$ to $\sim 60{,}000$ time frames. To refer to a specific MiniVox configuration, we write "MiniVox C5-MFCC-60k" for an environment with 5 speakers spanning ${60}\mathrm{k}$ time frames using MFCC as features.
|
| 149 |
+
|
| 150 |
+
To evaluate the performance, we reported Diarization Error Rates (DER) in the above MiniVox environments. In addition, as a common metric in online learning literature, we also recorded the cumulative reward: at each frame, if the agent correctly predicts a given speaker, the reward is counted as +1 (no matter if the agent observes the reward).
|
| 151 |
+
|
| 152 |
+
We compared 9 agents: LinUCB is the contextual bandit with extendable arms proposed in section 3.1. BerlinUCB is the standard contextual bandit model designed for sparse feedbacks without the self-supervision modules. We have four baseline models: Kmeans, KNN (with K=5), GMM and a random agent ${}^{2}$ . To test the effect of self-supervision, we introduced three clustering modules in BerlinUCB (alg 2, Step 15) denoted: B-Kmeans, B-KNN, and B-GMM.
|
| 153 |
+
|
| 154 |
+
§ 5.2. FEATURE EMBEDDINGS: MFCC AND CNN
|
| 155 |
+
|
| 156 |
+
We utilized two feature embeddings for our evaluation: MFCC (Hasan et al., 2004) and a Convolutional Neural Network (CNN) embedding. We used the same CNN architecture as the VGG-M (Chatfield et al., 2014) used in the VoxCeleb evaluation (Nagrani et al., 2017). It takes the spectrogram of an utterance as input and generates a 1024-dimensional feature vector at layer fc8 (for more details of this CNN, please refer to Table 4 in Nagrani et al., 2017).
|
| 157 |
+
|
| 158 |
+
Why don't we use more complicated embeddings? Although more complicated embedding extraction modules such as i-vectors (Shum et al., 2013) or d-vectors (Wang et al., 2018) can improve diarization, they require extensive pretraining on big datasets, which is contradictory to our problem setting and beyond our research scope.
|
| 159 |
+
|
| 160 |
+
Why do we still include this CNN? The CNN model was trained for speaker verification task in VoxCeleb and we are curious about the relationship between a learned representation and our online learning agents. Despite this note, we are most interested in the performance given MFCC features, because we aim to push the system fully online, to the limit of not having pretraining of any type before deployment.
|
| 161 |
+
|
| 162 |
+
§ 5.3. RESULTS
|
| 163 |
+
|
| 164 |
+
Given MFCC features without pretraining, our online learning agent demonstrated a relatively robust performance. As shown in Figure 3(a, b, c, d), in many conditions, the proposed contextual bandits significantly outperformed baselines when revealing probability is very low $\left( {\mathrm{p} = {0.01}\text{ or 0.1 }}\right)$ .
|
| 165 |
+
|
| 166 |
+
${}^{2}$ In the oracle-free case, the random agent randomly selects from the "new" arm and the registered user arms, suggesting a possibility of going to infinitely (and incorrectly) many profiles.
|
| 167 |
+
|
| 168 |
+
< g r a p h i c s >
|
| 169 |
+
|
| 170 |
+
Figure 3. Example reward curve. Positive: (a) C10-MFCC, p=0.01; (b) C20-MFCC, p=0.01; (c) C5-MFCC, p=0.01; (d) C5-MFCC, $\mathrm{p} = {0.5}$ ; (e) C20-MFCC, p=0.01, oracle. Negative: (f) C10-MFCC, p=0.01, oracle; (g) C10-MFCC, p=0.1, oracle; (h) C5-CNN, p=0.5.
|
| 171 |
+
|
| 172 |
+
Table 2. Diarization Error Rate (%) in MiniVox without Oracle
|
| 173 |
+
|
| 174 |
+
max width=
|
| 175 |
+
|
| 176 |
+
2*X 3|c|MiniVox C5-MFCC-60k 3|c|MiniVox C5-CNN-12k
|
| 177 |
+
|
| 178 |
+
2-7
|
| 179 |
+
$p = {0.5}$ $p = {0.1}$ $p = {0.01}$ $p = {0.5}$ $p = {0.1}$ $p = {0.01}$
|
| 180 |
+
|
| 181 |
+
1-7
|
| 182 |
+
BerlinUCB 71.81 80.03 82.38 17.42 32.03 65.16
|
| 183 |
+
|
| 184 |
+
1-7
|
| 185 |
+
LinUCB 74.74 78.71 79.30 17.81 32.73 58.98
|
| 186 |
+
|
| 187 |
+
1-7
|
| 188 |
+
B-Kmeans 82.82 79.15 77.39 28.83 63.67 82.58
|
| 189 |
+
|
| 190 |
+
1-7
|
| 191 |
+
B-KNN 78.71 80.62 77.39 28.36 82.58 82.58
|
| 192 |
+
|
| 193 |
+
1-7
|
| 194 |
+
B-GMM 85.32 83.41 87.67 99.61 99.61 99.69
|
| 195 |
+
|
| 196 |
+
1-7
|
| 197 |
+
Kmeans 86.20 85.76 82.67 5.47 8.91 40.23
|
| 198 |
+
|
| 199 |
+
1-7
|
| 200 |
+
KNN 70.34 72.98 78.12 6.09 13.75 53.75
|
| 201 |
+
|
| 202 |
+
1-7
|
| 203 |
+
GMM 99.27 99.27 99.27 99.61 99.61 99.69
|
| 204 |
+
|
| 205 |
+
1-7
|
| 206 |
+
random 83.41 81.50 82.97 77.89 78.98 77.66
|
| 207 |
+
|
| 208 |
+
1-7
|
| 209 |
+
|
| 210 |
+
max width=
|
| 211 |
+
|
| 212 |
+
2*X 3|c|MiniVox C10-MFCC-60k 3|c|MiniVox C10-CNN-12k
|
| 213 |
+
|
| 214 |
+
2-7
|
| 215 |
+
$p = {0.5}$ $p = {0.1}$ $p = {0.01}$ $p = {0.5}$ $p = {0.1}$ $p = {0.01}$
|
| 216 |
+
|
| 217 |
+
1-7
|
| 218 |
+
BerlinUCB 82.46 85.31 89.26 42.77 57.41 74.02
|
| 219 |
+
|
| 220 |
+
1-7
|
| 221 |
+
LinUCB 84.36 86.73 93.36 49.55 68.57 81.16
|
| 222 |
+
|
| 223 |
+
1-7
|
| 224 |
+
B-Kmeans 91.15 92.58 96.68 60.89 70.89 99.55
|
| 225 |
+
|
| 226 |
+
1-7
|
| 227 |
+
B-KNN 89.73 90.05 96.68 60.89 82.05 99.55
|
| 228 |
+
|
| 229 |
+
1-7
|
| 230 |
+
B-GMM 90.21 94.63 98.42 99.20 93.57 99.64
|
| 231 |
+
|
| 232 |
+
1-7
|
| 233 |
+
Kmeans 92.26 94.15 98.10 10.36 18.75 47.86
|
| 234 |
+
|
| 235 |
+
1-7
|
| 236 |
+
KNN 79.78 84.52 97.47 9.29 31.25 70.27
|
| 237 |
+
|
| 238 |
+
1-7
|
| 239 |
+
GMM 98.42 98.42 99.21 99.20 99.20 99.37
|
| 240 |
+
|
| 241 |
+
1-7
|
| 242 |
+
random 90.21 88.78 92.89 79.29 81.34 83.75
|
| 243 |
+
|
| 244 |
+
1-7
|
| 245 |
+
|
| 246 |
+
max width=
|
| 247 |
+
|
| 248 |
+
2*X 3|c|MiniVox C20-MFCC-60k 3|c|MiniVox C20-CNN-12k
|
| 249 |
+
|
| 250 |
+
2-7
|
| 251 |
+
$p = {0.5}$ $p = {0.1}$ $p = {0.01}$ $p = {0.5}$ $p = {0.1}$ $p = {0.01}$
|
| 252 |
+
|
| 253 |
+
1-7
|
| 254 |
+
BerlinUCB 88.62 87.02 92.79 41.72 59.06 83.28
|
| 255 |
+
|
| 256 |
+
1-7
|
| 257 |
+
LinUCB 91.35 88.94 88.46 51.56 83.52 74.84
|
| 258 |
+
|
| 259 |
+
1-7
|
| 260 |
+
B-Kmeans 95.19 95.99 96.96 72.03 75.31 99.53
|
| 261 |
+
|
| 262 |
+
1-7
|
| 263 |
+
B-KNN 93.43 95.99 96.79 72.03 74.06 99.53
|
| 264 |
+
|
| 265 |
+
1-7
|
| 266 |
+
B-GMM 92.79 96.31 97.76 87.73 81.09 83.28
|
| 267 |
+
|
| 268 |
+
1-7
|
| 269 |
+
Kmeans 90.54 93.43 95.51 6.02 12.81 54.77
|
| 270 |
+
|
| 271 |
+
1-7
|
| 272 |
+
KNN 86.38 89.26 95.99 8.67 32.66 75.08
|
| 273 |
+
|
| 274 |
+
1-7
|
| 275 |
+
GMM 96.96 97.44 98.88 98.98 98.98 99.37
|
| 276 |
+
|
| 277 |
+
1-7
|
| 278 |
+
random 93.59 94.07 95.35 87.03 87.73 89.69
|
| 279 |
+
|
| 280 |
+
1-7
|
| 281 |
+
|
| 282 |
+
Learning without Oracle. Table 2 reports DER in MiniVox without Oracle. In MFCC environments, we observed that in high-difficulty scenarios (such as C20), the proposed BerlinUCB variants outperformed all the baselines even when the reward revealing probability was as low as 0.01. In low-difficulty scenarios, traditional clustering methods like KNN performed best, and this advantage was inherited by B-KNN and B-Kmeans when feedback was sparse ($\mathrm{p} = 0.01$). In the CNN cases, we observed that Kmeans performed best. This is expected because the CNN model was trained with a contrastive loss for high verification accuracy (Nagrani et al., 2017). While the clustering modules merely classify the CNN features by proximity, our online learning models need to learn the reward mapping from scratch while maintaining a good balance between exploitation and exploration.
|
| 283 |
+
|
| 284 |
+
Learning with Oracle. Given the number of speakers, traditional clustering agents performed better (Table 3). However, the behaviors vary: GMM performed the poorest in the oracle-free environments but the best in the environments with an oracle; and despite being the best model in many oracle-free environments, Kmeans performed poorly in the MFCC environments with an oracle. Another winning algorithm, KNN, requires the model to store all historical data points and search through the entire memory, which can be computationally prohibitive in real-world applications. Our online learning models maintain relatively robust performance, staying among the top 3 algorithms in most cases with and without an oracle.
|
| 285 |
+
|
| 286 |
+
Table 3. Diarization Error Rate (%) in MiniVox with Oracle
|
| 287 |
+
|
| 288 |
+
max width=
|
| 289 |
+
|
| 290 |
+
2*X 3|c|MiniVox C5-MFCC-60k 3|c|MiniVox C5-CNN-12k
|
| 291 |
+
|
| 292 |
+
2-7
|
| 293 |
+
$p = {0.5}$ $p = {0.1}$ $p = {0.01}$ $p = {0.5}$ $p = {0.1}$ $p = {0.01}$
|
| 294 |
+
|
| 295 |
+
1-7
|
| 296 |
+
BerlinUCB 74.89 77.24 86.93 17.27 22.19 66.02
|
| 297 |
+
|
| 298 |
+
1-7
|
| 299 |
+
LinUCB 72.83 78.12 76.80 17.73 32.73 58.98
|
| 300 |
+
|
| 301 |
+
1-7
|
| 302 |
+
B-Kmeans 75.33 78.27 83.11 20.55 40.70 58.98
|
| 303 |
+
|
| 304 |
+
1-7
|
| 305 |
+
B-KNN 77.39 77.97 83.99 20.47 41.33 58.98
|
| 306 |
+
|
| 307 |
+
1-7
|
| 308 |
+
B-GMM 74.16 76.21 77.24 52.58 81.02 58.98
|
| 309 |
+
|
| 310 |
+
1-7
|
| 311 |
+
Kmeans 78.41 82.82 83.11 4.06 7.42 39.53
|
| 312 |
+
|
| 313 |
+
1-7
|
| 314 |
+
KNN 70.63 73.27 80.47 6.64 13.75 53.52
|
| 315 |
+
|
| 316 |
+
1-7
|
| 317 |
+
GMM 70.34 72.54 74.74 54.38 81.02 58.98
|
| 318 |
+
|
| 319 |
+
1-7
|
| 320 |
+
random 79.59 80.76 85.9 79.92 80.39 85.55
|
| 321 |
+
|
| 322 |
+
1-7
|
| 323 |
+
|
| 324 |
+
max width=
|
| 325 |
+
|
| 326 |
+
2*X 3|c|MiniVox C10-MFCC-60k 3|c|MiniVox C10-CNN-12k
|
| 327 |
+
|
| 328 |
+
2-7
|
| 329 |
+
$p = {0.5}$ $p = {0.1}$ $p = {0.01}$ $p = {0.5}$ $p = {0.1}$ $p = {0.01}$
|
| 330 |
+
|
| 331 |
+
1-7
|
| 332 |
+
BerlinUCB 88.31 90.21 95.89 45.18 65.27 79.38
|
| 333 |
+
|
| 334 |
+
1-7
|
| 335 |
+
LinUCB 84.99 91.63 97.00 50.00 72.14 65.18
|
| 336 |
+
|
| 337 |
+
1-7
|
| 338 |
+
B-Kmeans 87.84 91.47 91.94 50.27 72.50 72.32
|
| 339 |
+
|
| 340 |
+
1-7
|
| 341 |
+
B-KNN 86.73 85.78 92.58 49.64 72.14 77.77
|
| 342 |
+
|
| 343 |
+
1-7
|
| 344 |
+
B-GMM 88.94 84.52 92.58 76.52 71.88 69.46
|
| 345 |
+
|
| 346 |
+
1-7
|
| 347 |
+
Kmeans 89.42 89.57 98.74 11.16 20.27 49.49
|
| 348 |
+
|
| 349 |
+
1-7
|
| 350 |
+
KNN 80.25 84.68 97.79 9.55 31.25 70.45
|
| 351 |
+
|
| 352 |
+
1-7
|
| 353 |
+
GMM 90.36 79.62 91.63 76.52 78.30 77.77
|
| 354 |
+
|
| 355 |
+
1-7
|
| 356 |
+
random 87.99 92.26 97.16 90.00 90.89 92.32
|
| 357 |
+
|
| 358 |
+
1-7
|
| 359 |
+
|
| 360 |
+
max width=
|
| 361 |
+
|
| 362 |
+
2*X 3|c|MiniVox C20-MFCC-60k 3|c|MiniVox C20-CNN-12k
|
| 363 |
+
|
| 364 |
+
2-7
|
| 365 |
+
$p = {0.5}$ $p = {0.1}$ $p = {0.01}$ $p = {0.5}$ $p = {0.1}$ $p = {0.01}$
|
| 366 |
+
|
| 367 |
+
1-7
|
| 368 |
+
BerlinUCB 92.31 94.55 96.31 58.75 68.98 88.83
|
| 369 |
+
|
| 370 |
+
1-7
|
| 371 |
+
LinUCB 89.10 93.43 95.67 53.44 70.47 83.44
|
| 372 |
+
|
| 373 |
+
1-7
|
| 374 |
+
B-Kmeans 92.95 95.67 96.96 55.16 70.86 94.06
|
| 375 |
+
|
| 376 |
+
1-7
|
| 377 |
+
B-KNN 91.83 92.47 97.44 54.30 89.84 96.72
|
| 378 |
+
|
| 379 |
+
1-7
|
| 380 |
+
B-GMM 95.19 91.99 97.44 86.48 77.97 96.64
|
| 381 |
+
|
| 382 |
+
1-7
|
| 383 |
+
Kmeans 91.67 94.23 98.08 7.66 13.75 55.63
|
| 384 |
+
|
| 385 |
+
1-7
|
| 386 |
+
KNN 86.86 89.26 98.08 9.690 32.73 75.08
|
| 387 |
+
|
| 388 |
+
1-7
|
| 389 |
+
GMM 98.08 94.87 98.88 93.52 95.08 97.11
|
| 390 |
+
|
| 391 |
+
1-7
|
| 392 |
+
random 94.71 94.71 98.88 95.55 95.86 97.03
|
| 393 |
+
|
| 394 |
+
1-7
|
| 395 |
+
|
| 396 |
+
Is self-supervision useful? To our surprise, our benchmark results suggested that the proposed self-supervision modules did not consistently improve upon either the baseline models or our proposed contextual bandit models. Only in specific conditions (e.g., MiniVox C5-MFCC-60k, p=0.01) did the self-supervised contextual bandits outperform both the standard BerlinUCB and all the baselines. Further investigation into the reward curves revealed more complicated interactions between the self-supervision modules and the online learning module (the contextual bandit): as shown in Figure 3(f, g, h), B-GMM and B-KNN built upon the effective reward mapping from their BerlinUCB backbone and benefited from the unlabelled data points to perform fairly well.
|