- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SIQEl6f7h0Y/Initial_manuscript_md/Initial_manuscript.md +333 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SIQEl6f7h0Y/Initial_manuscript_tex/Initial_manuscript.tex +445 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SK8gAhfX2AK/Initial_manuscript_md/Initial_manuscript.md +320 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SK8gAhfX2AK/Initial_manuscript_tex/Initial_manuscript.tex +245 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SNeep2MXn0K/Initial_manuscript_md/Initial_manuscript.md +269 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SNeep2MXn0K/Initial_manuscript_tex/Initial_manuscript.tex +283 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SSSGs3M7nRY/Initial_manuscript_md/Initial_manuscript.md +253 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SSSGs3M7nRY/Initial_manuscript_tex/Initial_manuscript.tex +236 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SVx46hzmhRK/Initial_manuscript_md/Initial_manuscript.md +238 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SVx46hzmhRK/Initial_manuscript_tex/Initial_manuscript.tex +241 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SW4eu2MmnRY/Initial_manuscript_md/Initial_manuscript.md +51 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SW4eu2MmnRY/Initial_manuscript_tex/Initial_manuscript.tex +45 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SWNM52GXh0Y/Initial_manuscript_md/Initial_manuscript.md +304 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SWNM52GXh0Y/Initial_manuscript_tex/Initial_manuscript.tex +271 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SY84JTG73CK/Initial_manuscript_md/Initial_manuscript.md +309 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SY84JTG73CK/Initial_manuscript_tex/Initial_manuscript.tex +436 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SYUxyazQh0Y/Initial_manuscript_md/Initial_manuscript.md +304 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SYUxyazQh0Y/Initial_manuscript_tex/Initial_manuscript.tex +227 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SZNMKnzQhAY/Initial_manuscript_md/Initial_manuscript.md +39 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SZNMKnzQhAY/Initial_manuscript_tex/Initial_manuscript.tex +39 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/ScfP3G73CY/Initial_manuscript_md/Initial_manuscript.md +392 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/Sczshz7h0K/Initial_manuscript_md/Initial_manuscript.md +179 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/Sczshz7h0K/Initial_manuscript_tex/Initial_manuscript.tex +167 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/StblE2MQ3AY/Initial_manuscript_md/Initial_manuscript.md +232 -0
- papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/StblE2MQ3AY/Initial_manuscript_tex/Initial_manuscript.tex +190 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/4WZdqAolwCa/Initial_manuscript_md/Initial_manuscript.md +19 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/4WZdqAolwCa/Initial_manuscript_tex/Initial_manuscript.tex +15 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/6opKSxYmlwl/Initial_manuscript_md/Initial_manuscript.md +15 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/6opKSxYmlwl/Initial_manuscript_tex/Initial_manuscript.tex +15 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/IDCAmWl27e/Initial_manuscript_md/Initial_manuscript.md +11 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/IDCAmWl27e/Initial_manuscript_tex/Initial_manuscript.tex +11 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/OmE6pREhjYC/Initial_manuscript_md/Initial_manuscript.md +7 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/OmE6pREhjYC/Initial_manuscript_tex/Initial_manuscript.tex +7 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/WmhHlPmRq_J/Initial_manuscript_md/Initial_manuscript.md +19 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/WmhHlPmRq_J/Initial_manuscript_tex/Initial_manuscript.tex +11 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/iDoRmiH7OPF/Initial_manuscript_md/Initial_manuscript.md +11 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/iDoRmiH7OPF/Initial_manuscript_tex/Initial_manuscript.tex +11 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/mBUv4nLTYf/Initial_manuscript_md/Initial_manuscript.md +5 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/mBUv4nLTYf/Initial_manuscript_tex/Initial_manuscript.tex +5 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/rL11psvzvO/Initial_manuscript_md/Initial_manuscript.md +37 -0
- papers/MPM/MPM 2022/MPM 2022 Workshop/rL11psvzvO/Initial_manuscript_tex/Initial_manuscript.tex +23 -0
- papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/BSZPNEUHiS2/Initial_manuscript_md/Initial_manuscript.md +193 -0
- papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/BSZPNEUHiS2/Initial_manuscript_tex/Initial_manuscript.tex +207 -0
- papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/D8K3TY96hrz/Initial_manuscript_md/Initial_manuscript.md +193 -0
- papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/D8K3TY96hrz/Initial_manuscript_tex/Initial_manuscript.tex +195 -0
- papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/H6a78knAFnY/Initial_manuscript_md/Initial_manuscript.md +261 -0
- papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/H6a78knAFnY/Initial_manuscript_tex/Initial_manuscript.tex +293 -0
- papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/PXwOPetJ2bF/Initial_manuscript_md/Initial_manuscript.md +279 -0
- papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/PXwOPetJ2bF/Initial_manuscript_tex/Initial_manuscript.tex +187 -0
- papers/NeurIPS/NeurIPS 2019/NeurIPS 2019 Workshop/NeurIPS 2019 Workshop Neuro_AI/B1Mo4XFL8H/Initial_manuscript_md/Initial_manuscript.md +97 -0
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SIQEl6f7h0Y/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,333 @@
# Reproducibility Report: Contrastive Learning of Socially-aware Motion Representations

Anonymous Author(s)

Affiliation

Address

email

## Reproducibility Summary

The following paper is a reproducibility report for "Social NCE: Contrastive Learning of Socially-aware Motion Representations" [1], published at ICCV 2021, as part of the ML Reproducibility Challenge 2021. The original code was made available by the authors${}^{1}$. We attempted to verify the results claimed by the authors and reimplemented their code in PyTorch Lightning.

## Scope of Reproducibility

The central claim of the paper is that accounting for negative (collision) cases in trajectory prediction models through a socially contrastive loss function, Social-NCE, improves the robustness of the models. We verify this claim on various models, with special focus on improvements in the human trajectory prediction models Social-STGCNN and Trajectron++ and on robot navigation through an imitation learning model.

## Methodology

We used the codebase made publicly available by the authors for our work. We trained the models used in the paper from scratch and reimplemented the code in PyTorch Lightning. We evaluated both, and compared them with the results in the original paper. Further, we ran additional experiments to find suitable hyperparameters for the Trajectron++ and Social-STGCNN models.

## Results

We were able to reproduce the majority of the results claimed in the paper, except for the Social-LSTM and Directional-LSTM models due to lack of time, and observed a maximum deviation of $2\%$ from the results of the original paper.

## What was easy

The publicly available codebases were well documented and easy to follow. The authors also list sources for the processed datasets that they used. The simulation data generation code for the imitation learning model was also shared.

## What was difficult

The proposed contrastive loss was implemented on different trajectory prediction models, an understanding of which was required to reimplement the code from PyTorch in PyTorch Lightning. Experiments on the entire ETH and UCY datasets under restricted computational resources took a considerable amount of time, and we had to restrict our ablation study to one model.

## Communication with original authors

We contacted the authors with some queries on their implementation and on the importance of some hyperparameters. They replied promptly, and their input was pivotal while conducting experiments.

Submitted to ML Reproducibility Challenge 2021. Do not distribute.

---

${}^{1}$ https://github.com/vita-epfl/social-nce

---

## 1 Introduction

Humans tend to develop a strong intuition for predicting the future motion of other people while navigating crowded spaces. This is essential for carrying out daily tasks without discomfort and for maintaining a safe distance from others while moving around. However, building neural models that can replicate similarly accurate predictions is often challenging, even with a large training set.

Multi-agent problems such as trajectory forecasting and robot navigation require the model to learn socially aware motion representations. Several previous papers have proposed neural network based models for these tasks. However, these models still fail to generalize well across different scenarios, often outputting colliding trajectories. The original authors aim to tackle this issue by feeding explicit negative examples into the network, teaching the model to differentiate between positive and negative samples using a newly proposed social contrastive loss.

We exhaustively carry out all the experiments done in the paper and verify all claims and tables. We then review the results and present an assessment. We further ported the code to the PyTorch Lightning framework. This allowed us to train the models flexibly on different platforms and automate the optimization process. We also expect this to help future implementations or reproductions of the codebase. We then present a few ablations of the original code, especially hyperparameter tuning.

## 2 Scope of reproducibility

Existing work on multi-agent trajectory prediction problems sometimes outputs colliding trajectories, which makes these models unsuitable for deployment. The authors claim that this is due to bias in existing datasets, which consist only of safe trajectories and no collision scenarios, giving the models no negative cases to train on. The original paper proposes a modified contrastive loss (Social-NCE) which incorporates ground-truth knowledge to generate negative cases and reduce collision rates on several benchmarks. The details of this loss are discussed in the Methodology section of this report. The key claims that we aim to verify in our reproducibility report are:

1. Addition of the Social-NCE loss to human trajectory forecasting models significantly decreases the collision rate while maintaining a similar final displacement error.

2. Addition of the Social-NCE loss to imitation learning models for robot navigation in crowded environments significantly decreases the collision rate.

3. Addition of the Social-NCE loss to reinforcement learning models increases sample efficiency, so that they obtain a collision-free policy more quickly.

## 3 Methodology

The authors maintain a detailed public repository${}^{2}$ on the addition of Social-NCE to Trajectron++ [2] and Social-STGCNN [3], models for human trajectory prediction, and to an existing imitation learning model [4] for robot navigation. Further, we contacted the authors and they gave us their implementation of Social-NCE in reinforcement learning using Rainbow DQN [5] as the baseline. We reproduced the findings of the paper based on these repositories. We focused primarily on the human trajectory prediction models Social-STGCNN and Trajectron++ and attempted ablations on Social-NCE hyperparameters in the Trajectron++ model to improve its performance. Lastly, we ported the codebases for Trajectron++, Social-STGCNN and the imitation learning model to PyTorch Lightning [6].

---

${}^{2}$ https://github.com/YuejiangLIU/social-nce-trajectron-plus-plus

---

### 3.1 Social-NCE Loss and Negative Data Augmentation

Consider $M$ agents indexed by $i \in \{1, \ldots, M\}$. The state of agent $i$ at time $t$ is given by its position coordinates ${s}_{t}^{i} = \left( {x}_{t}^{i}, {y}_{t}^{i} \right)$. The combined state of all agents is ${s}_{t} = \left\{ {s}_{t}^{1}, {s}_{t}^{2}, \ldots, {s}_{t}^{M} \right\}$. Given ${s}_{1:t}$, the model predicts ${s}_{t+1:T}$.

Encoder $f(\cdot)$:

Gives the vector encoding ${h}_{t}^{i}$ for agent $i$ at time $t$, given the states of all agents up to time $t$ and the index of the agent:

$$
{h}_{t}^{i} = f\left( {s}_{1:t}, i \right) \tag{1}
$$

The encoder has two sub-modules, a sequential module ${f}_{s}(\cdot)$ and an interaction module ${f}_{i}(\cdot)$, which make the encoding of one agent dependent on the states of the other agents.

Decoder $g(\cdot)$:

Returns the predicted states from the vector encoding:

$$
{s}_{t+1:T}^{i} = g\left( {h}_{t}^{i} \right) \tag{2}
$$
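
To make Eqs. (1)-(2) concrete, here is a minimal PyTorch sketch of the two interfaces. The concrete choices (a GRU as the sequential module $f_s$, mean-pooling plus a linear layer as the interaction module $f_i$, and all sizes) are our illustrative assumptions; the actual models in the paper (Trajectron++, Social-STGCNN) use more elaborate architectures.

```python
import torch
import torch.nn as nn

class MotionEncoder(nn.Module):
    """f(.): maps observed states s_{1:t} of all agents to an encoding h_t^i."""
    def __init__(self, state_dim=2, hidden_dim=64):
        super().__init__()
        self.f_s = nn.GRU(state_dim, hidden_dim, batch_first=True)  # sequential module f_s(.)
        self.f_i = nn.Linear(2 * hidden_dim, hidden_dim)            # interaction module f_i(.)

    def forward(self, s_1t, i):
        # s_1t: (M, t, 2) positions of all M agents over the t observed steps
        _, h = self.f_s(s_1t)                  # per-agent summaries, shape (1, M, hidden)
        h = h.squeeze(0)
        context = h.mean(dim=0)                # crude social pooling over all agents
        return self.f_i(torch.cat([h[i], context]))  # h_t^i, Eq. (1)

class MotionDecoder(nn.Module):
    """g(.): maps h_t^i to predicted future positions s_{t+1:T}^i, Eq. (2)."""
    def __init__(self, hidden_dim=64, horizon=12):
        super().__init__()
        self.head = nn.Linear(hidden_dim, horizon * 2)

    def forward(self, h_i):
        return self.head(h_i).view(-1, 2)      # (T - t, 2) predicted coordinates
```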

#### 3.1.1 Social-NCE loss

**Embedding Models**

- Query: a projection head that embeds the vector encoding of agent $i$ up to time $t$:

$$
q = \psi \left( {h}_{t}^{i} \right) \tag{3}
$$

- Key: an encoder that embeds the future state of agent $i$ at time $t + \delta t$, where $\delta t$ is the sampling horizon within a given range:

$$
k = \phi \left( {s}_{t + \delta t}^{i}, \delta t \right) \tag{4}
$$

Both the query and key embeddings are produced by 2-layer MLPs that return 8-dimensional encoded vectors.

**Loss**

The InfoNCE loss [7] is given by:

$$
{L}_{NCE} = - \log \frac{\exp \left( \operatorname{sim}\left( q, {k}^{+} \right) / \tau \right)}{\sum_{n = 0}^{N} \exp \left( \operatorname{sim}\left( q, {k}_{n} \right) / \tau \right)} \tag{5}
$$

In the standard InfoNCE loss, the similarity function $\operatorname{sim}(q, k)$ is the cosine similarity between the two vectors. In the Social-NCE variant, this similarity function is replaced by the dot product of the two embedded vectors returned from the encoders. The Social-NCE loss is given by:

$$
{L}_{\text{Social-NCE}} = - \log \frac{\exp \left( \psi \left( {h}^{i} \right) \cdot \phi \left( {s}_{t + \delta t}^{i, +}, \delta t \right) / \tau \right)}{\sum_{\delta t \in \Lambda} \sum_{n = 0}^{N} \exp \left( \psi \left( {h}^{i} \right) \cdot \phi \left( {s}_{t + \delta t}^{i, n}, \delta t \right) / \tau \right)} \tag{6}
$$

The three encoders $f(\cdot)$, $\psi(\cdot)$ and $\phi(\cdot)$ are jointly trained such that the query is encoded closer to the positive key and further from the negative keys. The keys are constructed through data augmentation, as discussed next. The final loss for a specific model is the weighted sum of the model's task loss and the Social-NCE loss; a minimal sketch of this loss is given below.
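
To make Eq. (6) concrete, here is a minimal PyTorch sketch of the Social-NCE computation for one query with a single positive key and a batch of negative keys (i.e. a single sampling horizon). The projection heads follow the description above (2-layer MLPs with 8-dimensional outputs); all names and sizes are our assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp_head(in_dim, out_dim=8, hidden=32):
    """2-layer MLP projection head, as described above (8-dim output)."""
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

psi = mlp_head(in_dim=64)      # query head psi on the motion encoding h_t^i
phi = mlp_head(in_dim=2 + 1)   # key head phi on a future state (x, y) plus the horizon dt

def social_nce(h_i, s_pos, s_neg, dt, tau=0.1):
    # h_i:   (64,)  encoding of agent i
    # s_pos: (2,)   one positive future location (Eq. 8)
    # s_neg: (N, 2) negative locations around the other agents (Eq. 7)
    q = psi(h_i)                                               # query, Eq. (3)
    dt_pos = torch.tensor([dt], dtype=torch.float32)
    k_pos = phi(torch.cat([s_pos, dt_pos]))                    # positive key, Eq. (4)
    dt_neg = dt_pos.expand(s_neg.size(0), 1)
    k_neg = phi(torch.cat([s_neg, dt_neg], dim=1))             # negative keys
    logits = torch.cat([k_pos.unsqueeze(0), k_neg]) @ q / tau  # dot-product similarity
    # the positive key sits at index 0, so Eq. (6) is cross-entropy against label 0
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```

The total training loss would then be `task_loss + contrast_weight * social_nce(...)`, with the contrastive weight $\lambda$ discussed in Section 3.3.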

#### 3.1.2 Data Augmentation

Negative samples: the state of agent $i$ at time $t + \delta t$ cannot be the same as the state of any other agent at time $t + \delta t$, so the states of the $M - 1$ agents other than agent $i$ can be used as negative keys for it.

For each agent $j \in \{1, \ldots, M\} \setminus \{i\}$, 8 points are sampled uniformly on a circle around agent $j$, whose radius is the minimum comfortable distance, and used as negative keys for agent $i$:

$$
{s}_{t + \delta t}^{i, n-} = {s}_{t + \delta t}^{j} + \Delta {s}_{p} + \epsilon \tag{7}
$$

where $\Delta {s}_{p} = \left( \rho \cos {\theta}_{p}, \rho \sin {\theta}_{p} \right)$, $\rho$ is the minimum comfortable distance, ${\theta}_{p} = {0.25}p\pi$ for $p \in \{0, 1, \ldots, 7\}$, and $\epsilon$ is normally distributed noise.

Each agent $i$ thus has $8\left( M - 1 \right)$ negative keys.

Positive samples: a single positive key is taken from the state of agent $i$ at time $t + \delta t$ after adding normally distributed noise $\epsilon$:

$$
{s}_{t + \delta t}^{i, +} = {s}_{t + \delta t}^{i} + \epsilon \tag{8}
$$

The data augmentation is illustrated by the following diagram from the authors [1]. For an agent $i$ (in blue), the areas of collision and discomfort shown are used as negative samples. A code sketch of this sampling scheme follows the figure.



Figure 1: Social Negative Augmentation
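
As a minimal PyTorch sketch of Eqs. (7)-(8), the function below builds the $8(M-1)$ negative keys and the single positive key for one agent. The comfort radius `rho` and the noise scale are illustrative values, not the authors' exact settings.

```python
import torch

def social_samples(s_future, i, rho=0.2, noise_std=0.05):
    # s_future: (M, 2) ground-truth positions of all M agents at time t + dt
    M = s_future.size(0)
    theta = 0.25 * torch.pi * torch.arange(8)                # theta_p = 0.25 * p * pi
    ring = rho * torch.stack([theta.cos(), theta.sin()], 1)  # Delta s_p, shape (8, 2)
    others = s_future[torch.arange(M) != i]                  # the M - 1 other agents
    # Eq. (7): 8 ring points around every other agent, plus Gaussian noise
    neg = (others.unsqueeze(1) + ring).reshape(-1, 2)
    neg = neg + noise_std * torch.randn_like(neg)            # 8 * (M - 1) negative keys
    # Eq. (8): one noisy copy of the agent's own ground-truth position
    pos = s_future[i] + noise_std * torch.randn(2)
    return pos, neg
```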

### 3.2 Datasets

The human trajectory prediction models were run on a processed version of the ETH and UCY datasets. The original dataset is a collection of 5 video segments of pedestrian trajectories, from which the state of each agent per frame ID had been stored, and the dataset had been pre-divided into train, test and validation sets to maintain uniformity in accuracy comparisons. The processed ETH and UCY datasets are available in the linked repository${}^{3}$.

The imitation and reinforcement learning models used pedestrian data from an open-source simulator based on the OpenAI Gym library${}^{4}$. The dataset consisted of 5000 simulated situations in which the positions of 5 random agents are stored for each time step. A validation split of 0.3 was used.

---

${}^{3}$ https://github.com/StanfordASL/Trajectron-plus-plus/tree/master/experiments/pedestrians/raw
${}^{4}$ https://github.com/vita-epfl/CrowdNav/blob/master/crowd_sim/README.md
${}^{5}$ https://drive.google.com/uc?id=1D2guAxD_EgrKnJFMcLSBkf10SOagz0mr

---

### 3.3 Hyperparameters

Apart from the hyperparameters required for regular network training, Social-NCE introduces three additional model-specific hyperparameters: the temperature $\tau$, the sampling horizon $\delta t$ and the contrastive weight $\lambda$. In the original paper, their values were set to defaults. We extended the original work by performing a thorough random search for these hyperparameters using WandB [8]; a sweep configuration sketch is given at the end of this section. We further perform a sensitivity analysis and check whether hyperparameter tuning offers any significant benefit. The details of the search are summarised as follows:

Table 1: Search on model hyperparameters

| Hyperparameter | Original Value | Method of Search | Range of Search |
|---|---|---|---|
| Temperature $\tau$ | 0.1 | Random | 0.1 - 0.5 |
| Sampling Horizon $\delta t$ | 4 | Grid | 1, 2, 3, 4, 5 |
| Contrastive Weight $\lambda$ | 2 | Random | 0 - 50 |

Details on the loss hyperparameters:

- Temperature ($\tau$): part of the Social-NCE loss, which controls the weight of the penalty and reward for negative and positive samples respectively.

- Sampling Horizon ($\delta t$): the future time step up to which negative samples are considered for data augmentation.

- Contrastive Weight ($\lambda$): the weight between the main loss of the model and the Social-NCE loss.

A similar search was performed separately for the hyperparameters pertaining to data augmentation, with the hyperparameters discussed above held at their default values. These hyperparameters were: minimum separation, maximum separation and the weight between maximum separation and noise. The details of this search are summarised as follows:

Table 2: Search on hyperparameters of data augmentation

| Hyperparameter | Original Value | Method of Search | Range of Search |
|---|---|---|---|
| Minimum Separation | 0.2 | Random | 0.1 - 0.5 |
| Maximum Separation | 2.5 | Random | 2.2 - 2.8 |
| Weight between maximum separation and noise | 0.2 | Random | 0 - 0.5 |

Details on the augmentation hyperparameters:

- Minimum Separation: the minimum admissible value of $\rho$ in the negative augmentation, i.e. the minimum comfortable distance between two agents.

- Maximum Separation: the maximum admissible value of $\rho$ in the negative augmentation, i.e. the maximum distance beyond which agents can pass each other without collision.

- Weight between maximum separation and noise: the weight between the added normal noise and the position of the augmented sample.
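
A minimal WandB sweep configuration consistent with Table 1 might look as follows. The parameter names (`temperature`, `horizon`, `contrast_weight`) and the training entry point are illustrative assumptions, not the repository's actual names.

```python
import wandb

# Hypothetical sweep configuration mirroring Table 1; names are illustrative.
sweep_config = {
    "method": "random",  # random search, as in Table 1
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "temperature":     {"distribution": "uniform", "min": 0.1, "max": 0.5},
        "horizon":         {"values": [1, 2, 3, 4, 5]},  # grid over delta t
        "contrast_weight": {"distribution": "uniform", "min": 0.0, "max": 50.0},
    },
}

def train():
    run = wandb.init()
    cfg = run.config
    # ... build and train the model with cfg.temperature, cfg.horizon,
    # and cfg.contrast_weight, then log the final validation loss ...
    wandb.log({"val_loss": 0.0})  # placeholder for the real metric

sweep_id = wandb.sweep(sweep_config, project="social-nce-repro")
wandb.agent(sweep_id, function=train, count=30)
```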

### 3.4 Experimental setup and code

The encoder models were trained with the Adam optimizer. The Trajectron++, Social-STGCNN and imitation learning models were trained for 300, 500 and 200 epochs respectively. There were two runs of the reinforcement learning model, on 2000 and 5000 episodes respectively. As in the original paper, the models were evaluated on the following metrics:

- Final displacement error (FDE): the Euclidean distance between the predicted output and the ground truth at the last time step.

- Collision rate (COL): the percentage of test cases in which the predicted trajectories of agents run into collisions.

A lower FDE is preferred; however, this reproduction mainly aims to verify the decrease in collision rate. A sketch of both metrics is given below. The code was also integrated with WandB to conduct further experiments. This process involved constructing a config dictionary containing the list of all possible hyperparameters and the values each could take. The main function was modified with WandB initialisation and a logging call to record the value of the loss after training completes. The function was then passed to the WandB agent to carry out sweeps. The code can be found at this link${}^{6}$.
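
As a minimal sketch, the two metrics above could be computed as follows for batches of predicted and ground-truth trajectories; the collision threshold of 0.1 is an illustrative value, not necessarily the one used in the benchmarks.

```python
import torch

def fde(pred, gt):
    # pred, gt: (B, T, 2) predicted and ground-truth trajectories of one agent
    return (pred[:, -1] - gt[:, -1]).norm(dim=-1).mean()  # error at the final step

def collision_rate(pred, threshold=0.1):
    # pred: (B, M, T, 2) predicted trajectories of M agents in each of B test cases
    B, M, T, _ = pred.shape
    flat = pred.transpose(1, 2).reshape(B * T, M, 2)      # agents side by side per step
    d = torch.cdist(flat, flat) + 1e9 * torch.eye(M)      # pairwise distances; mask self
    collided = (d.reshape(B, T, M, M) < threshold).flatten(1).any(dim=1)
    return 100.0 * collided.float().mean()                # percentage of colliding cases
```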

### 3.5 Computational requirements

The training code for Trajectron++ and Social-STGCNN was run on Kaggle with a GPU (Tesla P100-PCIE-16GB) and CPU (13 GB RAM + 2 cores of an Intel Xeon). The average training runtimes are listed in the table below. As the table shows, porting to Lightning has not caused any increase in training time; a skeleton of the ported training step is sketched after the table.

---

${}^{6}$ https://anonymous.4open.science/r/social-nce-stgcnn-62D5/README.md

---

Table 3: Training Runtimes

| Codebase | Original Codebase Training Time | Ported Codebase Training Time |
|---|---|---|
| Trajectron++ | 4 hr 28 min | 4 hr 24 min |
| Social-STGCNN | 8 hr 32 min | 8 hr 30 min |
| Imitation Learning | 53 min | 50 min |
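
For reference, the ported training step could follow the LightningModule skeleton below, combining the task loss and the Social-NCE loss as described in Section 3.1.1. This is a minimal sketch under our own naming assumptions (`model`, `social_nce`, `contrast_weight`); it shows only the pattern, not the actual ported code.

```python
import pytorch_lightning as pl
import torch

class SocialNCEModule(pl.LightningModule):
    """Hypothetical skeleton of the PyTorch Lightning port."""
    def __init__(self, model, social_nce, contrast_weight=2.0, lr=1e-3):
        super().__init__()
        self.model = model              # e.g. Trajectron++ or Social-STGCNN
        self.social_nce = social_nce    # contrastive head from Section 3.1.1
        self.contrast_weight = contrast_weight
        self.lr = lr

    def training_step(self, batch, batch_idx):
        task_loss, encodings = self.model(batch)      # original prediction loss
        nce_loss = self.social_nce(encodings, batch)  # Eq. (6)
        loss = task_loss + self.contrast_weight * nce_loss
        self.log_dict({"task_loss": task_loss, "nce_loss": nce_loss, "loss": loss})
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)
```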

## 4 Results

The following experiments support the claims made by the authors. We compared the results from training the models from scratch (Reproduced) and from reimplementing them in PyTorch Lightning (Ported Code) with the results given by the authors (Original Paper).

### 4.1 Results reproducing original paper

A comparison of the FDE (Final Displacement Error) and COL (Collision Rate) after the addition of Social-NCE to the Social-STGCNN and Trajectron++ models, across the original paper, the reproduced code and the ported code:

Table 4: Social-STGCNN

| Dataset | Ported Code FDE | Ported Code COL | Reproduced FDE | Reproduced COL | Original Paper FDE | Original Paper COL |
|---|---|---|---|---|---|---|
| ETH | 1.442 | 0.53 | 1.249 | 1.11 | 1.224 | 0.61 |
| Hotel | 0.598 | 3.49 | 0.681 | 3.25 | 0.678 | 3.35 |
| Univ | 0.856 | 6.39 | 0.878 | 6.44 | 0.879 | 6.44 |
| Zara1 | 0.492 | 1.29 | 0.515 | 1.02 | 0.515 | 1.02 |
| Zara2 | 0.453 | 3.58 | 0.481 | 3.26 | 0.482 | 3.37 |
| Average | 0.768 | 3.05 | 0.761 | 3.02 | 0.756 | 2.96 |

Table 5: Trajectron++

| Dataset | Ported Code FDE | Ported Code COL | Reproduced FDE | Reproduced COL | Original Paper FDE | Original Paper COL |
|---|---|---|---|---|---|---|
| ETH | 0.632 | 0.00 | 0.791 | 0.00 | 0.791 | 0.00 |
| Hotel | 0.193 | 0.29 | 0.163 | 0.32 | 0.177 | 0.38 |
| Univ | 0.426 | 2.95 | 0.442 | 3.29 | 0.435 | 3.08 |
| Zara1 | 0.439 | 0.18 | 0.338 | 0.14 | 0.330 | 0.18 |
| Zara2 | 0.452 | 0.95 | 0.281 | 1.02 | 0.255 | 0.99 |
| Average | 0.428 | 0.88 | 0.403 | 0.95 | 0.398 | 0.93 |

A comparison of the collision rate and the time taken for the robot to reach its destination after the addition of Social-NCE to the imitation learning model, across the original paper, the reproduced code and the ported code:

Table 6: Imitation Learning

| Code | Time (s) | Collision (%) |
|---|---|---|
| Original | 10.33 | 3.40 |
| Reproduced | 10.49 | 3.36 |
| Ported | 10.28 | 3.45 |

Reward versus number of training episodes for the implementation of Social-NCE on the reinforcement learning model:

Table 7: Reinforcement Learning

| Episodes | 0 | 1000 | 2000 | 3000 | 4000 | 5000 |
|---|---|---|---|---|---|---|
| Reward | -0.10 | 0.42 | 0.61 | 0.63 | 0.64 | 0.64 |

### 4.2 Hyperparameter tuning

The performance of any model depends critically on the choice of hyperparameters. In the original paper, the values of these hyperparameters were set to defaults. We identified critical hyperparameters specific to Social-NCE and conducted a thorough hyperparameter search to find the best possible combination. Due to lack of time, this was done only on the Social-STGCNN model, trained on the ETH dataset. The following table summarises the results of the search:

Table 8: Hyperparameter search on loss hyperparameters

| Hyperparameter | Original Value | Best Value |
|---|---|---|
| Temperature $\tau$ | 0.1 | 0.1412 |
| Sampling Horizon $\delta t$ | 4 | 1 |
| Contrastive Weight $\lambda$ | 2 | 16 |

A similar search was performed separately for the hyperparameters pertaining to data augmentation:

Table 9: Hyperparameter search on data augmentation hyperparameters

| Hyperparameter | Default Value | Best Value |
|---|---|---|
| Minimum Separation | 0.2 | 0.22 |
| Maximum Separation | 2.5 | 3.1 |
| Weight between maximum separation and noise | 0.2 | 0.24 |

The FDE and collision rate after training the Social-STGCNN model for 400 epochs with the original hyperparameters (Original Parameters) and the tuned hyperparameters (Tuned Parameters) are:

Table 10: Metrics on Original and Tuned Hyperparameters

| Model | FDE | COL |
|---|---|---|
| Tuned Parameters | 0.674 | 3.45 |
| Original Parameters | 0.678 | 3.54 |

## 5 Discussion

Our results support the authors' claim that modelling social knowledge through the addition of negative cases reduces the collision rate of trajectory prediction models. Both when training from scratch with the original code and when using the ported code, the results remained consistent with those of the original paper.

1. In human trajectory forecasting, the addition of Social-NCE to the Trajectron++ and Social-STGCNN models showed a 35.7% and 35.1% decrease in collision rate on average, respectively (Tables 4 and 5), in our reproduced results. The Final Displacement Error (FDE) deviated by less than 1%, showing that the addition of Social-NCE adds to the robustness of the models without affecting their accuracy.

2. In the imitation learning model, the collision rate decreased by ${68.9}\%$ on average in our reproduced results, with the time taken showing little deviation (Table 6).

3. The Social-NCE addition to the Rainbow-DQN based reinforcement learning model, as in the original paper, achieves a reward of 0.6 within 2000 episodes, compared to the 4000 episodes needed by the original reinforcement learning model (Table 7).

The hyperparameter tuning was also helpful and led to an accuracy increase of 0.91%. The loss hyperparameters determined the sensitivity of the model. The contrastive weight determined the emphasis given to the Social-NCE loss: the greater the emphasis, the better the model learnt to differentiate between positive and negative samples, but at the expense of proximity to the actual training examples. It remains difficult to understand analytically the effect of changing the temperature hyperparameter.

Hyperparameter search, even though tedious, can lead to a considerable increase in accuracy: tuning the hyperparameters involved in the model led to an overall improvement. In tasks like trajectory prediction and motion forecasting, it may be crucial to push accuracy as high as possible. One caveat is that Social-STGCNN has a long running time, so a single sweep took a large amount of time.

The effect of the data augmentation hyperparameters seems to be highly variable. It is natural that results will vary greatly with the choice of dataset and the nature of the problem, because these hyperparameters are physical constraints placed on the model and hence may lead to different results for different datasets.

Further, the best results were found with a contrastive weight of 16, while the default value was 2. Although the values differ markedly, this reinforces confidence in the proposed Social-NCE loss.

### 5.1 What was easy

The authors have provided a detailed public codebase for the implementation of Social-NCE on Trajectron++, Social-STGCNN and the imitation learning model. Further, they shared the codebase for the reinforcement learning model. All the codebases include instructions on how to set up the environment and log the important metrics, which proved helpful for reproduction.

### 5.2 What was difficult

Porting the implementation of Social-NCE in the Trajectron++ and Social-STGCNN models from PyTorch to PyTorch Lightning required an understanding of those models and their original codebases, which took additional time. Training the models from scratch required substantial computational power. All training was done on cloud GPUs with limited runtimes, which often fell short of the time required for training.

### 5.3 Communication with original authors

We emailed the authors with some queries about their code implementation, as well as about which hyperparameters would be most important to tune to improve model performance. The authors replied promptly to our questions and also shared the codebase for the reinforcement learning model. Their contribution helped with some crucial points in this report.

## 6 Future Work

We originally planned to perform the following additional experiments, which we could not finish due to lack of time. They are listed below, and we believe future work on this paper can proceed in these directions.

- A hyperparameter search for Social-NCE in Trajectron++, the imitation learning model and the Rainbow-DQN based reinforcement learning model, and a comparison of how the results vary across models.

- Implementation of Social-NCE on the Social-LSTM and Directional-LSTM models on the Trajnet++ benchmark, for which results are given in the original paper.

- Implementation of Social-NCE on state-of-the-art models for other benchmarks, such as the PGP model [9] for the nuScenes dataset.

## References

[1] Y. Liu, Q. Yan, and A. Alahi, "Social NCE: Contrastive learning of socially-aware motion representations," arXiv preprint arXiv:2012.11717, 2020.

[2] T. Salzmann, B. Ivanovic, P. Chakravarty, and M. Pavone, "Trajectron++: Multi-agent generative trajectory forecasting with heterogeneous data for control," 2020.

[3] A. Mohamed, K. Qian, M. Elhoseiny, and C. Claudel, "Social-STGCNN: A social spatio-temporal graph convolutional neural network for human trajectory prediction," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 14424-14432.

[4] C. Chen, Y. Liu, S. Kreiss, and A. Alahi, "Crowd-robot interaction: Crowd-aware robot navigation with attention-based deep reinforcement learning," in ICRA, 2019.

[5] M. Hessel, J. Modayil, H. van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver, "Rainbow: Combining improvements in deep reinforcement learning," arXiv preprint arXiv:1710.02298, 2017. [Online]. Available: http://arxiv.org/abs/1710.02298

[6] W. Falcon et al., "PyTorch Lightning," GitHub, https://github.com/PyTorchLightning/pytorch-lightning, 2019.

[7] M. Gutmann and A. Hyvärinen, "Noise-contrastive estimation: A new estimation principle for unnormalized statistical models," in Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, ser. Proceedings of Machine Learning Research, Y. W. Teh and M. Titterington, Eds., vol. 9. Chia Laguna Resort, Sardinia, Italy: PMLR, 13-15 May 2010, pp. 297-304. [Online]. Available: https://proceedings.mlr.press/v9/gutmann10a.html

[8] L. Biewald, "Experiment tracking with Weights and Biases," 2020, software available from wandb.com. [Online]. Available: https://www.wandb.com/

[9] N. Deo, E. Wolff, and O. Beijbom, "Multimodal trajectory prediction conditioned on lane-graph traversals," in 5th Annual Conference on Robot Learning, 2021. [Online]. Available: https://openreview.net/forum?id=hu7b7MPCqiC
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SIQEl6f7h0Y/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,445 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
§ REPRODUCIBILITY REPORT: CONTRASTIVE LEARNING OF SOCIALLY-AWARE MOTION REPRESENTATIONS
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email 1 Reproducibility Summary
|
| 10 |
+
|
| 11 |
+
The following paper is a reproducibility report for "Social NCE: Contrastive Learning of Socially-aware Motion 3 Representations" [1] published in ICCV 2021 as part of the ML Reproducibility Challenge 2021. The original code was 4 made available by the author1. We attempted to verify the results claimed by the authors and reimplemented their code in PyTorch Lightning.
|
| 12 |
+
|
| 13 |
+
§ 6 SCOPE OF REPRODUCIBILITY
|
| 14 |
+
|
| 15 |
+
The central claim of the paper is that the consideration of negative (collision) cases in trajectory prediction models through a socially contrastive loss function Social-NCE will improve the robustness of the models. We verify their claim on various models, with special focus on improvements in the human trajectory prediction models Social-STGCNN and Trajectron++ and on robot navigation through an imitation learning model.
|
| 16 |
+
|
| 17 |
+
§ METHODOLOGY
|
| 18 |
+
|
| 19 |
+
We used the codebase made publicly available by the authors for our work. We trained the models used in the paper from scratch and reimplemented the code in PyTorch Lightning. We evaluated both, and compared them with the results in the original paper. Further, we attempted additional experiments to find suitable hyperparameters in the Trajectron++ and Social-STGCNN models.
|
| 20 |
+
|
| 21 |
+
§ RESULTS
|
| 22 |
+
|
| 23 |
+
17 We were able to reproduce majority of the results claimed in the paper except the Social-LSTM and Directional-LSTM models due to lack of time, and got a maximum of $2\%$ deviation from that of the original paper.
|
| 24 |
+
|
| 25 |
+
§ 9 WHAT WAS EASY
|
| 26 |
+
|
| 27 |
+
The publicly available codebases were well documented and easy to follow. The authors have also mentioned sources for the processed datasets that they have used. The simulation data generation code for the imitation learning model was also shared.
|
| 28 |
+
|
| 29 |
+
§ 23 WHAT WAS DIFFICULT
|
| 30 |
+
|
| 31 |
+
The proposed contrastive loss was implemented on different trajectory prediction models, the understanding of which was required to reimplement the code from PyTorch to PyTorch Lightning. Experiments on the entire ETH and UCY dataset on restricted computational resources took a considerable amount of time and we had to restrict our ablation study to one model.
|
| 32 |
+
|
| 33 |
+
§ COMMUNICATION WITH ORIGINAL AUTHORS
|
| 34 |
+
|
| 35 |
+
We contacted the authors with some queries on their implementation and on the importance of some hyperparameters. They replied promptly and their input was pivotal while conducting experiments.
|
| 36 |
+
|
| 37 |
+
Submitted to ML Reproducibility Challenge 2021. Do not distribute.
|
| 38 |
+
|
| 39 |
+
https://github.com/vita-epfl/social-nce
|
| 40 |
+
|
| 41 |
+
§ 1 INTRODUCTION
|
| 42 |
+
|
| 43 |
+
Humans tend to develop a strong intuition towards predicting future motions of other people, while navigating in crowded spaces. This is essential for carrying out daily tasks without any discomfort and to maintain a safe distance from others while moving around. However, building neural models that can replicate similar nature of accurate predictions is often challenging, even with a large training set.
|
| 44 |
+
|
| 45 |
+
Multi-agent problems such as trajectory forecasting and robot navigation, require the model to learn socially aware motion representations. Previously, several papers have proposed neural network based models to achieve these tasks. However, these models still fail to generalize well with different scenarios, often outputting colliding trajectories. The original authors aim to tackle this issue by feeding explicit negative examples into the network, while teaching the model to differentiate between the two using a newly proposed Social Contrastive Loss.
|
| 46 |
+
|
| 47 |
+
We exhaustively carry out all the experiments done in the paper and verify all claims and tables. We then review the results and present an assessment. We further ported the code to the PyTorch Lightning framework. This allowed us to train the code flexibly over different platforms and automate the optimization process. We also expect this to help in future implementation or reproduction of the codebase. Then we proceed to present a few ablations in the original code, especially hyperparameter tuning.
|
| 48 |
+
|
| 49 |
+
§ 2 SCOPE OF REPRODUCIBILITY
|
| 50 |
+
|
| 51 |
+
Existing work on multi-agent trajectory prediction problems sometimes output colliding trajectories which makes them unsuitable for deployment. The authors claim that this is due to the bias in existing datasets which only consist of safe trajectories and no collision scenarios, giving the models no negative cases to train on. The original paper proposes a modified contrastive loss (Social-NCE) which incorporates ground truth knowledge to generate negative cases to reduce collision rates on several benchmarks. The details of this loss have been discussed later (in Methodology section) in the report. The key claims that we aim to verify in our reproducibility report are:
|
| 52 |
+
|
| 53 |
+
1. Addition of the Social-NCE loss in human trajectory forecasting models significantly decreases collision rate while maintaining similar final displacement error.
|
| 54 |
+
|
| 55 |
+
2. Addition of the Social-NCE loss in imitation learning models for robot navigation in crowded environments significantly decreases the collision rate.
|
| 56 |
+
|
| 57 |
+
3. Addition of the Social-NCE loss in reinforcement learning models increases sample efficiency, and they obtain a collision-free policy quickly.
|
| 58 |
+
|
| 59 |
+
§ 593 METHODOLOGY
|
| 60 |
+
|
| 61 |
+
The authors have a detailed public repository ${}^{2}$ on the addition of Social-NCE on Trajectron++ [2], Social-STGCNN [3], models for human trajectory prediction and on an existing imitation learning model [4] for robot navigation. Further, we contacted the authors and they gave us their implementation of Social-NCE in reinforcement learning using Rainbow DQN [5] as the baseline. We reproduced the findings of the paper based on these repositories. We focused primarily on the human trajectory prediction models Social-STGCNN and Trajectron++ and attempted ablations on Social-NCE hyperparameters in the Trajectron++ model to improve its performance. Lastly we ported the codebase for Trajectron++, Social-STGCNN and the imitation learning model to PyTorch Lightning [6].
|
| 62 |
+
|
| 63 |
+
${}^{2}$ https://github.com/YuejiangLIU/social-nce-trajectron-plus-plus
|
| 64 |
+
|
| 65 |
+
§ 3.1 SOCIAL-NCE LOSS AND NEGATIVE DATA AUGMENTATION
|
| 66 |
+
|
| 67 |
+
Consider M agents with index $i \in \{ 1\ldots M\}$ , the state of agent $i$ at time $t$ is given by ${s}_{t}^{i} = \left( {{x}_{t}^{i},{y}_{t}^{i}}\right)$ which are its position coordinates. State of all agents combined is given by ${s}_{t} = \left\{ {{s}_{t}^{1},{s}_{t}^{2}\ldots {s}_{t}^{M}}\right\}$ . Given ${s}_{1 : t}$ the model predicts ${s}_{t + 1 : T}$ .
|
| 68 |
+
|
| 69 |
+
Encoder $f\left( \cdot \right)$ :
|
| 70 |
+
|
| 71 |
+
Gives vector encoding $\left( {h}_{t}^{i}\right)$ for agent $i$ at time $t$ given state of all agents till time $t$ and index of agent:
|
| 72 |
+
|
| 73 |
+
$$
|
| 74 |
+
{h}_{t}^{i} = f\left( {{s}_{1 : t},i}\right) \tag{1}
|
| 75 |
+
$$
|
| 76 |
+
|
| 77 |
+
Encoder has two sub-modules: sequential ${f}_{s}\left( \text{ . }\right) {andinteraction}{f}_{i}\left( \text{ . }\right) {modules}$ to make encoding of one agent dependent on the state of other agents.
|
| 78 |
+
|
| 79 |
+
Decoder $g\left( \cdot \right)$ :
|
| 80 |
+
|
| 81 |
+
Returns predicted state from vector encoding
|
| 82 |
+
|
| 83 |
+
$$
|
| 84 |
+
{s}_{t + 1 : T}^{i} = g\left( {h}_{t}^{i}\right) \tag{2}
|
| 85 |
+
$$
|
| 86 |
+
|
| 87 |
+
§ 3.1.1 SOCIAL-NCE LOSS
|
| 88 |
+
|
| 89 |
+
§ EMBEDDING MODELS
|
| 90 |
+
|
| 91 |
+
* Query: Projection head that embeds the vector encoding of the agent $i$ till time $t$
|
| 92 |
+
|
| 93 |
+
$$
|
| 94 |
+
q = \psi \left( {h}_{t}^{i}\right) \tag{3}
|
| 95 |
+
$$
|
| 96 |
+
|
| 97 |
+
* Key: Encoder that embeds the future state of agent i at time $t + {\delta t}$ where ${\delta t}$ is the sampling horizon in a given
|
| 98 |
+
|
| 99 |
+
range
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
k = \phi \left( {{s}_{t + {\delta t}}^{i},{\delta t}}\right) \tag{4}
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
Both the query and key are 2-layer MLPs which return 8-dimensional encoded vectors.
|
| 106 |
+
|
| 107 |
+
Loss
|
| 108 |
+
|
| 109 |
+
The InfoNCE Loss [7] is given by:
|
| 110 |
+
|
| 111 |
+
$$
|
| 112 |
+
{L}_{NCE} = - \log \frac{\exp \left( {\operatorname{sim}\left( {q,{k}^{ + }}\right) /\tau }\right) }{\mathop{\sum }\limits_{{n = 0}}^{N}\exp \left( {\operatorname{sim}\left( {q,{k}_{n}}\right) /\tau }\right) } \tag{5}
|
| 113 |
+
$$
|
| 114 |
+
|
| 115 |
+
In standard InfoNCE loss the similarity function $\operatorname{sim}\left( {q,\mathrm{k}}\right)$ is the cosine similarity between the two vectors. In the Social-NCE variation this similarity function has been modified to the dot product of the two embedded vectors returned from the encoders. The Social-NCE Loss is given by:
|
| 116 |
+
|
| 117 |
+
$$
|
| 118 |
+
{L}_{\text{ Social-NCE }} = - \log \frac{\exp \left( \left( {\psi \left( {h}^{i}\right) \cdot \phi \left( {{s}_{t + {\delta t}}^{i, + },{\delta t}}\right) /\tau }\right) \right) }{\mathop{\sum }\limits_{{{\delta t} \in \Lambda }}\mathop{\sum }\limits_{{n = 0}}^{N}\exp \left( \left( {\psi \left( {h}^{i}\right) \cdot \phi \left( {{s}_{t + {\delta t}}^{i,n},{\delta t}}\right) /\tau }\right) \right) } \tag{6}
|
| 119 |
+
$$
|
| 120 |
+
|
| 121 |
+
The three encoders $f\left( \cdot \right) ,\psi \left( \cdot \right)$ and $\phi \left( \cdot \right)$ are jointly trained such that the query is encoded closer to the positive key and further from the negative keys. The keys are made through data augmentation as discussed next. The final loss for a specific model would be given by the weighted sum of the model task loss and the Social-NCE loss.
|
| 122 |
+
|
| 123 |
+
§ 3.1.2 DATA AUGMENTATION
|
| 124 |
+
|
| 125 |
+
Negative samples: The state of the agent $\mathrm{i}$ at time $t + {\delta t}$ cannot be same as the state of any of the other agents at time $t + {\delta t}$ , so the states of the $M - 1$ elements other than the agent $\mathrm{i}$ can be used as negative keys for it.
|
| 126 |
+
|
| 127 |
+
For each agent $j \in \{ 1,\ldots M\} - \{ i\}$ , 8 points are taken uniformly from a circle with radius of minimum distance of comfort around the agent $\mathrm{j}$ as negative keys for agent $i$
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
{s}_{t + {\delta t}}^{i,n - } = {s}_{t + {\delta t}}^{j} + \Delta {s}_{p} + \epsilon \tag{7}
|
| 131 |
+
$$
|
| 132 |
+
|
| 133 |
+
$\Delta {s}_{p} = \left( {\rho \cos {\theta }_{p},\rho \sin {\theta }_{p}}\right) ,\rho$ being minimum distance of comfort and ${\theta }_{p} = {0.25p\pi },p \in \{ 0,1,\ldots ,7\}$
|
| 134 |
+
|
| 135 |
+
$\epsilon$ is a normally distributed added noise
|
| 136 |
+
|
| 137 |
+
Each agent $i$ thus has $8\left( {M - 1}\right)$ negative keys.
|
| 138 |
+
|
| 139 |
+
Positive samples: Single positive key is taken from state of agent $i$ at time $t + {\delta t}$ after adding normally distributed noise $\epsilon$
|
| 140 |
+
|
| 141 |
+
101
|
| 142 |
+
|
| 143 |
+
$$
|
| 144 |
+
{s}_{t + {\delta t}}^{i, + } = {s}_{t + {\delta t}}^{i} + \epsilon \tag{8}
|
| 145 |
+
$$
|
| 146 |
+
|
| 147 |
+
2 The data augmentation is made clearer by the following diagram given by the authors [1]. For an agent $i$ (in blue) the areas of Collision and Discomfort as shown are used as negative samples.
|
| 148 |
+
|
| 149 |
+
< g r a p h i c s >
|
| 150 |
+
|
| 151 |
+
Figure 1: Social Negative Augmentation
|
| 152 |
+
|
| 153 |
+
103
§ 3.2 DATASETS

The human trajectory prediction models were run on a processed version of the ETH and UCY datasets. The original data consist of 5 video segments of pedestrian trajectories, from which the state of each agent per frame ID was stored; the data were pre-divided into train, test, and validation sets to keep accuracy comparisons uniform. The processed ETH and UCY datasets are available in the linked repository ${}^{3}$.

The imitation and reinforcement learning models used pedestrian data from an open-source simulator based on the OpenAI Gym library ${}^{4}$. The dataset consists of 5000 simulated scenarios in which the positions of 5 random agents are stored at each time step. A validation split of 0.3 was used.
§ 3.3 HYPERPARAMETERS

Apart from the hyperparameters required for regular network training, Social-NCE introduces three model-specific hyperparameters: the temperature $\tau$, the sampling horizon $\delta t$, and the contrastive weight $\lambda$. In the original paper, their values were left at defaults. We extended the previous work by performing a thorough random search over these hyperparameters using WandB [8]. We also performed a sensitivity analysis to check whether hyperparameter tuning offers any significant benefit. The search can be summarised as follows:
Table 1: Search on model hyperparameters

| Hyperparameter | Original Value | Method of Search | Range of Search |
| --- | --- | --- | --- |
| Temperature $\tau$ | 0.1 | Random | 0.1 - 0.5 |
| Sampling Horizon $\delta t$ | 4 | Grid | 1, 2, 3, 4, 5 |
| Contrastive Weight $\lambda$ | 2 | Random | 0 - 50 |
${}^{3}$ https://github.com/StanfordASL/Trajectron-plus-plus/tree/master/experiments/pedestrians/raw
${}^{4}$ https://github.com/vita-epfl/CrowdNav/blob/master/crowd_sim/README.md
${}^{5}$ https://drive.google.com/uc?id=1D2guAxD_EgrKnJFMcLSBkf10SOagz0mr

Details on the loss hyperparameters:
* Temperature ($\tau$): Part of the Social-NCE loss; controls the weight of the penalty and reward for negative and positive samples, respectively.

* Sampling Horizon ($\delta t$): The future time step up to which negative samples are considered for data augmentation.

* Contrastive Weight ($\lambda$): The weight balancing the model's main task loss and the Social-NCE loss.
A similar search was performed separately for the hyperparameters pertaining to data augmentation, keeping the hyperparameters of the previous section at their default values. These were: the minimum separation, the maximum separation, and the weight between maximum separation and noise. The search can be summarised as follows:
Table 2: Search on hyperparameters of data augmentation

| Hyperparameter | Original Value | Method of Search | Range of Search |
| --- | --- | --- | --- |
| Minimum Separation | 0.2 | Random | 0.1 - 0.5 |
| Maximum Separation | 2.5 | Random | 2.2 - 2.8 |
| Weight between maximum separation and noise | 0.2 | Random | 0 - 0.5 |
Details on the augmentation hyperparameters:

* Minimum Separation: The minimum admissible value of $\rho$ in negative augmentation, i.e. the minimum comfortable distance between two agents.

* Maximum Separation: The maximum admissible value of $\rho$ in negative augmentation, i.e. the maximum distance beyond which agents can pass each other without collision.

* Weight between maximum separation and noise: The weight between the added normal noise and the position of the augmented sample.
§ 3.4 EXPERIMENTAL SETUP AND CODE

The encoder models were trained with the Adam optimizer. The Trajectron++, Social-STGCNN, and imitation learning models were trained for 300, 500, and 200 epochs, respectively. The reinforcement learning model was run twice, for 2000 and 5000 episodes. As in the original paper, the models were evaluated on the following metrics:

* Final displacement error (FDE): the Euclidean distance between the predicted output and the ground truth at the last time step.

* Collision rate (COL): the percentage of test cases in which the predicted trajectories of agents run into collisions.

A lower FDE is preferred; however, this reproduction mainly targets the decrease in collision rate. The code was also integrated with WandB to conduct further experiments. This involved constructing a config dictionary listing all hyperparameters and the values each could take, modifying the main function with WandB initialisation and a logging call that records the loss once training is complete, and passing the function to the WandB agent to carry out sweeps. The code can be found at this link ${}^{6}$.
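The following is a minimal sketch of that WandB wiring; the hyperparameter ranges mirror Table 1, while `train_model` is a hypothetical stand-in for the actual training entry point.

```python
import wandb

# Config dictionary listing the hyperparameters and the values each can take
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "temperature": {"min": 0.1, "max": 0.5},
        "contrast_weight": {"min": 0.0, "max": 50.0},
        "sampling_horizon": {"values": [1, 2, 3, 4, 5]},  # grid-style list
    },
}

def main():
    wandb.init()               # WandB initialisation per sweep trial
    cfg = wandb.config         # values chosen by the sweep agent
    loss = train_model(cfg)    # hypothetical training entry point
    wandb.log({"loss": loss})  # log the loss once training completes

sweep_id = wandb.sweep(sweep_config, project="social-nce-stgcnn")
wandb.agent(sweep_id, function=main)
```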
§ 3.5 COMPUTATIONAL REQUIREMENTS

The training code for Trajectron++ and Social-STGCNN was run on Kaggle with a Tesla P100-PCIE-16GB GPU and a CPU with 13 GB RAM (2 cores of an Intel Xeon). The average training runtimes are listed in the table below. Porting to PyTorch Lightning clearly caused no increase in training time.

${}^{6}$ https://anonymous.4open.science/r/social-nce-stgcnn-62D5/README.md
Table 3: Training Runtimes

| Codebase | Original Codebase Training Time | Ported Codebase Training Time |
| --- | --- | --- |
| Trajectron++ | 4 hr 28 min | 4 hr 24 min |
| Social-STGCNN | 8 hr 32 min | 8 hr 30 min |
| Imitation Learning | 53 min | 50 min |
§ 4 RESULTS

The following experiments support the claims made by the authors. We compare the results of training the model from scratch (Reproduced) and of reimplementing the model in PyTorch Lightning (Ported Code) with the results given by the authors (Original Paper).
§ 4.1 RESULTS REPRODUCING ORIGINAL PAPER

Tables 4 and 5 compare the FDE (Final Displacement Error) and COL (Collision Rate) after adding Social-NCE to the Trajectron++ and Social-STGCNN models, across the original paper, the reproduced results, and the ported code.
Table 4: Social-STGCNN

| Dataset | Ported Code FDE | Ported Code COL | Reproduced FDE | Reproduced COL | Original Paper FDE | Original Paper COL |
| --- | --- | --- | --- | --- | --- | --- |
| ETH | 1.442 | 0.53 | 1.249 | 1.11 | 1.224 | 0.61 |
| Hotel | 0.598 | 3.49 | 0.681 | 3.25 | 0.678 | 3.35 |
| Univ | 0.856 | 6.39 | 0.878 | 6.44 | 0.879 | 6.44 |
| Zara1 | 0.492 | 1.29 | 0.515 | 1.02 | 0.515 | 1.02 |
| Zara2 | 0.453 | 3.58 | 0.481 | 3.26 | 0.482 | 3.37 |
| Average | 0.768 | 3.05 | 0.761 | 3.02 | 0.756 | 2.96 |
Table 5: Trajectron++

| Dataset | Ported Code FDE | Ported Code COL | Reproduced FDE | Reproduced COL | Original Paper FDE | Original Paper COL |
| --- | --- | --- | --- | --- | --- | --- |
| ETH | 0.632 | 0.00 | 0.791 | 0.00 | 0.791 | 0.00 |
| Hotel | 0.193 | 0.29 | 0.163 | 0.32 | 0.177 | 0.38 |
| Univ | 0.426 | 2.95 | 0.442 | 3.29 | 0.435 | 3.08 |
| Zara1 | 0.439 | 0.18 | 0.338 | 0.14 | 0.330 | 0.18 |
| Zara2 | 0.452 | 0.95 | 0.281 | 1.02 | 0.255 | 0.99 |
| Average | 0.428 | 0.88 | 0.403 | 0.95 | 0.398 | 0.93 |
Table 6 compares the collision rate and the time taken for the robot to reach its destination after adding Social-NCE to the imitation learning model, across the original paper, the reproduced results, and the ported code.
Table 6: Imitation Learning

| Code | Time (s) | Collision (%) |
| --- | --- | --- |
| Original | 10.33 | 3.40 |
| Reproduced | 10.49 | 3.36 |
| Ported | 10.28 | 3.45 |
Table 7 reports the reward versus the number of training episodes for the implementation of Social-NCE on the reinforcement learning model.
Table 7: Reinforcement Learning

| Episodes | 0 | 1000 | 2000 | 3000 | 4000 | 5000 |
| --- | --- | --- | --- | --- | --- | --- |
| Reward | -0.10 | 0.42 | 0.61 | 0.63 | 0.64 | 0.64 |
§ 4.2 HYPERPARAMETER TUNING

The performance of any model depends critically on the choice of hyperparameters. In the original paper, the values of these hyperparameters were left at defaults. We identified the hyperparameters critical to Social-NCE and conducted a thorough search for the best possible combination. Due to lack of time, this was done only on the Social-STGCNN model, trained on the ETH dataset. The following table summarises the results of the search:
Table 8: Hyperparameter Search

| Hyperparameter | Original Value | Best Value |
| --- | --- | --- |
| Temperature $\tau$ | 0.1 | 0.1412 |
| Sampling Horizon $\delta t$ | 4 | 1 |
| Contrastive Weight $\lambda$ | 2 | 16 |
A similar search was performed separately for the hyperparameters pertaining to data augmentation:
Table 9: Hyperparameter Search

| Hyperparameter | Default Value | Best Value |
| --- | --- | --- |
| Minimum Separation | 0.2 | 0.22 |
| Maximum Separation | 2.5 | 3.1 |
| Weight between maximum separation and noise | 0.2 | 0.24 |
The FDE and collision rate after training the Social-STGCNN model for 400 epochs with the original hyperparameters (Original Parameters) and the tuned hyperparameters (Tuned Parameters) are:
Table 10: Metrics on Original and Tuned Hyperparameters

| Model | FDE | COL |
| --- | --- | --- |
| Tuned Parameters | 0.674 | 3.45 |
| Original Parameters | 0.678 | 3.54 |
§ 5 DISCUSSION

Our results support the authors' claim that modelling social knowledge through the addition of negative samples reduces the collision rate of trajectory prediction models. Both when training from scratch in the original code and in the ported code, the results remained consistent with those of the original paper.
1. In human trajectory forecasting, the addition of Social-NCE to Trajectron++ and Social-STGCNN yielded average decreases in collision rate of 35.7% and 35.1%, respectively (Tables 4 and 5), in our reproduced results. The Final Displacement Error (FDE) deviated by less than 1%, showing that Social-NCE adds to the robustness of the models without affecting their accuracy.
2. In the imitation learning model, the collision rate decreased by 68.9% on average in our reproduced results, with the time taken showing little deviation (Table 6).
3. As in the original paper, the Social-NCE addition to the Rainbow-DQN based reinforcement learning model reaches a reward of 0.6 within 2000 episodes, compared to 4000 episodes for the original reinforcement learning model (Table 7).
The hyperparameter tuning also proved helpful, leading to an accuracy increase of 0.91%. The loss hyperparameters determined the sensitivity of the model. The contrastive weight determined the emphasis placed on the Social-NCE loss: the greater the emphasis, the better the model learnt to differentiate between positive and negative samples, but at the expense of proximity to the actual training examples. The effect of changing the temperature hyperparameter remains difficult to understand analytically.
Hyperparameter search, though tedious, led to an overall increase in accuracy. In tasks like trajectory prediction and motion forecasting, squeezing out as much accuracy as possible can be crucial. One caveat is that Social-STGCNN has a long running time, so each sweep took a considerable amount of time.
The effect of the data augmentation hyperparameters appears highly variable. Results are likely to vary greatly with the choice of dataset and the nature of the problem, because these hyperparameters encode physical constraints on the model and hence may lead to different results on different datasets.
Further, the best results were obtained with a contrastive weight of 16, against a default of 2. Although the values differ markedly, this reinforces confidence in the proposed Social-NCE loss.
§ 5.1 WHAT WAS EASY

The authors provide a detailed public codebase implementing Social-NCE on Trajectron++, Social-STGCNN, and the imitation learning model. Further, they shared the codebase for the reinforcement learning model. All the codebases include instructions on setting up the environment and log the important metrics, which proved helpful for reproduction.
§ 5.2 WHAT WAS DIFFICULT

Porting the implementation of Social-NCE in the Trajectron++ and Social-STGCNN models from PyTorch to PyTorch Lightning required an understanding of those models and their original codebases, which took additional time. Training the models from scratch required substantial compute. All training was done on cloud GPUs with limited runtimes, which often fell short of the time required.
§ 5.3 COMMUNICATION WITH ORIGINAL AUTHORS

We emailed the authors with our queries on their code implementation and on which hyperparameters would be most important to tune for model performance. The authors replied promptly and also shared the codebase for the reinforcement learning model. Their input helped with several crucial points in this report.
§ 6 FUTURE WORK

We originally planned the following additional experiments, which we could not finish due to lack of time. We list them below and believe future work on this paper can proceed in these directions.
* A hyperparameter search on Social-NCE in Trajectron++, the imitation learning model, and the Rainbow-DQN based reinforcement learning model, with a comparison of how the results vary across models.

* Implementation of Social-NCE on the Social-LSTM and Directional-LSTM models on the Trajnet++ benchmark, for which results are given in the original paper.

* Implementation of Social-NCE on state-of-the-art models in other benchmarks, such as the PGP model [9] for the nuScenes dataset.
# Reproducibility study for "Explaining in Style: Training a GAN to explain a classifier in StyleSpace"
Anonymous Author(s)

Affiliation

Address

email

## Reproducibility Summary
## Scope of Reproducibility

This work aims to reproduce Lang et al.'s StylEx [9], which proposes a novel approach to explain how a classifier makes its decision. They claim that StylEx creates a post-hoc counterfactual explanation whose principal attributes correspond to properties that are intuitive to humans. The paper claims a large range of real-world applicability. However, StylEx proves difficult to reproduce due to its time complexity and holes in the information provided. This paper tries to fill in these holes by: i) re-implementing StylEx in a different framework, and ii) creating a low-resource training benchmark.
## Methodology

We use the authors' provided Python notebook to confirm their AttFind algorithm. However, to test the authors' claims, we reverse engineer their architecture and completely re-implement their training algorithm. Due to the computational cost of training, we use their pre-trained weights to test our reconstruction. To expedite training, a smaller-resolution dataset is used. Training took 9 hours for 50,000 iterations on a Google Colab Nvidia K80 GPU. The hyperparameters are listed in the proceedings.
## Results

We reproduce the StylEx model in a different framework and test the AttFind algorithm, verifying the original paper's results for the perceived age classifier. However, we could not reproduce the results for the other classifiers used, due to time limitations in training and the absence of their pre-trained models. In addition, we verify the paper's claim of providing human-interpretable explanations by reproducing the two user studies outlined in the original paper.
## What was easy

The notebook supplied by the authors loads their pre-trained models and reproduces part of the results in the paper. Furthermore, their algorithm for discovering classifier-related attributes, AttFind, is well outlined in their paper, making the notebook easy to follow. Lastly, the authors were responsive to our inquiries.
## What was difficult

A major difficulty was that the authors provide only a single pre-trained model, so verifying most of the main claims requires training code. Moreover, the paper leaves out information about their design choices and experimental setup. In addition, the authors do not provide an implementation of the models' architecture or training. Finally, the practical audience is limited by the resource requirements.
## Communication with original authors

We had modest communication with the original author, Oran Lang. Our discussion was limited to inquiries about design choices not mentioned in the paper. They were able to clarify the encoder architecture and some of their experimental setup. However, their training code could not be made available due to internal dependencies.

## 1 Introduction
As the field of machine learning (ML) develops and its algorithms become more prevalent in society, concerns about the explainability of black-box models become pivotal. For problems with high societal impact, there is understandable apprehension towards trusting models that do not provide justification. For applications such as medical imaging and autonomous driving, there is a need for some level of human supervision. Even if a model has high performance, as neural networks often do, without the ability for human interpretation its use will be limited.
In order to gain trust in systems powered by ML models, the models need to be interpretable and explainable. The two concepts are regularly used interchangeably, yet have subtle differences. Interpretability is the degree to which humans can understand the cause of a decision [10]. Deep neural networks such as classifiers are often perceived as "black boxes" whose decisions are opaque and hard for humans to understand. Explaining the decision of classifiers can reveal model biases [8] and also provide support to downstream human decision-makers. Explainability, on the other hand, is linked to the internal logic of a model: it focuses on explaining the data representation within the network. Explainability implies interpretability; however, the implication is not bidirectional.
In recent years, there has been increasing attention to the explainability of deep network classifiers. Among the various forms of explanation, counterfactual explanations are gaining increasing attention [11, 2, 3]. To discover and visualize the attributes used to generate counterfactual explanations, a natural candidate is generative models. In [13] it was observed that StyleGAN2 [7] tends to contain a disentangled latent space (i.e., the "StyleSpace") which can be used to extract individual attributes. The authors base their proposed methodology [9] on this observation. Though [12] propose a similar architecture, Lang et al. assert that by integrating the classifier into the training of StylEx they can obtain principal attributes that are specific to the classification task. Additionally, they suggest that StylEx can be applied to a large variety of complex, real-world tasks, which makes its replicability especially intriguing.
Our work aims to reproduce the claims made by Lang et al. and confirm their results. Their paper reports many experiments in detail to justify their claims, but does not describe the experimental setups for architecture and training. Since not all the information needed is available without contacting the authors, we argue that this paper cannot be considered fully reproducible.
To remedy the holes in reproducibility and aid future work that builds on or applies StylEx, we rebuild their proposed architecture and training algorithm, after correspondence with the authors.
## 2 Scope of reproducibility

To determine the scope of reproduction, we quote Lang et al.'s main claims:
Claim 1: [They] propose the StylEx model for classifier-based training of a StyleGAN2, thus driving its StyleSpace to capture classifier-specific attributes.

Claim 2: A method to discover classifier-related attributes in StyleSpace coordinates, and use these for counterfactual explanations.

Claim 3: StylEx is applicable for explaining a large variety of classifiers and real-world complex domains. [They] show it provides explanations understood by human users.
To reproduce Claim 2, a trained model and the AttFind algorithm are sufficient; both are contained in the authors' notebook. Claim 1 requires a network trained conditioned on a classifier and a network trained without, while Claim 3 requires multiple networks trained on multiple domains. However, to train these models, the architecture and training code are necessary, which, as stated previously, are not open source or thoroughly documented. In addition, the computational cost of training the models is high. Thus, to verify these claims our goals are to:
- Reconstruct their architecture and port the pre-trained weights to PyTorch

- Evaluate whether the principal attributes we obtain correspond to the same features using their pre-trained weights

- Retrain on datasets of smaller images and analyze the scalability of their method using fewer training steps and a smaller architecture

- Conduct two user studies on visual coherence and distinctness to show that the extracted attributes are interpretable by humans
To ease reproduction for future work, we rebuilt the StylEx architecture in a different framework, to gain a deeper understanding of the model and become better equipped to tackle training. As an added benefit, this contribution makes StylEx more accessible for classifiers trained in PyTorch.
## 3 Background

There have been many attempts to extract explanations from classifiers, most of which utilize heatmaps of important features. However, heatmaps struggle to visualize features that are not spatially localized, such as color or shape. Rather than identifying areas of interest, one can provide an explanation through a "what-if" example where the features are slightly altered. These forms of justification, known as counterfactual examples, have been found to be more interpretable for non-localized features. However, producing them often requires domain knowledge and handcrafted examples. Lang et al. automate this and utilize machine learning to generate realistic counterfactual examples. This section outlines how they claim to achieve this with their two major contributions, StylEx and AttFind.
### 3.1 StylEx

Lang et al. generate examples through a neural generative model they dub StylEx. StylEx expands on the popular generative adversarial network StyleGAN v2, which generates realistic images by creating competition between two networks.
One of these two networks, referred to as the generator, $G$, attempts to generate a realistic image. To this end, the generator samples from a latent space, $z \in \mathbb{R}^{n}$, with a simple probability distribution such as $z_{i} \sim \mathcal{N}(0, 1)$. The sampled vector is pushed through a series of linear layers called the mapping network to create a new latent vector, $w$, with a more complex probability distribution. This vector is used as input to a number of StyleBlocks based on the logarithmic resolution of the image. StyleBlocks consist of an affine transform and an upsampling layer. The affine transform, $A_{r}$, maps $w$ to yet another vector $s_{r}$, where $r$ denotes the block number or resolution of the block. The concatenation of all $s_{r}$ is known as the style, or attribute, vector, and the space it spans is known as the StyleSpace. The attribute space is emphasized due to recent observations that it is less entangled than the latent space. The second network is the discriminator, $D$. This network is trained to differentiate between fake and real images, forcing the generator to slowly improve its creation of fake images. In this way, the discriminator can be seen as an adaptive loss function.
The flaw with a direct application of StyleGAN is that it generates from a random latent space. To explain a classification, we would like to condition it on a particular image of interest, but StyleGAN has no mechanism for extracting the attributes of an image. To fix this, Lang et al. added a third, encoding network to StylEx, $E$. Rather than using a randomly sampled $z$ and the mapping network to obtain $w$, StylEx uses the output of the encoder, $z = E(x)$, where $x$ is an input image. StylEx adds an extra loss condition that the reconstructed image, $x^{\prime} = G(E(x))$, should be approximately $x$. Thus, the encoder combined with the affine transformations allows us to extract the attributes of an input image.
StylEx is not unique in adding an encoder to StyleGAN to explain a classifier. However, other methods do not include the classifier in the training of the network. StylEx incorporates the classifier into training by appending its output to the encoded $z$ vector. This results in another loss condition, $C(x) \approx C(x^{\prime})$.
### 3.2 AttFind

Once the attributes of an image have been extracted, a counterfactual explanation can be derived from the attributes with the most effect on the classifier's decision. Lang et al. propose attribute find (AttFind) to discover the most influential attributes, as sketched below. The algorithm adjusts the attributes one at a time by a fixed amount $d$ and observes the effect on the classification, $\Delta c_{s}$. The $k$ attributes with the highest $\Delta c$ create a local explanation for an image's classification. To approximate a global explanation, the principal attributes are determined by the mean $\Delta c$ across images in a set.
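A minimal PyTorch sketch of the AttFind idea for a single image; the `generator`/`classifier` call signatures and the shift size are our assumptions, not the authors' code.

```python
import torch

@torch.no_grad()
def attfind_single(generator, classifier, style, target, d=1.0, k=5):
    # style: flattened StyleSpace vector for one image
    base = classifier(generator(style))[0, target]
    deltas = torch.zeros(style.numel())
    for s in range(style.numel()):
        shifted = style.clone()
        shifted[s] += d  # adjust one StyleSpace coordinate by d
        deltas[s] = classifier(generator(shifted))[0, target] - base  # delta c_s
    return deltas.abs().topk(k).indices  # top-k most influential coordinates
```

Averaging these per-image deltas over a set of images approximates the global principal attributes.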
## 4 Reproduction approach

Reimplementing StylEx has been split into two main tasks to ease resource requirements. The first task consists of rebuilding StylEx in a different framework; the second is training the model from scratch. In this section, we discuss how we rebuilt the model architecture and training process. Additionally, we include details, missing from the original paper, that we obtained through correspondence.
### 4.1 Model descriptions

To test Claim 1 and Claim 3, at least two models are necessary. Because only one pre-trained model is available, a new model needs to be trained. However, this is computationally expensive as it builds on StyleGAN ${}^{1}$. This led us to evaluate reproducibility in two ways. Firstly, we recreate their architecture in PyTorch, using their pre-trained weights to bypass the training limitation. Secondly, we attempt to train a model from scratch using less complex datasets with smaller resolutions to verify the claims requiring multiple models. In the following sections, we explain how we reconstruct the StylEx architecture and training process.
#### 4.1.1 Rebuilding StylEx

The authors' notebook includes a TensorFlow StylEx pre-trained on the FFHQ [6] dataset to find the attributes most influential in age classification.
Taking advantage of the pre-trained model's raw parameters, we reverse engineer the architecture of each component of StylEx and implement it in PyTorch. Subsequently, the pre-trained weights are transferred into the reconstructed StylEx to confirm the correct implementation of the structure. Transferring the pre-trained parameters from a TensorFlow model to a PyTorch model turned out to be challenging and non-trivial.

We start by building the architecture of the MobileNetV1 [5] classifier, as described in the summary of their model, in both TensorFlow and PyTorch. We follow this approach so that we can compare how the results of each layer differ depending on the framework. We notice that for 2D convolutional layers, PyTorch and TensorFlow pad the images differently, leading to different results. To address this, we add a ConstantPad2d layer in our PyTorch architecture before each convolution with a stride of 2. In addition, we change the default hyperparameters of PyTorch's BatchNorm2d to match the corresponding TensorFlow defaults.
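The following is a minimal sketch of these two alignment fixes; the 3x3 kernel with stride 2 is our assumption for the affected MobileNet blocks.

```python
import torch.nn as nn

# TensorFlow's "same" padding for a 3x3, stride-2 convolution pads
# asymmetrically (extra row/column on the bottom/right); an explicit
# ConstantPad2d((left, right, top, bottom)) reproduces it in PyTorch.
tf_like_conv = nn.Sequential(
    nn.ConstantPad2d((0, 1, 0, 1), 0.0),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=0),
)

# TensorFlow BatchNorm defaults to epsilon=1e-3 and momentum=0.99;
# PyTorch defines momentum as (1 - TF momentum), i.e. 0.01.
tf_like_bn = nn.BatchNorm2d(64, eps=1e-3, momentum=0.01)
```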
The next step is to follow the same procedure for the encoder and the StyleGAN components. We use the official StyleGAN2 implementation in PyTorch by NVlabs [7] and modify the initial architecture to align with the StylEx model. In particular, instead of only using the encoding of an image $x$ as input to the generator, we also concatenate the classifier's output logits. Additionally, the modified generator returns the StyleSpace, which contains the classifier-specific attributes. For the encoder, we use the same architecture as StyleGAN2's discriminator. Finally, we transfer the pre-trained weights to our components.

The last step is to load the rebuilt StylEx model in the provided notebook to confirm that the conversion of the models is successful and to reproduce the results provided in the notebook.
#### 4.1.2 Training the model

Lang et al. assert that StylEx works for a wide range of classifiers and datasets. The results shown in their paper all use high-resolution images. High resolution comes at a high computational cost, as StylEx is built on top of a StyleGAN; high-resolution StyleGANs can take over a month to train on a single-GPU system. To tackle this, we train our model on a low-resolution MNIST dataset. In this way, we investigate whether their model works well on low-resolution datasets while relieving computational requirements.
The training is as outlined in their paper. The loss function for the StylEx model is broken into seven parts: ${\mathcal{L}}_{x}$, ${\mathcal{L}}_{w}$, ${\mathcal{L}}_{LPIPS}$, ${\mathcal{L}}_{adv}$, ${\mathcal{L}}_{PLR}$, ${\mathcal{L}}_{KL}$, and ${\mathcal{L}}_{GP}$. ${\mathcal{L}}_{x}$ is the L1 loss between the real image, $x$, and the reconstruction of that image, $G(E(x))$. ${\mathcal{L}}_{LPIPS}$ is the Learned Perceptual Image Patch Similarity (LPIPS) of the two images; this loss is a metric, other than raw pixel-value error, for the similarity between two images. ${\mathcal{L}}_{w}$ is the L1 loss between the encoding of the original image, $w = E(x)$, and the encoding of the reconstructed image, $w^{\prime} = E(G(E(x)))$. Collectively, these three losses make up the reconstruction loss, ${\mathcal{L}}_{\text{rec}}$, i.e.,
$$
{\mathcal{L}}_{\text{rec}} = {\mathcal{L}}_{w} + {\mathcal{L}}_{x} + {\mathcal{L}}_{\text{LPIPS}}.
$$

In the implementation, each loss term in ${\mathcal{L}}_{\text{rec}}$ has a weighting coefficient to even out the magnitude of its contribution, as sketched below. The weights are detailed further in Section 5.2.
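A minimal sketch of the weighted reconstruction loss using the `lpips` package; the 0.1/1/0.1 weights follow Table 2 (Appendix C), and the encoder/generator interfaces are assumptions.

```python
import torch.nn.functional as F
import lpips  # pip install lpips

lpips_fn = lpips.LPIPS(net="alex")  # learned perceptual similarity

def reconstruction_loss(E, G, x, w_w=0.1, w_x=1.0, w_lpips=0.1):
    # L_rec = w_w * L_w + w_x * L_x + w_lpips * L_LPIPS
    w = E(x)
    x_rec = G(w)
    loss_x = F.l1_loss(x_rec, x)            # pixel-space L1
    loss_w = F.l1_loss(E(x_rec), w)         # latent-space L1
    loss_lpips = lpips_fn(x_rec, x).mean()  # perceptual distance
    return w_w * loss_w + w_x * loss_x + w_lpips * loss_lpips
```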
${\mathcal{L}}_{KL}$ is the KL divergence between the classification probabilities of the original image and those of its reconstruction. ${\mathcal{L}}_{GP}$ and ${\mathcal{L}}_{PLR}$ are the gradient penalty and path length regularization losses described in the WGAN-GP [4] and StyleGAN2 [7] papers, respectively. ${\mathcal{L}}_{adv}$ is the Wasserstein adversarial generator loss of $x^{\prime}$. Finally, the discriminator's loss is the Wasserstein adversarial discriminator loss.
---

${}^{1}$ StyleGAN can take on the order of 40 days on one GPU for high resolutions [6].

---
## 5 Experimental setup

### 5.1 Datasets

The pre-trained models the authors offer are trained on the Flickr-Faces-HQ dataset [6] ${}^{2}$. The dataset contains 70,000 high-quality PNG images at 1024 $\times$ 1024 resolution with large variations in age, ethnicity, and image background. They use it to find the top attributes contributing to the perception of a person's age (young or old) or gender (male or female). They also preprocess the images by lowering the resolution to 256 $\times$ 256. The official dataset is unlabeled; it is not clear whether the authors' dataset is an internal, labeled Google version or an unofficially labeled one.
For training, the MNIST [1] dataset is used due to its simplicity, as prepared in the snippet below. Only the examples with labels 8 or 9 are kept, and the resolution is increased to ${32} \times {32}$. MNIST was chosen because its images remain recognizable to humans even when compressed to ${16} \times {16}$ or $8 \times 8$. Unfortunately, LPIPS relies on neural networks with a fixed number of pooling layers; without editing the LPIPS implementation, the lowest possible resolution is 32.
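A minimal torchvision sketch of this preparation; variable names are our own.

```python
import torch
from torchvision import datasets, transforms

# Upscale 28x28 MNIST digits to 32x32 (the minimum LPIPS resolution)
transform = transforms.Compose([transforms.Resize(32), transforms.ToTensor()])
mnist = datasets.MNIST(root="data", train=True, download=True, transform=transform)

# Keep only the examples labeled 8 or 9 for the binary classifier
keep = [idx for idx, label in enumerate(mnist.targets) if label in (8, 9)]
subset = torch.utils.data.Subset(mnist, keep)
```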
### 5.2 Hyperparameters

A complete list of hyperparameters can be found in Table 2 (see Appendix C). A hyperparameter search was not performed for two reasons. First, the training time is long, which is constraining even at very low resolutions. Second, the criterion for evaluating success is based on human users, making automated hyperparameter tuning unintuitive.
### 5.3 Computational requirements

Most of our experiments were conducted on Google Colab alongside our own systems. For training our models we use Colab's NVIDIA Tesla K80 GPU. Our code is provided in the following GitHub repository: MLRC_2021_FALL-E358.

The basic architecture of the StyleGAN2 was adapted from NVlabs' GitHub repository. As previously mentioned, we modify the basic architecture to align with StylEx's generator and load Lang et al.'s pre-trained weights. The training code was adapted from labml.ai Annotated Paper Implementations' StyleGAN implementation.

Training the model on MNIST for 50,000 iterations takes on the order of nine hours on Colab. The time required for AttFind depends on the resolution, the latent dimension, and the number of images in the dataset. Finding the attributes of a single image took approximately one minute for an image of resolution 32 and a latent space of 514.
## 6 Results

### 6.1 Rebuilding StylEx results

To support Claim 1, we port their pre-trained models to PyTorch and test whether our results agree. In Figure 3 (see Appendix A), we compare the results from our PyTorch StylEx to their TensorFlow implementation. There are minor differences in the probabilities from the PyTorch classifier, likely caused by differences in default values or module implementations between the two frameworks.
### 6.2 AttFind results

We are now equipped to test our PyTorch models with the AttFind method and inspect the principal attributes of the age classifier, meaning the attributes with the highest contribution to the young or old classification. To this end, we run the AttFind algorithm, with our classifier and generator as inputs, using the 250 latent variables of the FFHQ dataset. As can be seen in Figures 1 and 5 (see Appendix B), our model obtains the same attributes as in the original paper.
In addition, we implement the Independent selection strategy to generate image-specific explanations, as described in the original paper. This method is a local explanation that returns the top-k attributes affecting the classifier's decision for a single image rather than the entire dataset. The results are shown in Figure 2.

These results support the authors' Claim 2, that AttFind discovers significant attributes for a classifier's decision. Notably, in Figure 1c the reported probability of the top-left image is 17% in the paper, while the probability we find with both our classifier and their notebook classifier is 39%.
---

${}^{2}$ https://github.com/NVlabs/ffhq-dataset

---
Figure 1: Top 4 attributes for the perceived age classifier detected by our model. These images show how the probability of classifying a person as young or old changes based on each attribute. In the first column of each image we display the probability of the person being classified as old, and in the second column the probability of them being classified as young.
Figure 2: Independent selection strategy. Top-5 detected attributes for explaining a perceived-age classifier for a specific image. The attributes obtained differ from those presented in Figure 1, which are computed based on the largest average effect over 250 images. The probabilities displayed correspond to the person being classified as old.
### 6.3 Quantitative evaluation results

To validate the authors' Claim 3, that the obtained attributes are identifiable by humans, we conduct the two user studies explained in the paper. Both studies (classification and verbal description) aim to show that the top extracted attributes are distinct, visually coherent, and usable as counterfactual explanations.

The material used for the classification study was obtained from our PyTorch StylEx model on the perceived gender classifier (top 6 attributes) and from the authors' supplementary material for the perceived age classifier (top 4 attributes). The verbal description study combines a mixture of attributes from our and the authors' models, explaining the Face and Cats/Dogs classifiers. Results for both studies were provided by 30 users (different per study).
Table 1 shows that the results we obtain are within a standard deviation of theirs, verifying the contribution that StylEx provides attributes easily distinguishable by humans.

Table 3 depicts the three most common words used to describe the most prominent attribute changing in the images (see Appendix D). Inspecting the results, we draw two main conclusions. First, for all coordinates except skin color (i.e., the ${5}^{\text{th}}$ row in the Face (age/gender) classifiers), the majority of users use the same word in their descriptions. Second, the most common word differs per attribute, indicating that each attribute is unique. Our results agree with those provided in the original paper.
### 6.4 Reconstruction Generalization

To further investigate the proposed model, we create new latent variables from FFHQ images using our architecture with their pre-trained weights. We then reconstruct the images from these latent variables using our pre-trained generator. Finally, we follow the same process using their architecture and compare the resulting images. Our StylEx reconstructs a clearer image, while their model's output is more blurred. This may be caused by formatting differences between the frameworks.
<table><tr><td/><td>Theirs</td><td>Ours</td></tr><tr><td>Perceived Gender</td><td>${0.96}\left( {\pm {0.047}}\right)$</td><td>${0.94}\left( {\pm {0.031}}\right)$</td></tr><tr><td>Perceived Age</td><td>${0.983}\left( {\pm {0.037}}\right)$</td><td>${0.978}\left( {\pm {0.025}}\right)$</td></tr></table>
Table 1: Classification study results. Correct identification of the top-6 attributes.
### 6.5 Training

The training proved quite volatile: ${\mathcal{L}}_{\text{rec}}$ would get stuck in local minima during training. Examples of the images reconstructed by the fully trained model are shown in Appendix E.
Lang et al. experimented with two training regimens. The first regimen was trained using only $E(x)$ as $w$, the input to the generator, and the loss above. The second regimen alternated between using $E(x)$ and a randomly generated encoding, $\bar{w}$. This $\bar{w}$ is created by applying a mapping network to $z$, where $z \sim \mathcal{N}\left( {0}^{n}, {1}^{n} \right)$ and $n$ is the dimensionality of $w$. For the randomly generated ${\bar{x}}^{\prime} = G\left( \bar{w} \right)$, only the adversarial loss is calculated, so training with $\bar{w}$ can be viewed as training a vanilla StyleGAN. Because we are unsure which method was used for the results in their paper and notebook, we experimented with both. However, only the first regimen converged.
Though we were able to train a model, due to time constraints we were unable to fully investigate Claim 1. Likewise, we were unable to run AttFind on the trained model to fully test Claim 3.
## 7 Discussion

Using the definition of reproducibility ${}^{3}$ by the U.S. National Science Foundation (NSF) subcommittee on replicability in science, it is difficult to assess Lang et al.'s reproducibility. All details regarding the experimental setup, such as the hyperparameters, the hours of training, the number of steps, the labels of the datasets, etc., are omitted, so recreating the exact materials of the original investigators is difficult. Since the definition hinges on using the same materials and we cannot satisfy that condition, we cannot determine reproducibility in this strict sense.
Instead, we use a looser definition of reproducibility: the ability of another researcher to test the authors' claims. We found that, given enough time, StylEx is seemingly reproducible. However, given a limited time budget such as our own, the paper is not fully reproducible, so we can only provide unit tests of the claims. The following sections discuss the findings of Section 6 and to what degree they confirm reproducibility, claim by claim.
### 7.1 Claim 1

The most difficult claim to investigate on a limited time budget is the effect of classifier-based training on the StyleSpace. The original paper trains three models: StylEx with and without the classifier integrated in training, and StyleGAN v2. We found that, once the training algorithm is implemented correctly, training all three models takes at least 24 hours for 50,000 iterations on one GPU, even for the simple MNIST dataset. The authors stated that training StylEx took approximately a week on 8 GPUs; over two weeks of total training time is beyond our constraints.
In addition, we observed that training is volatile. ${}^{4}$ The reconstruction error stagnates in a local minimum before suddenly dipping; however, the model was not always able to escape the local minimum within 50,000 iterations. This suggests that, though their results are likely replicable, the replication may be stochastic, which again hinders reproducibility when time is limited.
### 7.2 Claim 2

The claim the authors document most thoroughly is Claim 2, their AttFind method. Because the method is implemented in the provided notebook, testing reproducibility was easy. We were able to verify that, for the perceived age classifier, our model obtains the same top attributes. We conclude that their method can discover the most influential classifier-related attributes.
---

${}^{3}$ "reproducibility refers to the ability of a researcher to duplicate the results of a prior study using the same materials as were used by the original investigator"

${}^{4}$ An example of successful training can be found here, and one where the model failed to converge here.

---
In addition to their notebook, we modified the AttFind method to find the principal attributes of a single image, as shown in Figure 2. This validated the sub-claim that StylEx can provide image-specific explanations: rather than finding the globally important attributes, the model can find the locally important attributes for a particular image.
### 7.3 Claim 3

The authors claim that StylEx is applicable to a variety of real-world problems. Applicability can be interpreted in two ways: as it being possible to apply StylEx to a variety of domains, or as it being practical to do so. From Figures 1 and 2, it is possible to use StylEx for explaining an age classifier, so it can explain a real-world problem. From Figure 6 (see Appendix E), we found that StylEx can be trained to, at minimum, reconstruct MNIST data, and therefore handle multiple domains.

Though we have found that it is possible, we have also found that it is seemingly impractical: every domain requires the model to be retrained, meaning days or weeks of training per domain.
### 7.4 What was easy

The open-source notebook is very well structured, which, combined with the pseudo-code outlined in Algorithm 1 of their paper, made the AttFind method easy to replicate. In addition, the provided pre-trained models helped us derive some of the underspecified components of the StylEx model.
### 7.5 What was difficult

As already emphasized, there are many difficulties in reproducing this paper. StylEx is built on top of several previous papers, making the knowledge needed for implementation substantial. Lang et al. proposed a model without providing code; the model is computationally expensive, exhibits volatile training behavior, and is sensitive to hyperparameters, which in our case were unknown. Even when scaling down the complexity of the model using smaller resolutions, the time cost of training exceeded what was feasible within our constraints.

Taking shortcuts to subvert these difficulties brought its own challenges. We found loading weights from TensorFlow into PyTorch deceptively complex and far from trivial due to differences between the frameworks. Even evaluating their notebook came with difficulties, as the dataset they trained on, FFHQ, does not officially have labels, so the details of their dataset were unknown.
### 7.6 Future Work

The primary goal of this paper was to reproduce the work of Lang et al.; however, through reimplementing their code, we found two open avenues for future research. First, the paper focused on general image explanations but did not show examples of misclassified data; it would be interesting to see what insights StylEx can offer there. Second, the paper compared StylEx only with StyleGAN v2 models. AttFind seems applicable to general autoencoders, not only GANs. Viewing StylEx as an autoencoder rather than a GAN seems like a promising angle for scaling to a similar counterfactual generator.
## References

[1] L. Deng. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Processing Magazine, 29(6):141-142, 2012.
[2] Y. Goyal, Z. Wu, J. Ernst, D. Batra, D. Parikh, and S. Lee. Counterfactual visual explanations. In International Conference on Machine Learning, pages 2376-2384. PMLR, 2019.

[3] Y. Goyal, Z. Wu, J. Ernst, D. Batra, D. Parikh, and S. Lee. Counterfactual visual explanations. In International Conference on Machine Learning, pages 2376-2384. PMLR, 2019.

[4] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville. Improved training of Wasserstein GANs. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

[5] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam. MobileNets: Efficient convolutional neural networks for mobile vision applications, 2017.

[6] T. Karras, S. Laine, and T. Aila. A style-based generator architecture for generative adversarial networks, 2019.

[7] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila. Analyzing and improving the image quality of StyleGAN. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8107-8116, 2020.

[8] B. Kim, M. Wattenberg, J. Gilmer, C. Cai, J. Wexler, F. Viegas, et al. Interpretability beyond feature attribution: Quantitative testing with concept activation vectors (TCAV). In International Conference on Machine Learning, pages 2668-2677. PMLR, 2018.

[9] O. Lang, Y. Gandelsman, M. Yarom, Y. Wald, G. Elidan, A. Hassidim, W. T. Freeman, P. Isola, A. Globerson, M. Irani, and I. Mosseri. Explaining in style: Training a GAN to explain a classifier in StyleSpace. ArXiv, abs/2104.13369, 2021.

[10] T. Miller. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267:1-38, 2019.

[11] R. K. Mothilal, A. Sharma, and C. Tan. Explaining machine learning classifiers through diverse counterfactual explanations. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pages 607-617, 2020.

[12] K. Schutte, O. Moindrot, P. Hérent, J.-B. Schiratti, and S. Jégou. Using StyleGAN for visual interpretability of deep learning models on medical images. arXiv preprint arXiv:2101.07563, 2021.

[13] Z. Wu, D. Lischinski, and E. Shechtman. StyleSpace analysis: Disentangled controls for StyleGAN image generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12863-12872, 2021.
Figure 3: Comparison of StylEx model results. The probabilities shown correspond to being classified as young.

Figure 4: Comparison of StylEx models encoding and then reconstructing an image. Both models use their encoder and classifier to produce the latent variable; the image is then reconstructed from the latent variable using their generator.

## B AttFind: Lang et al.'s top attributes
Figure 5: Top 4 attributes for the perceived age classifier detected by Lang et al.'s pre-trained model. These images show how the probability of classifying a person as young or old changes based on each attribute. In the first column of each image, we display the probability of the person being classified as old, and in the second column the probability of them being classified as young.
| | Our StylEx | Lang et al.'s StylEx |
|---|---|---|
| Step size | 1e-3 | 2e-4 |
| Number of steps | 50,000 | 250,000 |
| Total loss weights $(\mathcal{L}_{rec}, \mathcal{L}_{adv}, \mathcal{L}_{c}, \mathcal{L}_{PL})$ | 1, 1, 1, 1 | 1, 1, 1, ? |
| Reconstruction loss weights $(\mathcal{L}_{w}, \mathcal{L}_{x}, \mathcal{L}_{LPIPS})$ | 0.1, 1, 0.1 | 0.1, 1, 0.1 |
| Latent dimension | 32 | 512 |
| Number of classes | 2 | 2 (depending on data) |
| Image resolution | 32 | 256 |
| Classifier structure | DenseNet121 | MobileNet |
| Optimizer | Adam | ? |
Table 2: Training hyperparameters
(a) Cats/Dogs

| Word 1 | Word 2 | Word 3 |
|---|---|---|
| eye: 0.73 | pupil: 0.16 | shape: 0.1 |
| mouth: 0.73 | open: 0.3 | tongue: 0.16 |
| ear: 0.90 | right: 0.06 | become: 0.06 |

(b) Face

| Word 1 | Word 2 | Word 3 |
|---|---|---|
| eyebrow: 0.90 | thick: 0.17 | brow: 0.07 |
| tooth: 0.30 | lip: 0.10 | disappear: 0.07 |
| glass: 0.90 | size: 0.13 | bigger: 0.10 |
| mouth: 0.70 | open: 0.40 | lip: 0.10 |
| bright: 0.37 | skin: 0.30 | light: 0.27 |
| mustache: 0.93 | facial: 0.07 | hair: 0.07 |
| eye: 0.77 | color: 0.47 | eyelash: 0.13 |
Table 3: Verbal description study results. The 3 most common words used in user descriptions for the Cats/Dogs (a) and Face (age/gender) (b) classifiers. This user study supports the distinctness of each attribute, since the most common word used to describe each attribute change is different per attribute.
Figure 6: An example of image reconstruction on the MNIST dataset. The StylEx had converged; however, it was trained conditioned on a classifier that always predicted 8, and was thus effectively trained without a classifier. Its loss curves can be found here.
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SK8gAhfX2AK/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,245 @@
§ REPRODUCIBILITY STUDY FOR "EXPLAINING IN STYLE: TRAINING A GAN TO EXPLAIN A CLASSIFIER IN STYLESPACE"
Anonymous Author(s)

Affiliation

Address

email

§ REPRODUCIBILITY SUMMARY

§ SCOPE OF REPRODUCIBILITY

This work aims to reproduce Lang et al.'s StylEx [9], which proposes a novel approach to explain how a classifier makes its decision. They claim that StylEx creates a post-hoc counterfactual explanation whose principal attributes correspond to properties that are intuitive to humans. The paper boasts a large range of real-world practicality. However, StylEx proves difficult to reproduce due to its time complexity and holes in the information provided. This paper tries to fill in these holes by: i) re-implementing StylEx in a different framework, and ii) creating a low-resource training benchmark.
§ METHODOLOGY

We use their provided Python notebook to confirm their AttFind algorithm. However, to test the authors' claims, we reverse engineer their architecture and completely re-implement their training algorithm. Due to the computational cost of training, we use their pre-trained weights to test our reconstruction. To expedite training, a smaller-resolution dataset is used. The training took 9 hours for 50,000 iterations on a Google Colab Nvidia K80 GPU. The hyperparameters are listed in the appendix.

§ RESULTS

We reproduce the StylEx model in a different framework and test the AttFind algorithm, verifying the original paper's results for the perceived age classifier. However, we could not reproduce the results for the other classifiers used, due to time limitations in training and the absence of their pre-trained models. In addition, we verify the paper's claim of providing human-interpretable explanations by reproducing the two user studies outlined in the original paper.
§ WHAT WAS EASY

The notebook supplied by the authors loads their pre-trained models and reproduces part of the results in the paper. Furthermore, their algorithm for discovering classifier-related attributes, AttFind, is well outlined in their paper, making the notebook easy to follow. Lastly, the authors were responsive to our inquiries.

§ WHAT WAS DIFFICULT

A major difficulty was that the authors provide only a single pre-trained model, which means that verifying most of the main claims requires training code. Moreover, the paper leaves out information about their design choices and experimental setup. In addition, the authors do not provide an implementation of the models' architecture or training. Finally, the practical audience is limited by the resource requirements.

§ COMMUNICATION WITH ORIGINAL AUTHORS

We had modest communication with the original author, Oran Lang. Our discussion was limited to inquiries about design choices not mentioned in the paper. They were able to clarify the encoder architecture and some of their experimental setup. However, their training code could not be made available due to internal dependencies.

§ 1 INTRODUCTION
As the field of machine learning (ML) develops and its algorithms become more prevalent in society, concerns about the explainability of black-box models become pivotal. For problems that have a high societal impact, there is understandable apprehension towards trusting models that do not provide justification. For applications such as medical imaging and autonomous driving, there is a need for some level of human supervision. Even if a model has high performance, as neural networks do, without the ability for human interpretation its use will be limited.

In order to gain trust in systems powered by ML models, the models need to be interpretable and explainable. The two concepts are regularly used interchangeably, yet have subtle differences. Interpretability is the degree to which humans can understand the cause of a decision [10]. Deep neural networks, such as classifiers, are often perceived as "black boxes" whose decisions are opaque and hard for humans to understand. Explaining the decisions of classifiers can reveal model biases [8] and also provide support to downstream human decision-makers. On the other hand, explainability is linked to the internal logic of a model. It focuses on explaining the data representation within that network. Explainability implies interpretability; however, the implication is not bidirectional.

In recent years, there has been increasing attention to the field of explainability of deep network classifiers. Among the various forms of explanation, counterfactual explanations are gaining increasing attention [11, 2, 3]. To discover and visualize the attributes used to generate counterfactual explanations, a natural candidate is generative models. In [13] it was observed that StyleGAN2 [7] tends to contain a disentangled latent space (i.e., the "StyleSpace") which can be used to extract individual attributes. The authors based their proposed methodology [9] on this observation. Though [12] propose a similar architecture, Lang et al. assert that by integrating the classifier into the training of StylEx they can obtain principal attributes that are specific to the classification task. Additionally, they suggest that StylEx can be applied to a large variety of complex, real-world tasks, which makes its replicability especially intriguing.

Our work aims to reproduce the claims made by Lang et al. and confirm their results. Their paper reports in detail many experiments to justify their claims, but does not dive into their experimental setups for architecture and training. Since not all the information needed is available without contacting the authors, we argue that this paper cannot be considered fully reproducible.

To remedy the holes in reproducibility and aid future work that builds on or applies StylEx, we rebuild their proposed architecture and training algorithm after correspondence with the authors.
§ 2 SCOPE OF REPRODUCIBILITY
To determine the scope of reproduction, we quote Lang et al.'s main claims:

Claim 1: [They] propose the StylEx model for classifier-based training of a StyleGAN2, thus driving its StyleSpace to capture classifier-specific attributes.

Claim 2: A method to discover classifier-related attributes in StyleSpace coordinates, and use these for counterfactual explanations.

Claim 3: StylEx is applicable for explaining a large variety of classifiers and real-world complex domains. [They] show it provides explanations understood by human users.

To reproduce Claim 2, a trained model and the AttFind algorithm are sufficient, both of which are contained in the authors' notebook. Claim 1 requires a network trained conditioned on a classifier and a network trained without one, while Claim 3 requires multiple networks trained on multiple domains. However, to train these models, the architecture and training code are necessary, which, as stated previously, are not open source or thoroughly documented. In addition, the computational cost of training the models is high. Thus, to verify these claims our goals will be to:

* Reconstruct their architecture and port the pre-trained weights to PyTorch

* Evaluate whether the principal attributes we obtain correspond to the same features using their pre-trained weights

* Retrain on datasets of smaller images and analyze the scalability of their method using fewer training steps and a smaller architecture

* Conduct two user studies on visual coherence and distinctness to show that the extracted attributes are interpretable by humans

To ease reproduction for future work, we built the StylEx architecture in a different framework, to gain a deeper understanding of the model and become better equipped to tackle training. As an additional benefit, this contribution makes StylEx more accessible for classifiers trained in PyTorch.
§ 3 BACKGROUND
There have been many attempts to extract explanations from classifiers, most of which utilize heatmaps of important features. However, heatmaps struggle to visualize features that are not spatially localized, such as color or shape. Rather than identifying areas of interest, one can provide an explanation through a "what-if" example where the features are slightly altered. These forms of justification have been found to be more interpretable for non-localized features, and are known as counterfactual examples. However, producing appropriate counterfactual examples often requires domain knowledge and handcrafting. Lang et al. automate this and utilize machine learning to generate realistic counterfactual examples. This section will outline how they claim to achieve this with their two major contributions, StylEx and AttFind.
§ 3.1 STYLEX
The way Lang et al. generate examples is through a neural generative model they dubbed StylEx. StylEx expands on the popular generative adversarial network StyleGAN v2, which generates realistic images by creating competition between two networks.

One of these two networks, referred to as the generator, $G$, attempts to generate a realistic image. To this end, the generator samples from a latent space, $z \in \mathbb{R}^{n}$, with a simple probability distribution such as $z_{i} \sim \mathcal{N}(0,1)$. The sampled vector is pushed through a series of linear layers called the mapping network to create a new latent vector, $w$, with a more complex probability distribution. This vector is used as input to a number of StyleBlocks, based on the logarithmic resolution of the image. StyleBlocks consist of an affine transform and an upsampling layer. The affine transform, $A_{r}$, maps $w$ to yet another vector $s_{r}$, where $r$ denotes the block number or resolution of the block. The concatenation of all $s_{r}$ is known as the style, or attribute, vector, and the space that it spans is known as the StyleSpace. The attribute space is emphasized due to recent observations that it is less entangled than the latent space. The second network is the discriminator, $D$. This network is trained to differentiate between fake and real images. This forces the generator to slowly improve its creation of fake images. In this way, the discriminator can be seen as an adaptive loss function.

The flaw with the direct application of StyleGAN is that it generates from a random latent space. To explain a classification, we would like to condition it on a particular image of interest, but StyleGAN has no mechanism for extracting the attributes of an image. To fix this, Lang et al. added a third, encoding network to StylEx, $E$. Rather than using a randomly sampled $z$ and the mapping network to obtain $w$, StylEx uses the output of the encoder, $z = E(x)$, where $x$ is an input image. StylEx adds an extra loss condition that the reconstructed image, $x' = G(E(x))$, should be approximately $x$. Thus, the encoder combined with the affine transformations allows us to extract the attributes of an input image.

StylEx is not unique in adding an encoder to the StyleGAN to explain a classifier. However, other methods do not include the classifier in the training of the network. StylEx incorporates the classifier into training by appending its output to the encoded $z$ vector. This results in another loss condition, $C(x) \approx C(x')$.
§ 3.2 ATTFIND
Once the attributes of an image have been extracted, a counterfactual explanation can be built from the attributes with the most effect on the classifier's decision. Lang et al. propose attribute find (AttFind) to discover the most influential attributes. The algorithm adjusts each attribute one at a time by a fixed amount $d$ and observes its effect on the classification, $\Delta c_{s}$. The $k$ attributes with the highest $\Delta c$ create a local explanation for an image's classification. To approximate a global explanation, the principal attributes are determined by the mean $\Delta c$ across images in a set.
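To make this search concrete, the following is a minimal PyTorch-style sketch of the procedure; the names (`styles`, `generator`, `classifier`) and the single, fixed perturbation direction are our own simplifications, not the authors' implementation.

```python
import torch

@torch.no_grad()
def attfind_sketch(styles, generator, classifier, target_class, d=1.0, k=4):
    """Rank StyleSpace coordinates by their mean effect on a classifier.

    styles: (num_images, num_coords) StyleSpace vectors for a set of images.
    generator: maps style vectors to images; classifier: images -> class probs.
    """
    base = classifier(generator(styles))[:, target_class]      # baseline probabilities
    mean_delta = torch.zeros(styles.shape[1])
    for s in range(styles.shape[1]):                           # one coordinate at a time
        perturbed = styles.clone()
        perturbed[:, s] += d                                   # shift coordinate s by d
        probs = classifier(generator(perturbed))[:, target_class]
        mean_delta[s] = (probs - base).mean()                  # average effect over the set
    return torch.topk(mean_delta, k).indices                   # the k principal attributes
```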
§ 4 REPRODUCTION APPROACH

Reimplementing StylEx has been split into two main tasks to ease resource requirements. The first task consists of rebuilding StylEx in a different framework; the second is training the model from scratch. In this section, we discuss how we rebuilt the model architecture and training process. Additionally, we include details obtained through correspondence that are missing from the original paper.

§ 4.1 MODEL DESCRIPTIONS

To test Claim 1 and Claim 3, at least two models are necessary. Because only one pre-trained model is available, a new model needs to be trained. However, this is computationally expensive as it builds on StyleGAN${}^{1}$. This led us to evaluate reproducibility in two ways. Firstly, we recreate their architecture in PyTorch, using their pre-trained weights to bypass the training limitation. Secondly, we attempt to train a model from scratch using less complex datasets with smaller resolutions to verify the claims requiring multiple models. In the following sections, we explain how we reconstruct the StylEx architecture and training process.

§ 4.1.1 REBUILDING STYLEX

The authors' notebook includes a TensorFlow StylEx pre-trained on the FFHQ [6] dataset to find the attributes most influential in age classification.

Taking advantage of the pre-trained model's raw parameters, we reverse engineer the architecture of each component of StylEx and implement it in PyTorch. Subsequently, the pre-trained weights are transferred into the reconstructed StylEx to confirm the correct implementation of the structure. Transferring the pre-trained parameters from a TensorFlow model to a PyTorch model turned out to be challenging and non-trivial.

We start by building the architecture of the MobileNetV1 [5] classifier, as described in the summary of their model, in both TensorFlow and PyTorch. We follow this approach so that we can compare how the results of each layer differ depending on the framework. We notice that for 2D convolutional layers, PyTorch and TensorFlow pad the images differently, leading to different results. To address this, we add a ConstantPad2d layer in our PyTorch architecture before each convolution with a stride of 2. In addition, we change the default hyperparameters of PyTorch's BatchNorm2d to match the corresponding TensorFlow defaults.
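As an illustration of these two fixes, here is a minimal sketch; the channel sizes are placeholders rather than the actual MobileNet dimensions, and the padding pattern assumes TensorFlow's 'same' padding for a 3x3, stride-2 convolution on even-sized inputs.

```python
import torch.nn as nn

# TF 'same' padding with stride 2 pads asymmetrically (right/bottom) on
# even-sized inputs, so we pad explicitly before the strided convolution.
tf_like_block = nn.Sequential(
    nn.ConstantPad2d((0, 1, 0, 1), 0.0),  # (left, right, top, bottom)
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=0, bias=False),
    # PyTorch defaults are eps=1e-5, momentum=0.1; TensorFlow uses
    # epsilon=1e-3 and decay=0.99 (PyTorch momentum = 1 - decay = 0.01).
    nn.BatchNorm2d(64, eps=1e-3, momentum=0.01),
    nn.ReLU(),
)
```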
The next step is to follow the same procedure for the encoder and the StyleGAN components. We use the official StyleGAN2 implementation in PyTorch by NVlabs [7] and modify the initial architecture to align with the StylEx model. In particular, instead of only using the encoding of an image $x$ as input to the generator, we also concatenate the classifier's output logits. Additionally, their generator returns the StyleSpace, which contains classifier-specific attributes. For the encoder, we use the same architecture as StyleGAN2's discriminator. Finally, we transfer the pre-trained weights to our components.

The last step is to load the rebuilt StylEx model in the provided notebook to confirm that the conversion of the models is successful and to reproduce the results provided in the notebook.

§ 4.1.2 TRAINING THE MODEL

Lang et al. asserted that StylEx works for a wide range of classifiers and datasets. The results they show in their paper all use high-resolution images. High resolution comes with a high computational cost, as StylEx is built on top of a StyleGAN. High-resolution StyleGANs can take over a month to train on a single-GPU system. To tackle this, we train our model on a low-resolution MNIST dataset. In this way, we investigate whether their model works well on low-resolution datasets and relieve the computational requirements.
The training is as outlined in their paper. The loss function for the StylEx model is broken into seven parts: $\mathcal{L}_{x}$, $\mathcal{L}_{w}$, $\mathcal{L}_{LPIPS}$, $\mathcal{L}_{adv}$, $\mathcal{L}_{PLR}$, $\mathcal{L}_{KL}$, and $\mathcal{L}_{GP}$. $\mathcal{L}_{x}$ is the L1 loss between the real image, $x$, and the reconstruction of that image, $G(E(x))$. $\mathcal{L}_{LPIPS}$ is the Learned Perceptual Image Patch Similarity (LPIPS) of the two images; this loss is a metric other than raw pixel-value error for the similarity between two images. $\mathcal{L}_{w}$ is the L1 loss between the encoding of the original image, $w = E(x)$, and the encoding of the reconstructed image, $w' = E(G(E(x)))$. Collectively, these three losses make up the reconstruction loss, $\mathcal{L}_{rec}$, i.e.,

$$
\mathcal{L}_{rec} = \mathcal{L}_{w} + \mathcal{L}_{x} + \mathcal{L}_{LPIPS}.
$$

In the implementation, each loss term in $\mathcal{L}_{rec}$ had a weighting coefficient to even out the magnitude of their contributions. The weights are detailed further in Section 5.2.

$\mathcal{L}_{KL}$ is the KL divergence loss between the classification probabilities of the original image and the classification probabilities of its reconstruction. $\mathcal{L}_{GP}$ and $\mathcal{L}_{PLR}$ are the gradient penalty and path length regularization losses described in the WGAN-GP [4] and StyleGAN2 [7] papers, respectively. $\mathcal{L}_{adv}$ is the Wasserstein adversarial generator loss of $x'$. Finally, the discriminator's loss is the Wasserstein adversarial discriminator loss.
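Putting the reconstruction terms together, a minimal PyTorch-style sketch of $\mathcal{L}_{rec}$ could look as follows; the `lpips` package and the 0.1/1/0.1 weights from Table 2 are our assumptions about the setup, and for brevity we fold the classifier-logit concatenation into $E$.

```python
import torch.nn.functional as F
import lpips  # pip install lpips; an assumed off-the-shelf LPIPS implementation

lpips_fn = lpips.LPIPS(net="vgg")  # expects images roughly in [-1, 1]

def reconstruction_loss(E, G, x, w_w=0.1, w_x=1.0, w_lpips=0.1):
    """L_rec = w_w * L_w + w_x * L_x + w_lpips * L_LPIPS (weights per Table 2)."""
    w = E(x)                                 # encode the real image
    x_rec = G(w)                             # reconstruct it with the generator
    l_x = F.l1_loss(x_rec, x)                # pixel-space L1
    l_w = F.l1_loss(E(x_rec), w)             # latent-space L1
    l_lpips = lpips_fn(x_rec, x).mean()      # perceptual similarity
    return w_w * l_w + w_x * l_x + w_lpips * l_lpips
```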
${}^{1}$ StyleGAN can take on the order of 40 days on one GPU for high resolutions [6].
§ 5 EXPERIMENTAL SETUP
§ 5.1 DATASETS

The pre-trained models the authors offer are trained on the Flickr-Faces-HQ dataset [6]${}^{2}$. The dataset contains 70,000 high-quality PNG images at $1024 \times 1024$ resolution, with large variations in terms of age, ethnicity, and image background. They use it to find the top attributes which contribute to perceiving a person's age (young or old) or gender (male or female). They also preprocess the images by lowering the resolution to $256 \times 256$. The official dataset is unlabeled. It is not clear whether the authors' dataset is an internal, labeled Google version or an unofficially labeled dataset.

For training, the MNIST [1] dataset is used due to its simplicity. Only the examples with labels 8 or 9 are kept, and the resolution is increased to $32 \times 32$. MNIST was chosen because its images tend to remain recognizable for humans even when compressed to $16 \times 16$ or $8 \times 8$. Unfortunately, LPIPS relies on neural networks that have a fixed number of pooling layers. Without editing the reimplementation of LPIPS, the lowest possible resolution is 32.
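A minimal torchvision sketch of this preprocessing, assuming plain bilinear resizing (the paper does not state the interpolation used):

```python
from torchvision import datasets, transforms

# Upscale the 28x28 digits to 32x32, the smallest resolution LPIPS accepts here.
transform = transforms.Compose([
    transforms.Resize(32),
    transforms.ToTensor(),
])

mnist = datasets.MNIST(root="data", train=True, download=True, transform=transform)

# Keep only the digits 8 and 9 for the binary classification task.
keep = (mnist.targets == 8) | (mnist.targets == 9)
mnist.data, mnist.targets = mnist.data[keep], mnist.targets[keep]
```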
§ 5.2 HYPERPARAMETERS

A complete list of hyperparameters can be found in Table 2 (see Appendix C). A hyperparameter search was not performed for two reasons. First, the training time is long; even for very low resolutions, this is constraining. Second, the criterion for evaluating success is based on human judgment, making automated hyperparameter tuning unintuitive.

§ 5.3 COMPUTATIONAL REQUIREMENTS

Most of our experiments were conducted on Google Colab, alongside our own systems. For training our models we use Colab's NVIDIA Tesla K80 GPU. Our code is provided in the following GitHub repository: MLRC_2021_FALL-E358.

The basic architecture of the StyleGAN2 was adapted from NVlabs' GitHub repository. As previously mentioned, we modify the basic architecture to align with StylEx's generator and load Lang et al.'s pre-trained weights. The training code was adapted from labml.ai Annotated Paper Implementations' StyleGAN implementation.

Training the model on MNIST for 50,000 iterations takes on the order of nine hours on Colab. The time required for AttFind depends on the resolution, the latent dimension, and the number of images in the dataset. Finding the attributes of a single image took approximately one minute for an image with resolution 32 and a latent space of 514.
§ 6 RESULTS
§ 6.1 REBUILDING STYLEX RESULTS

To support Claim 1, we port their pre-trained models to PyTorch and test whether our results agree. In Figure 3 (see Appendix A), we compare the results from our PyTorch StylEx to their TensorFlow implementation. There are minor differences in the probabilities from the PyTorch classifier, which are likely caused by differences in default values or module implementations between the two frameworks.

§ 6.2 ATTFIND RESULTS

We are now equipped to test our PyTorch models with the AttFind method and inspect the principal attributes of the age classifier, meaning the attributes with the highest contribution to a young or old classification. To this end, we run the AttFind algorithm, with our classifier and generator as inputs, using the 250 latent variables of the FFHQ dataset. As can be seen in Figures 1 and 5 (see Appendix B), our model obtains the same attributes as in the original paper.

In addition, we implement the Independent selection strategy to generate image-specific explanations, as described in the original paper. This method is a local explanation that returns the top-k attributes affecting a classifier's decision for a single image rather than for the entire dataset. The results are shown in Figure 2.

These results support the authors' Claim 2, that AttFind discovers significant attributes for a classifier's decision. Notably, in Figure 1c the reported probability of the top-left image is 17% in the paper, while the probability we find with both our classifier and their notebook's classifier is 39%.
${}^{2}$ https://github.com/NVlabs/ffhq-dataset
Figure 1: Top 4 attributes for the perceived age classifier detected by our model. These images show how the probability of classifying a person as young or old changes based on each attribute. On the first column of each image, we display the probability of the person being classified as old, and on the second column the probability of them being classified as young.

Figure 2: Independent selection strategy. Top-5 detected attributes for explaining a perceived-age classifier for a specific image. The attributes obtained are different from those presented in Figure 1, which are computed based on the largest average effect over 250 images. The probabilities displayed correspond to the person being classified as old.
§ 6.3 QUANTITATIVE EVALUATION RESULTS
To validate the authors' Claim 3, that the obtained attributes are identifiable by humans, we conduct the two user studies explained in the paper. Both studies (classification and verbal description) aim to show that the top extracted attributes are distinct, visually coherent, and can be used as counterfactual explanations.

The material used for the classification study was obtained from our PyTorch StylEx model on the perceived gender classifier (top 6 attributes), and from the authors' supplementary material for the perceived age classifier (top 4 attributes). The verbal description study combines a mixture of attributes from our and the authors' models, explaining the Face and Cats/Dogs classifiers. Results for both studies were provided by 30 users (different per study).

Table 1 shows that the results we obtain are within a standard deviation of their results, verifying their contribution that StylEx provides attributes that are easily distinguishable by humans.

Table 3 depicts the three most common words used to describe the most prominent attribute that changes in the images (see Appendix D). By inspecting the results, we draw two main conclusions. First, for all coordinates except skin color (i.e., the 5th row in the Face (age/gender) classifiers), the majority of the users use the same word in their descriptions. Second, the most common word used is different per attribute, indicating that each attribute is unique. Our results agree with the results provided in the original paper.
§ 6.4 RECONSTRUCTION GENERALIZATION
To further investigate the proposed model, we create new latent variables using images from the FFHQ dataset on our architecture with their pre-trained weights. Then, we use the obtained latent variables to reconstruct the images using our pre-trained generator. Finally, we follow the same process using their architecture and compare the resulting images. Our StylEx reconstructs a clearer image, compared to their model, whose output is more blurred. This may occur because of some differences in formatting between the frameworks.

| | Theirs | Ours |
|---|---|---|
| Perceived Gender | 0.96 (±0.047) | 0.94 (±0.031) |
| Perceived Age | 0.983 (±0.037) | 0.978 (±0.025) |

Table 1: Classification study results. Correct identification of the top-6 attributes.
§ 6.5 TRAINING
The training proved quite volatile: $\mathcal{L}_{rec}$ would get stuck in local minima during training. Examples of the images reconstructed by the fully trained model can be found in Appendix E.

Lang et al. experimented with two training regimens. The first regimen was trained using only $E(x)$ as $w$, the input to the generator, and the above loss. The second regimen alternated between using $E(x)$ and a randomly generated encoding, $\bar{w}$. This $\bar{w}$ is created by applying a mapping network to $z$, where $z \sim \mathcal{N}(0^{n}, 1^{n})$ and $n$ is the dimensionality of $w$. For this randomly generated $\bar{x}' = G(\bar{w})$, only the adversarial loss is calculated. Training using $\bar{w}$ can be viewed as the same as training a vanilla StyleGAN. Because we are unsure which method was used for the results in their paper and notebook, we experimented with both. However, the first regimen was the only one that converged.
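A sketch of the alternating (second) regimen, with `mapping`, `E`, `G`, `D`, and the loss helpers as our own placeholders rather than the authors' code:

```python
import torch

def training_step(step, x, E, G, D, mapping, rec_loss, adv_loss, latent_dim=512):
    """Alternate between encoder-driven and randomly sampled w (second regimen)."""
    if step % 2 == 0:
        w = E(x)                                  # encoder path: full loss applies
        x_rec = G(w)
        return rec_loss(x, x_rec) + adv_loss(D(x_rec))
    z = torch.randn(x.shape[0], latent_dim)       # z ~ N(0, I)
    x_fake = G(mapping(z))                        # random path: adversarial loss only
    return adv_loss(D(x_fake))
```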
Though we were able to train a model, due to time constraints we were unable to fully investigate Claim 1.

Again due to time constraints, we were unable to run AttFind on the trained model to fully test Claim 3.
§ 7 DISCUSSION
Using the definition of reproducibility${}^{3}$ by the U.S. National Science Foundation (NSF) subcommittee on replicability in science, it is difficult to assess Lang et al.'s reproducibility. All details regarding the experimental setup, such as the hyperparameters, the hours of training, the number of steps, the labels of the datasets, etc., are omitted, so recreating the exact materials of the original investigators is difficult. Since this definition is an implication and we cannot satisfy its first condition, we cannot determine reproducibility.

Instead, we will use a looser definition of reproducibility: the ability of another researcher to test the paper's claims. We found that, given enough time, StylEx is seemingly reproducible. However, given a limited time budget such as our own, the paper is not fully reproducible. We can therefore only provide unit tests of its claims. The following sections discuss the results from Section 6 and to what degree they confirm reproducibility, claim by claim.
§ 7.1 CLAIM 1
The most difficult claim to investigate, given a limited time budget, is the effect of classifier-based training on the StyleSpace. The original paper trains three models: StylEx with and without integration of the classifier in training, and StyleGAN v2. We found that, once the training algorithm is implemented correctly, training all three models will take at least 24 hours for 50,000 iterations on one GPU, even for the simple MNIST dataset. The authors stated that it took approximately a week to train StylEx with 8 GPUs. Over two weeks of training time is beyond our time constraints.

In addition, we observed that training is volatile.${}^{4}$ The reconstruction error stagnates in a local minimum before suddenly dipping. However, the model was not always able to escape the local minima within 50,000 iterations. This suggests that, though their results are likely replicable, the replication may be stochastic. This again hinders reproducibility when time is limited.
§ 7.2 CLAIM 2
The claim that the authors document most thoroughly is Claim 2, their AttFind method. Because the method was implemented in the provided notebook, testing its reproducibility was easy. We were able to verify that, for the perceived age classifier, our model obtains the same top attributes. We conclude that their method can discover the most influential classifier-related attributes.

${}^{3}$ "Reproducibility refers to the ability of a researcher to duplicate the results of a prior study using the same materials as were used by the original investigator."

${}^{4}$ An example of successful training can be found here, and one where the model failed to converge here.

In addition to their notebook, we modified the AttFind method to find the principal attributes of a single image, as shown in Figure 2. This validated the sub-claim of AttFind that StylEx can provide image-specific explanations. Rather than finding the globally important attributes, the model can find the locally important attributes for a particular image.
§ 7.3 CLAIM 3
The authors claim that StylEx is applicable to a variety of real-world problems. Applicability can be interpreted in two ways: that it is possible to apply StylEx to a variety of domains, or that it is practical to do so. From what we have seen in Figures 1 and 2, it is possible to use StylEx to explain an age classifier, so it can explain a real-world problem. From Figure 6 (see Appendix E), we found that StylEx can be trained to, at minimum, reconstruct MNIST data, and thus cover multiple domains.

Though we have found that it is possible, we have also found that it is seemingly impractical. Every domain requires the model to be retrained, meaning every domain requires days or weeks of training.
§ 7.4 WHAT WAS EASY
The open-source notebook is very well structured, which, combined with the pseudo-code outlined in Algorithm 1 of their paper, made the AttFind method easy to replicate. In addition, the provided pre-trained models helped us derive some of the vague components of the StylEx model.

§ 7.5 WHAT WAS DIFFICULT

As we have already emphasized, there are many difficulties in reproducing this paper. StylEx is built on top of several previous papers, making the knowledge needed for implementation substantial. Lang et al. proposed a model, without providing code, that is computationally expensive, has volatile training behavior, and is sensitive to hyperparameters, which in our case were unknown. Even when scaling down the complexity of the model using smaller resolutions, the time cost of training exceeded what was feasible within our time constraints.

Taking shortcuts to subvert these difficulties came with a multitude of challenges. We found loading weights from TensorFlow into PyTorch deceptively complex and far from trivial due to differences between the frameworks. Even evaluating their notebook came with difficulties, as the dataset they trained on, FFHQ, does not officially have labels, so the details of their dataset were unknown.
§ 7.6 FUTURE WORK
The primary goal of this paper was to reproduce the work of Lang et al.; however, through reimplementing their code, we found two open avenues for future research. Firstly, the paper focused on general image explanations but did not show examples of misclassified data. It would be interesting to see what insights can be obtained through StylEx. Secondly, the paper compared StylEx only with StyleGAN v2 models. AttFind seems applicable to general autoencoders, and not specific to GANs. Viewing StylEx as an autoencoder, rather than a GAN, seems like a promising angle for scaling to a similar counterfactual generator.
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SNeep2MXn0K/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,269 @@
# Replication Study of "Fairness and Bias in Online Selection"
## Reproducibility Summary

## Scope of Reproducibility
In this paper, we work on reproducing the results obtained in the 'Fairness and Bias in Online Selection' paper (Correa, Cristi, et al., 2021). The goal of the reproduction study is to validate the 4 main claims made in Correa, Cristi, et al. (2021). The claims made are: (1) for the multi-color secretary problem, an optimal online algorithm is fair, (2) for the multi-color secretary problem, an optimal offline algorithm is unfair, (3) for the multi-color prophet problem, an optimal online algorithm is fair, and (4) for the multi-color prophet problem, an optimal online algorithm is less efficient relative to the offline algorithm.
To test if the results of the secretary algorithm generalize to other data sets, the proposed algorithms and baselines are applied to the UFRGS Entrance Exam and GPA data set (Castro da Silva, 2019).
## Methodology

The paper that has been reproduced includes a link to a repository containing C++ files for the algorithms that were implemented. For our experiments, we reimplemented the code in Python. Our goal was to reproduce the code in an efficient manner without altering the core logic. Using the Python code, all the experiments in the paper have been replicated, including some additional experiments to verify the claims made in Correa, Cristi, et al. (2021).
## Results

The reproduced results support all claims made in Correa, Cristi, et al. (2021). However, in the case of the unfair secretary algorithm (SA), some irregular results arise in the experiments due to randomness. This irregularity is also present in the original code.
## What was easy

The concepts behind the algorithms were straightforward. The existing code base provided a solid reference point to verify the results of the original paper by compiling and running the provided code.
## What was difficult

Implementing the prophet algorithm, in comparison to the secretary algorithm, was complex. Moreover, compiled C++ is considerably faster than interpreted Python, which needed to be taken into account when reproducing the algorithms. While it might be possible to execute transliterated code on a powerful machine, with the available resources the code would have taken over 96 hours to run. To tackle this problem, some of the data structures needed to be converted to NumPy arrays to decrease computation time.

## 1 Introduction
As more machine learning algorithms are used in decision-making circumstances, it is important to ensure that social norms are not violated. The social norm that serves as the pivot of this research is fairness, specifically 'fairness' in the use of selection models. The importance of fairness lies in avoiding undesirable biases. Selection models are models that take a finite number of agents as input and attempt to pick the best possible candidate (agent). The goal is to design algorithms that can fairly judge between agents regardless of any unfair bias.

In some real-life implementations of selection models, there is no clear overview of all agents. For example, in the online selection problem, the agents enter the algorithm sequentially. For every agent, a decision has to be made on whether this is the best possible agent. The complexity of this task lies in not having any knowledge of agents that might come in the future. As soon as the decision is made that an agent is the best fit, the algorithm should stop, as that agent is the optimal candidate (according to the model). Multiple attempts have been made to create the most accurate algorithm for these online selection models.

For this research, we reproduce the 'Fairness and Bias in Online Selection' paper (Correa, Cristi, et al., 2021). In this paper, the authors focus on 2 main problems: the secretary problem and the prophet problem. The secretary problem is a scenario for the sequential selection problem where an attempt is made to select the candidate with the highest value without knowing the values of the candidates to come. An immediate decision has to be made on each candidate: the candidate either gets picked or gets passed on. For the prophet problem the same assumptions are made as for the secretary problem, but we know the distributions the candidate values are drawn from. The probability of the candidate is based on these distributions. In the case of both problems, the goal is to stop at the best possible candidate based on the assigned probabilities.

In order to include a form of fairness in these models, a concrete definition needs to be given to fairness in online selection models. Based on the Correa, Cristi, et al. (2021) paper, fairness is defined as an unbiased evaluation of agents in a selection model. A selection algorithm is fair if it selects the best candidate with a probability that closely follows the original probability of the best candidate existing in that group. Along with fairness, efficiency has also been used as an evaluation metric in the original paper. Efficiency is a measure of how accurately the online algorithm picks the actual best candidate.

By creating a 'fair' version of these problems, the authors claim to have created a fair use of sequential single-item selection models. Through categorization of the agents by color, a distinction between the agents can be made. However, the qualities these agents possess might be different enough that they could be considered incomparable. So, by implementing a multi-color version of the sequential selection models and picking the best possible candidate while taking color into account, an 'unfair' comparison is avoided.
## 2 Scope of reproducibility

In this reproduction study, we focus on the authors' claims that the use of a multi-color version of the secretary and prophet problem makes these algorithms fair. The authors of the paper implement these algorithms on synthetic data sets and real-world data sets.

For our study, we put an effort into reproducing the results given by the paper. The goal of this reproduction is to either validate or refute the claims made in the paper. This effort has been fulfilled by re-implementing the publicly available code for the algorithms. This re-implementation is done in Python, in contrast to the C++ code provided by the authors. Most of the code has been written using NumPy to try to achieve roughly the same efficiency as the C++ code. The setup for the experiments, however, corresponds to that of the authors.

To show that the claims generalize well over differently distributed data sets, we run the proposed algorithms and baselines on the UFRGS Entrance Exam and GPA data set (Castro da Silva, 2019).

The claims made in the Correa, Cristi, et al. (2021) paper are:
- Claim 1: For the multi-color secretary problem, an optimal online algorithm is fair.

- Claim 2: For the multi-color secretary problem, an optimal offline algorithm is unfair.

- Claim 3: For the multi-color prophet problem, an optimal online algorithm is fair.

- Claim 4: For the multi-color prophet problem, an optimal online algorithm is less efficient relative to the offline algorithm.

To test these claims we use the algorithms mentioned above on 4 types of data sets. These data sets are further discussed in Section 3.3.
## 3 Methodology

In this section, our approach to the re-implementation of the experiments will be discussed and an additional experiment will be proposed.

### 3.1 Code

The code accompanying the paper is provided in C++. As required for this study, we reproduced the work in Python, and subsequently made use of the inherent Pythonic efficiencies. The provided code allowed for a smooth initial reproduction. However, many optimisations were required to decrease computation time.
### 3.2 Model descriptions

In the original paper, two types of single-item selection models are considered: the secretary algorithm and the prophet algorithm. Candidates are partitioned into different groups, which the authors refer to as colors. Every candidate has a numerical value that indicates the capabilities of that candidate; the authors refer to these indicators as values. Candidates arrive sequentially, and upon arrival, the algorithms decide whether the candidate is the best candidate overall. The best candidate is defined as the candidate with the highest value in the sequence of candidates. For clarity, the main parts of the Methodology and Results sections are divided per model.
#### 3.2.1 Secretary Algorithm

For the secretary algorithm, it is assumed that candidates arrive in uniformly random order. To verify the claims made by the authors, we compare the optimal online algorithm as proposed by Correa, Cristi, et al. (2021) to two baselines. Additionally, the algorithm and its baselines are applied to different data sets, either synthetically generated or composed from real-world data sets. The optimal online algorithm proposed by the authors (Fair secretary algorithm) is denoted formally as:
Algorithm 1 GroupThresholds(t)

---

Input: $\mathbf{t} \in [0,1]^{k}$, a threshold in time for each group

Output: $i \in [n]$, index of chosen candidate

/* assuming arrival times $\tau_{1} < \ldots < \tau_{n}$ */

for $i \leftarrow 1$ to $n$ do

if $\tau_{i} > t_{c(i)}$ then

if $i \succ \max\{ i' \mid \tau_{i'} \leq \tau_{i},\, c(i') = c(i) \}$ then

return $i$

end

end

end

---

where the input $\mathbf{t} = (t_{1}, \ldots, t_{k})$ is a vector of thresholds, one for each color $j \in [k]$. The algorithm first checks if candidate $i$ arrived after the threshold of its color, $t_{c(i)}$. If this condition is met, it accepts the candidate if its value exceeds the value of all previous candidates of color $c(i)$, indicating that it is the best candidate for that color.
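The following is a minimal NumPy transcription of Algorithm 1 under our own naming; it is a sketch of the pseudocode above, not the authors' code.

```python
import numpy as np

def group_thresholds(arrival_times, values, colors, t):
    """Sketch of Algorithm 1: fair multi-color secretary selection.

    arrival_times: arrival time in [0, 1] per candidate; values: candidate
    qualities; colors: group index per candidate; t: per-color time thresholds.
    """
    best_seen = {}                                  # best value observed per color so far
    for i in np.argsort(arrival_times):             # process candidates in arrival order
        c = colors[i]
        if arrival_times[i] > t[c] and values[i] > best_seen.get(c, -np.inf):
            return i                                # first to beat its color's history after t[c]
        best_seen[c] = max(best_seen.get(c, -np.inf), values[i])
    return None                                     # no candidate selected
```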
After having chosen the best candidate of each color, we are interested in selecting the best overall candidate. We denote the probability with which the best candidate of group $j$ is the best among all colors by $p_{j}$, which results in the vector $\mathbf{p} = (p_{1}, \ldots, p_{k})$ covering all colors. We use this in our experiments to verify the claims of the authors using equal and unequal values of $\mathbf{p}$ among colors.
#### 3.2.2 Prophet Algorithm

For the prophet algorithm, the same assumptions are made as for the secretary algorithm, but we know the distributions $F_{i}$ the candidate values are drawn from. In the paper, the authors propose two optimal online algorithms, specified in Figure 1, where $q_{1}, \ldots, q_{n}$ denote the marginal probabilities that the optimal fair offline algorithm picks candidates $i = 1, \ldots, n$. Figure 1a shows the general Fair prophet algorithm. This algorithm does not make any assumptions about the underlying probability distribution; it can be different for every candidate. Figure 1b shows the Fair independent and identically distributed prophet algorithm (Fair IID prophet algorithm). This algorithm assumes that the values of all candidates are drawn from the same distribution.
Algorithm 2 Fair General Prophet

---

Input: Distributions $F_{1}, \ldots, F_{n}$, and $q_{1}, \ldots, q_{n}$

Output: $i \in [n]$, index of chosen candidate

$s \leftarrow 0$

for $i \leftarrow 1$ to $n$ do

if $v_{i} \geq F_{i}^{-1}\left(1 - \frac{q_{i}/2}{1 - s/2}\right)$ then

return $i$

end

$s \leftarrow s + q_{i}$

end

---

(a) Fair prophet algorithm

Algorithm 3 Fair IID Prophet

---

Input: Distribution $F$

Output: $i \in [n]$, index of chosen candidate

for $i \leftarrow 1$ to $n$ do

if $v_{i} \geq F^{-1}\left(1 - \frac{2/(3n)}{1 - 2(i-1)/(3n)}\right)$ then

return $i$

end

end

---

(b) Fair IID prophet algorithm

Figure 1: Fair prophet algorithms proposed by the authors.
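A minimal Python transcription of the two pseudocode blocks, using SciPy distribution objects whose `.ppf` plays the role of the inverse CDF $F^{-1}$; the function names and the example value of $q$ are ours, not the authors'.

```python
from scipy.stats import uniform

def fair_general_prophet(values, dists, q):
    """Accept the first candidate whose value clears a quantile threshold of F_i.

    values: realized candidate values; dists: per-candidate distributions
    (objects exposing .ppf, the inverse CDF); q: marginal probabilities with
    which the optimal fair offline algorithm picks each candidate.
    """
    s = 0.0
    for i, (v, F, qi) in enumerate(zip(values, dists, q)):
        if v >= F.ppf(1 - (qi / 2) / (1 - s / 2)):
            return i
        s += qi
    return None  # no candidate was accepted

def fair_iid_prophet(values, F):
    """IID variant: every value is drawn from the same distribution F."""
    n = len(values)
    for i, v in enumerate(values):  # i is 0-based, matching (i - 1) in the pseudocode
        if v >= F.ppf(1 - (2 / (3 * n)) / (1 - 2 * i / (3 * n))):
            return i
    return None

# Example: the uniform setting with q_i = 1/n, i.e. assuming the fair offline
# algorithm picks each of n exchangeable candidates with equal probability.
F = uniform(0, 1)
values = F.rvs(size=50)
print(fair_general_prophet(values, [F] * 50, [1 / 50] * 50))
print(fair_iid_prophet(values, F))
```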
|
| 158 |
+
|
| 159 |
+
### 3.3 Data sets
|
| 160 |
+
|
| 161 |
+
The experiments involving the SA algorithm are conducted on two synthetic data sets and two real-world data sets. The data sets and their properties are summarised below:

1. Synthetic data set, equal $p$ values contains four different colors with 10, 100, 1000, and 10000 occurrences. The value of each element is chosen independently and uniformly at random from $\left\lbrack {0,1}\right\rbrack$.

2. Synthetic data set, general $\mathbf{p}$ values uses the same setup as data set 1, but with $\mathbf{p} = \left( {0.3},{0.25},{0.25},{0.2}\right)$.

3. Feedback maximization (Bank) contains records of direct marketing campaigns (phone calls) of a Portuguese banking institution (Moro et al., 2014). The clients are split into 5 colors by age: under 30, 31-40, 41-50, 51-60, and over 61 years old. The value of every client is the duration of the phone call. An equal $p$ of 0.2 was used for all colors.

4. Influence maximization (Pokec) contains records of the influence of users of the Pokec social network (Takac & Zábovský, 2012). We pre-process the data by dividing the users into 5 different colors according to their body mass index (BMI): underweight (BMI $< 18.5$), normal ($18.5 \leq \mathrm{BMI} < 25$), overweight ($25 \leq \mathrm{BMI} < 30$), obese type 1 ($30 \leq \mathrm{BMI} < 35$), and obese type 2 ($\mathrm{BMI} \geq 35$). The value of each user is their number of followers. Again, an equal $p$ of 0.2 was used for all colors. A minimal bucketing sketch follows this list.
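
For illustration, the BMI bucketing above can be expressed with `numpy.digitize`; the threshold values shown are the ones we adopted, since the original work does not fix them (see section 5).

```python
import numpy as np

# Right-open BMI bins: [0, 18.5), [18.5, 25), [25, 30), [30, 35), [35, inf)
bmi_edges = np.array([18.5, 25.0, 30.0, 35.0])
color_names = ["underweight", "normal", "overweight",
               "obese type 1", "obese type 2"]

bmi = np.array([17.0, 22.3, 27.8, 31.0, 40.2])  # example BMI values
colors = np.digitize(bmi, bmi_edges)            # integer color per user
print([color_names[c] for c in colors])
```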

### 3.4 Experimental setup

In this subsection, the experimental evaluation performed by the authors is discussed. As before, a distinction between the two problems is made for clarity. Additionally, an extra experiment is considered in which the secretary algorithm is evaluated on another real-world data set.

## Secretary experiments

The authors propose two baselines to compare the Fair secretary algorithm against. The first is the classic secretary algorithm (SA), which does not take the colors of the candidates into account. The second is the single-color secretary algorithm (SCSA), which picks a color proportionally to the $p$ values and then runs the classic secretary algorithm on the candidates of only that color. To evaluate the authors' claims, the three algorithms are evaluated on the four data sets discussed earlier.

The parameters of these experiments are the size of the data sets and the number of repetitions. For the experiments on the synthetic data sets (equal $p$ / general $p$) and the Bank data set, all available candidates were used in 20.000 repetitions. In the original paper, the authors used all (approximately 650.000) candidates of the Pokec data set in 1.000.000 repetitions. In our experiment, we had to limit these parameters due to time constraints: we only considered the first 40.000 candidates and used 40.000 repetitions.
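
The following is a minimal sketch (names are ours) of the trial loop used to collect the Pick and Max statistics reported in section 4; `algorithm` stands for any of the three secretary algorithms, whose implementations are omitted here.

```python
import numpy as np

rng = np.random.default_rng(42)

def run_secretary_trials(values, colors, algorithm, repetitions=20_000):
    """Per color j: count how often `algorithm` picks color j (Pick) and
    how often the picked element is the true maximum of color j (Max)."""
    k = int(colors.max()) + 1
    pick = np.zeros(k, dtype=int)
    best = np.zeros(k, dtype=int)
    # true per-color maxima, independent of arrival order
    color_max = np.array([values[colors == j].max() for j in range(k)])
    for _ in range(repetitions):
        order = rng.permutation(len(values))   # uniformly random arrivals
        i = algorithm(values[order], colors[order])
        if i is None:
            continue                           # no candidate was selected
        j = colors[order][i]
        pick[j] += 1
        best[j] += int(values[order][i] == color_max[j])
    return pick, best
```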

## Prophet experiments

For the prophet experiments, the Fair prophet algorithm and Fair IID prophet algorithm are evaluated against four baselines: the SC algorithm (Samuel-Cahn, 1984), the EHKS algorithm (Marx, 2021), the CFHOV algorithm (Correa, Foncea, et al., 2021) and the DP algorithm (Brown, 1972). These algorithms are described in further detail in section 4.2 of Correa, Cristi, et al. (2021).

For the experiments, two settings are implemented. In the first setting, 50 samples are taken from a uniform distribution over $\left\lbrack {0,1}\right\rbrack$; these samples function as the input stream. In the second setting, 1000 samples are taken from a binomial distribution with 1000 trials and a success probability of $p = {0.5}$ per trial. In order to compare these methods with the existing algorithms, we treat each candidate as a group of its own. For every algorithm, we repeat the experiment 50.000 times.
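
As an illustration, the two input settings can be generated as follows and fed to the `fair_iid_prophet` sketch from section 3.2.2; `scipy.stats.binom` supplies the quantile function $F^{-1}$ for the binomial setting.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(7)

# Setting 1: 50 i.i.d. samples from U[0, 1]; F^{-1} is the identity.
stream_uniform = rng.random(50)
pick_uniform = fair_iid_prophet(stream_uniform, lambda p: p)

# Setting 2: 1000 i.i.d. samples from Binomial(n=1000, p=0.5),
# with the quantile function supplied by scipy.
B = binom(1000, 0.5)
stream_binom = rng.binomial(n=1000, p=0.5, size=1000)
pick_binom = fair_iid_prophet(stream_binom, lambda p: B.ppf(p))
```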

## Extending to other data set (UFRGS) experiments

This subsection describes an experimental extension of the work of Correa, Cristi, et al. (2021). In our work, we have concluded that the secretary results claimed in the paper are reproducible. It is shown in section 4 that the Fair algorithm significantly outperforms the SCSA baseline. However, all real-world data sets used to support this claim have the same distribution of values for every color. The distributions for the Bank and Pokec data sets are shown in Figures 2a and 2b, respectively.

Our extension investigates the effect of applying the Fair algorithm to an unequally distributed real-world data set, the UFRGS Entrance Exam and GPA Data (UFRGS) data set (Castro da Silva, 2019). This shows whether the claims made by the authors generalize to such data sets. UFRGS contains entrance exam scores of students applying to a university in Brazil (Federal University of Rio Grande do Sul), along with the students' GPAs during the first three semesters at university. The data set also includes the gender of every student (male or female). The distribution of the data set is shown in Figure 2c. This experiment is a duplication of the original secretary experiments, but with the UFRGS data set as input. The gender of the students is used as color and their GPA as value. The experiment is repeated 20.000 times.

Figure 2: Value distributions of the different color groups in the real-world secretary algorithm data sets.

## 4 Results

The following paragraphs present the results for the experiments discussed in section 3.4: (1) the secretary experiments, (2) the prophet experiments, and (3) our extension.

## Secretary results

The plots in Figure 3 show our reproduction of the original secretary experiments over the four data sets. We find that all results are in line with the work of Correa, Cristi, et al. (2021). By construction of the fair algorithm proposed by the authors, and of the SCSA, we find that they pick elements from each color proportionally to the vector $\mathbf{p}$. From this, it can be concluded that the authors' Claims 1 and 2 are valid.

Figure 3: Reproduction of the original secretary experiments, comparing the Fair secretary algorithm to the baselines SA and SCSA over four data sets: (a) synthetic data set, equal $p$ values, (b) synthetic data set, general $p$ values, (c) feedback maximization data set (Bank), and (d) influence maximization data set (Pokec). Input denotes the number of elements from each color in the input. F-Pick and F-Max are the number of elements picked by the fair secretary algorithm and the number of them that are the maximum among the elements of their color. Similarly, U-Pick (S-Pick) and U-Max (S-Max) are the corresponding counts for SA (SCSA).

The authors claim that the quality of the solution of their algorithm is significantly higher than that of the SCSA. Table 1 shows our replication of this comparison. We find that our implementation reproduces the authors' claim that their method is superior to the SCSA. Small discrepancies in the results are found, which is due to the random nature of the algorithm. However, as mentioned earlier, after scrutinizing the distributions of the used data sets, we found that all of them have similarly distributed inputs. Therefore, we agree with the authors' claims under this restriction.

| Data set | Claimed Pick | Reproduction Pick | Claimed Max | Reproduction Max |
| --- | --- | --- | --- | --- |
| Synthetic (Equal $p$) | 1.305 (+30.5%) | 1.326 (+32.6%) | 1.721 (+73.1%) | 1.685 (+68.5%) |
| Synthetic (General $p$) | 1.309 (+30.9%) | 1.334 (+33.4%) | 1.630 (+63.0%) | 1.666 (+66.6%) |
| Bank | 1.347 (+34.7%) | 1.377 (+37.7%) | 1.760 (+76.0%) | 1.812 (+81.2%) |
| Pokec | 1.373 (+37.3%) | 1.368 (+36.8%) | 1.756 (+75.6%) | 1.810 (+81.0%) |
| UFRGS | - | 1.192 (+19.2%) | - | 1.364 (+36.4%) |

Table 1: Secretary experiment claims by the authors compared to our reproduced results.

## Prophet results

The patterns of the results in the original paper are reflected in our reproduction, as visualized in Figure 4. A major difference is that the scale of their y-axis is twice that of our reproduction. Because the shown plots are histograms of arrival positions, this could be attributed to a difference in bin size. The authors' report specifies using uniform distributions. Table 2 shows our replication of the average values chosen by each algorithm. While small differences exist, our reproduction closely mirrors the authors' results as obtained by running their code.

| Algorithm | Uniform: Claimed value | Uniform: Reproduction value | Binomial: Claimed value | Binomial: Reproduction value |
| --- | --- | --- | --- | --- |
| Fair PA | 0.501 | 0.497 | 0.297 | 0.273 |
| Fair IID | 0.661 | 0.654 | 0.389 | 0.364 |
| SC | 0.499 | 0.494 | 0.227 | 0.253 |
| EHKS | 0.631 | 0.625 | 0.362 | 0.339 |
| CFHOV | 0.752 | 0.755 | 0.513 | 0.408 |
| DP | 0.751 | 0.752 | 0.429 | 0.340 |

Table 2: Prophet experiment claims by the authors compared to our reproduced results (average chosen value per input distribution).

Figure 4: Reproduced results for the prophet experiments.

## Extending to other data set (UFRGS) results

Figure 5 shows the results of the experiment proposed in section 3.4. The pattern visible in the earlier secretary results still holds for a new, unequally distributed data set. However, looking at Table 1, a significant decrease in performance can be detected. The Bank and Pokec data sets scored +37.7% and +36.8% for F-Pick compared to S-Pick, whereas UFRGS only shows an increase of +19.2%. The difference is even larger when comparing F-Max to S-Max: Bank and Pokec show increases of +81.2% and +81.0%, while UFRGS only improves by +36.4%. We conclude that the performance increase of the Fair secretary algorithm is not as significant on an unequally distributed data set as the increase reported in the paper.

Figure 5: Secretary experiment applied to the UFRGS data set.

## 5 Discussion

In this research, we have tried to reproduce the work of Correa, Cristi, et al. (2021) as closely as possible. However, there are a few inconsistencies in the original code and paper which caused complications. These points, and our solutions to them where required, are briefly discussed in the following paragraph.

Firstly, as mentioned before, the BMI thresholds for the pre-processing of the Pokec data set were missing in the authors' work. This poses a problem, as slight alterations to these thresholds yield different results. We solved this by adopting concurring values from other research. Secondly, to limit the computation time of our reproduction, the size of the Pokec data set was limited from approximately 650.000 to 40.000 elements, and the number of repetitions for this experiment was decreased from 1.000.000 to 40.000. We opted for this solution as the distributions in the results did not change beyond these limits. Thirdly, the U-Pick/U-Max values in the secretary results of the original work are inconsistent due to randomness: changing the seed of the random number generator in the C++ code heavily changes the output of the SA algorithm (U-Pick/U-Max). The SA results could therefore be cherry-picked, as no further explanation was provided by the authors. Lastly, some inconsistencies are present in the paper, ranging from minor typos (e.g. the word "desbribed" instead of "described") to more serious mistakes, such as claiming that an increase to 1.721 equals +73.1%. A thorough reread of the paper would have caught these issues.

### 5.1 Reflection on our replication study

The algorithms used in the original paper were clear and straightforward. The existing C++ code of the authors provided a good starting point for the verification of the results.

However, our goal was to further validate these claims and to generalize them. We did this by reproducing the work of the original paper. Reproducing the work efficiently in another language, in our case Python, introduced some difficulties and took longer than expected. Executing directly transliterated code resulted in an excessive run time. To tackle this problem, some of the data structures needed to be converted to NumPy arrays to decrease computation time, which requires solid knowledge of NumPy and its data structures.

### 5.2 Communication with original authors

As certain parameters and cut-off values were not clearly defined in either the paper or the original code, we reached out to the authors by email to ensure a fair assessment of the reproduction. Examples of missing cut-off values are the BMI category thresholds for the pre-processing of the Pokec data set; these values are not fixed in the literature and differ depending on age and nationality. At the time of writing this report, we had not yet heard back from the authors. We resolved this by assuming certain values and explanations, all of which are documented in our paper.

## References

Brown, B. (1972). Great expectations: The theory of optimal stopping. Journal of the Royal Statistical Society: Series A (General), 135(4), 610-610.

Castro da Silva, B. (2019). UFRGS Entrance Exam and GPA Data. Harvard Dataverse. Retrieved from https://doi.org/10.7910/DVN/O35FW8 doi: 10.7910/DVN/O35FW8

Correa, J., Cristi, A., Duetting, P., & Norouzi-Fard, A. (2021). Fairness and bias in online selection. In International Conference on Machine Learning (pp. 2112-2121).

Correa, J., Foncea, P., Hoeksma, R., Oosterwijk, T., & Vredeveld, T. (2021). Posted price mechanisms and optimal threshold strategies for random arrivals. Mathematics of Operations Research.

Marx, D. (2021). Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA). SIAM.

Moro, S., Cortez, P., & Rita, P. (2014). A data-driven approach to predict the success of bank telemarketing. Decision Support Systems, 62, 22-31.

Samuel-Cahn, E. (1984). Comparison of threshold stop rules and maximum for independent nonnegative random variables. The Annals of Probability, 1213-1216.

Takac, L., & Zábovský, M. (2012). Data analysis in public social networks. Trends of Innovations, 1-6.

papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SNeep2MXn0K/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SSSGs3M7nRY/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,253 @@

# Reproducibility report: Hate Speech Detection based on Sentiment Knowledge Sharing

Anonymous Author(s)

Affiliation

Address

email

## Reproducibility Summary

This report summarises our efforts to reproduce the results presented in the ACL 2021 paper Hate Speech Detection based on Sentiment Knowledge Sharing by Zhou et al. (2021), as part of the ML Reproducibility Challenge 2021.

## Scope of Reproducibility

The main goal of this reproducibility attempt is to confirm the effectiveness of the hate speech detection framework proposed by Zhou et al. (2021). In particular, our efforts are directed at validating their main claim that sentiment knowledge sharing in a multi-task learning setup improves the performance of the model in predicting hate speech. Besides reproducing their main results, we perform repeated experiments to assess the variability of the scores and perform a hyperparameter search.

## Methodology

The authors provide a code-base which is available at https://github.com/1783696285/SKS. We reuse the available code, modifying it where necessary and integrating it with a few additional scripts for statistics computation and data preparation. Our code, data and results are available at https://anonymous.4open.science/r/repro-SKS-A.

## Results

Our findings diverge substantially from the results reported in the original paper. In particular, in our reproduction experiments, including sentiment features hurts the performance of the model in the hate speech detection task (by approximately 0.5 to 2.0 points of F1-score).

## What was easy

The paper provides some broad indications with respect to the training details, and the code-base is publicly available. Similarly, the data-sets are freely available and the authors provide links to them in their repository.

## What was difficult

The code-base is rather convoluted. Following the instructions included in the authors' repository resulted in a number of exceptions caused by formatting issues, missing code snippets and hard-coded values. Additionally, the lack of clear and comprehensive documentation contributed to an arduous code review and reproducibility effort.

## Communication with original authors

We managed to reach one of the authors and exchange a few messages over GitHub. However, despite multiple attempts, we did not manage to reach the authors by email and get an answer to our questions concerning some aspects of the implementation.

## 1 Introduction

Being able to quickly and reliably detect hate speech in an automatic manner is an important task. Due to the growing number of regulations concerning the use of hate speech and other forms of offensive language online, this topic has gained increasing interest both in academia and industry (Davidson et al., 2017; Schmidt and Wiegand, 2017; Basile et al., 2019; Yin and Zubiaga, 2021).

As in any supervised learning task, the availability and the size of labelled data-sets pose significant challenges. The task is made even more arduous by its multilingual and multi-domain nature. One way to alleviate such problems is to make use of additional data-sets from other, related tasks.

The study by Zhou et al. (2021) that we attempt to reproduce describes a multi-task learning framework for online hate speech detection that relies on the purportedly strong negative sentiment characterising this threatening form of communication. The model presented in the original paper, Sentiment Knowledge Sharing (SKS), is a multi-head attention network that predicts whether the input text contains hate speech or not. The main claims of the paper revolve around the facts that the model is (optionally) trained in a multi-task setting for sentiment analysis, and that it incorporates information from a dictionary of derogatory words through 'category embeddings' (see Section 3.1 for further details).

Based on experiments carried out on two benchmark data-sets, Zhou et al. (2021) claim that training a model relying both on sentiment information and category embeddings improves its performance in the task of hate speech detection.

## 2 Scope of reproducibility

The work of Zhou et al. (2021) is based on the intuition that hate speech detection and sentiment analysis are two highly correlated tasks and that hate speech is likely to arise from derogatory words. Our reproducibility attempt tries to verify the following claims:

- A model relying both on Sentiment Knowledge Sharing (SKS) and a dictionary of derogatory words scores better than several strong baselines where sentiment features are not considered.

- Ablating the sentiment knowledge component (-s) results in poorer performance, as the model would rely solely on derogatory-word features which, despite being likely indicators of hate speech, can make the model prone to false positives (e.g. I'm so fucking ready!).

- A model where both sentiment knowledge and derogatory word features are ablated (-sc) scores the worst performance.

Besides trying to reproduce the original results (see Table 3 in Zhou et al. (2021)), we perform a hyperparameter search to validate the values reported by the authors. Every experiment we perform is run multiple times to check whether any observed differences stand when the variability of the scores is taken into consideration.

## 3 Methodology

### 3.1 Model descriptions

The SKS model relies heavily on the Mixture-of-Experts (MoE) layer as introduced by Shazeer et al. (2017) and on the Multi-gate Mixture-of-Experts (MMoE) model presented by Ma et al. (2018). Its overall architecture consists of three macro-components: an input layer, a sentiment knowledge sharing layer and a gated attention layer.

#### 3.1.1 The input layer

In the input layer, word embeddings are used to encode the words of each target sentence. Specifically, every token ${w}_{i}$ of a given sentence $S = \left\{ {w}_{1},{w}_{2},\ldots,{w}_{i},\ldots,{w}_{N}\right\}$ is transformed into a real-valued vector ${x}_{i} \in {\mathbb{R}}^{d}$. Additionally, given that derogatory words represent a helpful marker of hate speech, each vector ${x}_{i}$ is concatenated with a category embedding vector ${c}_{i} \in {\mathbb{R}}^{{d}^{\prime}}$, such that ${x}_{i}^{\prime} = {x}_{i} \oplus {c}_{i}$.

Category embeddings are created on the basis of a dictionary of derogatory words, which is used to classify sentences into two categories: either containing derogatory words or not. The result of the classification is encoded as a vector ${c}_{i}$ and appended to each word embedding ${x}_{i}$, such that the encoded sentence is $S = \left\{ {x}_{1}^{\prime},{x}_{2}^{\prime},\ldots,{x}_{N}^{\prime}\right\}$.
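
A small illustrative sketch of this construction; the toy dictionary, names and random initialisation are our own assumptions, with dimensions taken from Section 3.3.

```python
import numpy as np

DEROGATORY = {"idiot", "scum"}        # toy stand-in for the dictionary
d, d_cat = 300, 100                   # word / category embedding sizes
category_table = np.random.randn(2, d_cat)  # row 0: clean, row 1: derogatory

def encode(tokens, word_vectors):
    """Concatenate each word embedding with the sentence-level
    category embedding, giving x'_i = x_i (+) c_i."""
    has_derog = int(any(t.lower() in DEROGATORY for t in tokens))
    c = category_table[has_derog]
    return np.stack([np.concatenate([word_vectors[t], c]) for t in tokens])

word_vectors = {t: np.random.randn(d) for t in ["you", "are", "scum"]}
print(encode(["you", "are", "scum"], word_vectors).shape)  # (3, 400)
```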

#### 3.1.2 The sentiment knowledge sharing layer

The sentiment knowledge sharing component relies on a multi-task learning strategy which, according to the authors, allows the model to take advantage of the high correlation between the tasks of sentiment analysis and hate speech detection. In the proposed implementation, the two tasks share a bottom hidden layer implemented following the Mixture-of-Experts (MoE) framework. The MoE layer is made up of multiple identical feature extraction units, each of which is composed of a multi-head attention layer using 4 heads and two feed-forward neural networks.

Each unit relies on the multi-head attention introduced by Vaswani et al. (2017), where the input matrix $\mathrm{X}$ is mapped to query $\mathrm{Q} \in {\mathbb{R}}^{{n}_{1} \times {d}_{1}}$, key $\mathrm{K} \in {\mathbb{R}}^{{n}_{1} \times {d}_{1}}$, and value $\mathrm{V} \in {\mathbb{R}}^{{n}_{1} \times {d}_{1}}$ using linear transformations. Given these three matrices, the attention is computed as follows:

$$
\operatorname{Attention}\left( \mathrm{Q},\mathrm{K},\mathrm{V}\right) = \operatorname{softmax}\left( \frac{\mathrm{Q}{\mathrm{K}}^{\top}}{{d}_{1}}\right) \mathrm{V} \tag{1}
$$

In the implementation proposed by Zhou et al. (2021), $\mathrm{K} = \mathrm{V}$ and ${d}_{1}$ corresponds to the number of hidden layer units. The $i$-th output of the multi-head attention mechanism is:

$$
{\mathrm{M}}_{i} = \operatorname{Attention}\left( \mathrm{Q}{\mathrm{W}}_{i}^{Q},\mathrm{K}{\mathrm{W}}_{i}^{K},\mathrm{V}{\mathrm{W}}_{i}^{V}\right) \tag{2}
$$
|
| 88 |
+
|
| 89 |
+
where the parameter matrix ${\mathrm{W}}_{i}^{Q} \in {\mathbb{R}}^{{n}_{1} \times \frac{{d}_{1}}{l}},{\mathrm{\;W}}_{i}^{K} \in {\mathbb{R}}^{{n}_{1} \times \frac{{d}_{1}}{l}}$ and ${\mathrm{W}}_{i}^{V} \in {\mathbb{R}}^{{n}_{1} \times \frac{{d}_{1}}{l}}$ . All outputs are then concatenated and multiplied by ${\mathrm{W}}^{O}$ to obtain the final feature representation ${\mathrm{H}}^{s} = \operatorname{concat}\left( {{\mathrm{M}}_{1},{\mathrm{M}}_{2},\ldots ,{\mathrm{M}}_{l}}\right) {\mathrm{W}}^{O}$ .

Finally, the authors use both maximum and average pooling (Shen et al., 2018) to fuse the feature representations, concatenating the two results:

$$
{\mathrm{P}}_{m} = \text{Pooling\_max}\left({\mathrm{H}}^{s}\right) \tag{3}
$$

$$
{\mathrm{P}}_{a} = \text{Pooling\_average}\left({\mathrm{H}}^{s}\right) \tag{4}
$$

$$
{\mathrm{P}}_{s} = \operatorname{concat}\left({\mathrm{P}}_{m},{\mathrm{P}}_{a}\right) \tag{5}
$$
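
In code, this fusion reduces to pooling over the token axis and concatenating, e.g. (continuing the sketch above):

```python
import numpy as np

def fuse(H_s):
    P_m = H_s.max(axis=0)     # Eq. (3): Pooling_max over tokens
    P_a = H_s.mean(axis=0)    # Eq. (4): Pooling_average over tokens
    return np.concatenate([P_m, P_a])   # Eq. (5): P_s, twice the feature width

print(fuse(np.ones((20, 400))).shape)   # (800,)
```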

#### 3.1.3 The gated attention layer

The third macro-component is a gated attention mechanism which selects a subset of the feature extraction units from the previous layer. The output ${g}^{k}\left( x\right)$ of a specific gate $k$ corresponds to the probability of selecting a specific unit. The units selected through this process are then weighted and summed to obtain the final representation ${f}^{k}\left( x\right)$ of a given sentence, which is passed to a feed-forward neural network to detect hate speech:

$$
{g}^{k}\left( x\right) = \operatorname{softmax}\left({\mathrm{W}}_{gn} * \operatorname{gate}\left( x\right)\right) \tag{6}
$$

$$
{f}^{k}\left( x\right) = \mathop{\sum }\limits_{i = 1}^{n}{g}^{k}{\left( x\right)}_{i}\,{f}_{i}\left( x\right) \tag{7}
$$

$$
{y}_{k} = {h}^{k}{f}^{k}\left( x\right) \tag{8}
$$
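
Eqs. (6)-(8) amount to a softmax-weighted mixture of the expert outputs per task $k$. A sketch, under our own simplifying assumption that $\operatorname{gate}(x)$ is the identity on the input features:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def task_output(x, experts, W_gn, h_k):
    g = softmax(W_gn @ x)                                      # Eq. (6): one weight per expert
    f = sum(gi * expert(x) for gi, expert in zip(g, experts))  # Eq. (7): weighted sum
    return h_k(f)                                              # Eq. (8): task-specific head

# toy usage: 3 experts, input dim 8, feature dim 4
rng = np.random.default_rng(0)
experts = [lambda x, W=rng.normal(size=(4, 8)): W @ x for _ in range(3)]
W_gn, x = rng.normal(size=(3, 8)), rng.normal(size=8)
print(task_output(x, experts, W_gn, lambda f: f.sum()))
```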

### 3.2 Datasets

Following the steps of Zhou et al. (2021), we test the model and report results on two public hate speech data-sets: SemEval 2019 Task 5 (SE; Basile et al., 2019)${}^{1}$ and Davidson (DV; Davidson et al., 2017).${}^{2}$

The SE data-set contains a total of 13,000 tweets and is divided into training-, validation- and test-set, consisting of 9,000, 1,000 and 3,000 samples, respectively. The training-set contains 3,783 instances of hate speech and 5,217 instances that are not. In the validation-set, 427 samples are classified as hate speech and 573 as non-hate speech. The test-set is split into 1,260 hate speech samples and 1,740 non-hate speech ones.

The DV data-set contains a total of 24,783 manually labelled tweets. Each tweet is assigned to one of three classes: hate speech (1,430), offensive language (19,190) or neither (4,163). Zhou et al. (2021) merge the last two classes together and obtain 1,430 tweets classified as hate speech and 23,353 classified as non-hate speech.
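
The corresponding label merge is a one-liner. In the sketch below, the file name and the `class` column (0 = hate speech, 1 = offensive language, 2 = neither) are assumptions about the released CSV, not a description of the authors' preprocessing:

```python
import pandas as pd

df = pd.read_csv("davidson_labeled_data.csv")
df["label"] = (df["class"] == 0).astype(int)   # 1 = hate speech; offensive and neither merged
print(df["label"].value_counts())              # expected: roughly 23,353 vs 1,430
```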

Finally, the model also relies on a sentiment data-set obtained from Kaggle.${}^{3}$ The original authors only use the training-set, which contains 31,962 tweets, 2,242 of which are classified as having a negative sentiment, while the remaining 29,720 have a positive one.

### 3.3 Hyperparameters

We begin our reproducibility attempt relying solely on the hyperparameters reported in the original paper. Our results are summarised in Table 1.

In the input layer, all word vectors are initialised using GloVe Common Crawl embeddings (840B tokens) with a dimension of 300, while category embeddings are randomly initialised and have a dimension of 100.

In the sentiment knowledge sharing layer, the multi-head attention mechanism is implemented using 4 heads. The two feed-forward networks in each expert unit have one layer with 400 units and two layers with 150 units, respectively. It is worth noting, however, that the original paper reports 200 units for the second network, contrary to what we see in the implementation. After each layer a dropout rate of 0.1 is used.

The model is trained with mini-batches of 512 instances for 15 epochs, using the RMSprop optimiser and a learning rate of 0.001. The original authors report using learning rate decay and early stopping to avoid overfitting.

#### 3.3.1 Hyperparameter tuning

The original work does not provide any details regarding hyperparameter tuning, and upon contacting the authors to inquire about it we received no answer. Thus, we attempt to tune the learning rate (${10}^{-6}$ to ${10}^{-1}$, on a $\log$ scale), batch size (32 to 1024, on a ${\log }_{2}$ scale) and dropout rate (0.0 to 0.4 with increments of 0.1) on the SE data-set using grid-search with 60 epochs, and find that the respective optimal values are 0.001, 256 and 0.0. However, the values indicated in the original paper perform similarly. Considering the model variation (see Table 1 and Figure 1), the differences can easily be attributed to model variance due to random initialisation.
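
For reference, the grid we searched can be written down compactly; `train_and_eval` is a hypothetical placeholder for one 60-epoch training run on the SE data-set returning the validation score:

```python
import itertools

learning_rates = [10.0**e for e in range(-6, 0)]   # 1e-6 ... 1e-1, log scale
batch_sizes    = [2**e for e in range(5, 11)]      # 32 ... 1024, log2 scale
dropout_rates  = [0.0, 0.1, 0.2, 0.3, 0.4]

best_cfg = max(itertools.product(learning_rates, batch_sizes, dropout_rates),
               key=lambda cfg: train_and_eval(*cfg, epochs=60))   # hypothetical helper
```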

### 3.4 Experimental setup and code

We try to reproduce the results presented in Table 3 of the original paper (Zhou et al., 2021). For both data-sets the authors train three models: SKS, which relies on both sentiment knowledge sharing and category embeddings; -s, a model where the sentiment knowledge sharing component is ablated; and -sc, a model without both sentiment knowledge sharing and category embeddings. We rely largely on the TensorFlow implementation (Abadi et al., 2015) made available by the authors, modifying it where necessary and integrating it with a few additional scripts for statistics computation and data preparation.

For each result reported in the original paper we repeat the corresponding experiment 10 times. Specifically, for each repetition the model is reinitialised and trained over 15 epochs. We keep the results from the best epoch of each repetition and then compute the average and the standard deviation of the originally employed measures, i.e. accuracy and macro-F1 score for the SE data-set, and accuracy and weighted-F1 score for the DV data-set.
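
The aggregation itself is a two-liner; the scores below are placeholder values standing in for the ten best-epoch results of one configuration:

```python
import numpy as np

runs = np.array([60.2, 61.5, 59.8, 62.3, 60.9, 61.1, 58.7, 62.0, 60.4, 61.0])
print(f"{runs.mean():.2f} (±{runs.std(ddof=1):.2f})")   # sample standard deviation
```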

---

${}^{1}$ http://hatespeech.di.unito.it/hateval.html

${}^{2}$ https://github.com/t-davidson/hate-speech-and-offensive-language/tree/master/data

${}^{3}$ https://www.kaggle.com/dv1453/twitter-sentiment-analysis-analytics-vidya

---

Given that the DV data-set is highly unbalanced, the authors use a 5-fold cross-validation approach to measure the performance of each model. We follow the same approach and adopt the 10-repetition strategy for each fold.

Our code and data, as well as the final and intermediate per-iteration results, are available at https://anonymous.4open.science/r/repro-SKS-A.

### 3.5 Computational requirements

To run our experiments we use an NVIDIA TITAN Xp with 12 GB of memory. Training the models on the SE data-set took approximately 24 minutes for the SKS model and 7 minutes for each of the -sc and -s models. On the DV data-set, training took approximately 3 hours for the SKS model and 2 hours for each of the -sc and -s models. The hyperparameter tuning step on the SE data-set took approximately 33 hours.

## 4 Results

We report the original results alongside the ones we obtained using the specified hyperparameters in Table 1. Comparing our findings with those reported by the original authors, we observe a discrepancy in all three measures, accuracy, macro-F1 and weighted-F1 score, for both data-sets. In the SE data-set, the most notable differences concern the results of the SKS and -s models. In the DV data-set, there are some noteworthy discrepancies only with respect to the SKS model.

Looking at the mean scores we obtain on the SE data-set, the SKS model does not outperform both ablated versions -s and -sc, thus contradicting the first and second claim in Section 2. In fact, while SKS obtains an accuracy of 61.04 and a macro-F1 score of 60.88, the -s model outperforms it, reaching an accuracy and a macro-F1 score of 64.17 and 63.05, respectively. On the other hand, the third claim appears to hold: with an accuracy of 60.52 and a macro-F1 score of 60.47, the -sc model is the one registering the worst performance.

Turning to the DV data-set, none of the claims appear to be substantiated by our findings either. The SKS model scores the lowest, with an accuracy of 93.63 and a weighted-F1 score of 93.62, while the ablated versions -s and -sc register similar values for both metrics, with an accuracy of 93.99 and 93.98 and a weighted-F1 score of 94.11 and 94.12, respectively.

| Model | DV Acc (Orig.) | DV Acc (Repro.) | DV F1-weighted (Orig.) | DV F1-weighted (Repro.) | SE Acc (Orig.) | SE Acc (Repro.) | SE F1-macro (Orig.) | SE F1-macro (Repro.) |
|-------|----------------|-----------------|------------------------|-------------------------|----------------|-----------------|---------------------|----------------------|
| -sc   | 94.0           | 93.98 (±1.61)   | 94.0                   | 94.12 (±1.73)           | 59.6           | 60.52 (±1.44)   | 59.3                | 60.47 (±1.40)        |
| -s    | 94.5           | 93.99 (±1.49)   | 94.3                   | 94.11 (±1.58)           | 61.3           | 64.17 (±0.99)   | 61.3                | 63.05 (±0.63)        |
| SKS   | 95.1           | 93.63 (±2.09)   | 96.3                   | 93.62 (±2.37)           | 65.9           | 61.04 (±1.81)   | 65.2                | 60.88 (±1.64)        |

Table 1: For each data-set and performance measure we report each model's original (Orig.) results on the left and the reproduced (Repro.) ones on the right, including the standard deviation of the reproduced score.

For a visual inspection of the results presented in Table 1, we also plot box plots of the scores obtained in multiple reproduction attempts in Figure 1. Despite some overlap in the range of the obtained scores, the median scores of the SKS model are lower than those of the ablated versions. The figure also shows that most of the scores reported in the original paper fall within $\pm 1.5$ standard deviations from the mean of the scores of the multiple reproduction experiments. However, for both data-sets, the original scores of SKS are substantially above this range.

### 4.1 Alternative metrics

The original paper reports macro- or weighted-averaged F1 scores, with the motivation of comparability to earlier research on these data-sets. However, the task at hand is a binary classification task with a clear positive class. Incorporating the negative class score through averaging makes it difficult to assess the success of the classifier on the task. Furthermore, relying on weighted averaging without a justified set of weights, but instead using weights proportional to the support of each class, rewards classifiers with majority bias even further.
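
The gap between the averaged scores and the plain positive-class scores is easy to inspect with scikit-learn; the arrays below are placeholder predictions with 1 marking hate speech:

```python
from sklearn.metrics import f1_score, precision_score, recall_score

y_true = [1, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 0, 0, 0, 0, 0, 1, 1]

print(f1_score(y_true, y_pred, average="macro"))     # averaging used for SE
print(f1_score(y_true, y_pred, average="weighted"))  # averaging used for DV
print(f1_score(y_true, y_pred, average="binary"))    # positive-class F1, as in Figure 2
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred))
```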

Figure 1: Box plots of (a) F1-weighted on the DV data-set and (b) F1-macro on the SE data-set, from repeated experiments with different initialisations. Circles represent the scores reported in the original article. The red square in (b) indicates the single outlier for the -s option on this data-set; the rest of the scores are equal to the median. Note that the $y$-axes do not have the same scale.

Figure 2: Box plots of binary precision (blue), recall (orange), and F1-scores on (a) the DV data-set and (b) the SE data-set, from repeated experiments with different initialisations. Note that the $y$-axes do not have the same scale.

To present a more interpretable impression of the success of each model, and to provide further insight into the differences based on model ablations and alterations, Figure 2 depicts the distribution of precision and recall for the different reproduction experiments carried out on the two data-sets.

The plots indicate that, despite a large overlap, jointly learning sentiment analysis (SKS) improves the precision of hate speech detection on the DV data-set. Despite having a negative impact on the recall, this also yields a slightly better median F1 score. The effect of the sentiment task is mostly negative on the SE data.

## 5 Discussion

The reproducibility results from Section 4 do not support the claims outlined in Section 2 for either data-set. In particular, our findings suggest that the multi-task learning approach implemented by the authors to allow the SKS model to extract sentiment features and apply them to hate speech detection does not yield the expected results. However, considering the lack of comprehensive documentation, the convoluted structure of the code-base and the insufficient communication with the original authors, it is hard to draw definitive conclusions. In fact, there are a number of plausible explanations as to why our findings diverge from those reported in the original paper.

For instance, considering the slight difference between the optimal hyperparameters we found and those reported by Zhou et al. (2021), and the large variation of the model scores, one could speculate that, at least for part of the experiments, the authors employed some parameters which have not been reported. This would also explain the difference between some of the values indicated in the paper and those used in the provided implementation.

Another explanation could lie in the fact that we inadvertently deviated from the original implementation while trying to fix some of the issues we faced in running the code-base. Whenever information was missing or not completely clear, assumptions had to be made, and we tried to approximate the original results by trial and error. This was the case for the -sc model, where the procedure to ablate the category embeddings component was not given and the answer we received from the authors did not help us overcome the problem.

The main intuition behind the original study is that hate speech typically carries a negative sentiment; hence, the relation between the two tasks would help the model to identify hate speech better (arguably by increasing recall). A manual inspection of the data-sets, however, suggests that it would actually be surprising for sentiment information to help hate speech detection here. Both data-sets are collected using keywords that are likely to retrieve hate speech, and the negative class consists of posts that are either offensive (but not hate speech) or written by people counteracting earlier offensive content. Hence, the sentiment of the negative class is not necessarily positive and helpful for discriminating hate speech in these data-sets. In a more realistic environment, however, the authors' proposal may be correct: given more 'normal' negative class instances, learning sentiment analysis jointly is likely to inform hate speech detection.${}^{4}$ The binary evaluation metrics presented in Figure 2 indicate that, at least on the DV data-set, the addition of sentiment may have some positive effects. Understanding the reasons for these differences, and improving the joint learning model, is a possible direction for future research.

### 5.1 What was easy

The paper provides some broad indications with respect to the training details, and both the data-sets and the code-base are open-sourced.

### 5.2 What was difficult

The lack of comprehensive documentation, the convoluted structure of the code-base and the insufficient communication with the original authors contributed to an arduous code review and reproducibility effort.

### 5.3 Communication with original authors

We first tried to review and run the provided code-base by ourselves. However, after stumbling over a number of issues related to how the data-sets were being processed and how to run the -sc model with category embeddings ablated, we decided to reach out to the authors through GitHub. One of the corresponding authors provided some indications which, unfortunately, did not help us overcome the problems at hand.

We also tried to contact the authors by email twice, inquiring about some aspects of the model implementation as well as the procedure they followed to tune the hyperparameters. However, we never received an answer.

---

${}^{4}$ We leave testing this assumption to future work, possibly the final version of this paper if it is accepted for publication.

---

## References

Abadi, M., A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org.

Basile, V., C. Bosco, E. Fersini, D. Nozza, V. Patti, F. M. Rangel Pardo, P. Rosso, and M. Sanguinetti (2019, June). SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter. In Proceedings of the 13th International Workshop on Semantic Evaluation, pp. 54-63. Association for Computational Linguistics.

Davidson, T., D. Warmsley, M. Macy, and I. Weber (2017, May). Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, Volume 11, pp. 512-515.

Ma, J., Z. Zhao, X. Yi, J. Chen, L. Hong, and E. H. Chi (2018). Modeling task relationships in multi-task learning with multi-gate mixture-of-experts. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, New York, NY, USA, pp. 1930-1939. Association for Computing Machinery.

Schmidt, A. and M. Wiegand (2017). A survey on hate speech detection using natural language processing. In SocialNLP@EACL.

Shazeer, N., A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean (2017). Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.

Shen, D., G. Wang, W. Wang, M. R. Min, Q. Su, Y. Zhang, R. Henao, and L. Carin (2018). On the use of word embeddings alone to represent natural language sequences.

Vaswani, A., N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin (2017). Attention is all you need. arXiv.

Yin, W. and A. Zubiaga (2021). Towards generalisable hate speech detection: a review on obstacles and solutions. PeerJ Computer Science 7, e598.

Zhou, X., Y. Yong, X. Fan, G. Ren, Y. Song, Y. Diao, L. Yang, and H. Lin (2021, August). Hate speech detection based on sentiment knowledge sharing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 7158-7166. Association for Computational Linguistics.
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SVx46hzmhRK/Initial_manuscript_md/Initial_manuscript.md
ADDED

@@ -0,0 +1,238 @@
# Replication Study of DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks

Anonymous Author(s)

Affiliation

Address

email

## 1 Summary

### 1.1 Scope of reproducibility

In this paper we attempt to reproduce the results found in "DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks" by Breugel et al. [2]. The goal of the original paper is to create a model that takes in a biased dataset and outputs a debiased synthetic dataset that can be used to train downstream models to make unbiased predictions both on synthetic and real data.

### 1.2 Methodology

We built upon the (incomplete) code provided by the authors to repeat the first experiment of [2], which involves removing existing bias from real data, and the second experiment, where synthetically injected bias is added to real data and then removed.

### 1.3 Results

We reproduced most of the data utility results reported in the first experiment for the Adult dataset. The fairness metrics, however, generally match the original paper but are not numerically comparable in absolute or relative terms. For the second experiment, we were unsuccessful in reproducing the results found by the authors. We note, however, that we made considerable changes to the experimental setup, which may make it difficult to perform a direct comparison of the results.

### 1.4 What was easy

The smaller size and tabular format of both datasets allowed for quick training and model modifications.

### 1.5 What was difficult

There are several possible interpretations of the paper on both a methodological and a conceptual level. Reproducing the experiments required rewriting or adding large sections of code. Given these multiple interpretations, it was difficult to be confident in the reproduction. In addition, several results found by the authors appear to be counterintuitive, such as algorithms debiasing without being designed to do so, and sometimes outperforming debiasing algorithms on the same dataset.

### 1.6 Communication with original authors

We sent two emails to the authors describing our issues. We received a reply with a few extra files, but no direct answer to our content questions.

## 2 Introduction

It is broadly acknowledged that real world data contains bias. Despite efforts to make data collection more equitable and representative, a myriad of challenges remain. The effects of bias are well understood, as biased data can lead to the under-representation of particular demographics, such as the case of political representation in the United States Census [7]. As technology progressed to the emergence of machine learning (ML) models, the same challenges persisted, as ML models adopted the biases of the data and humans who created them. Models trained on biased data can pass bias downstream to various other applications, a phenomenon referred to as algorithmic bias [5]. Such models have the potential not only to perpetuate but to exacerbate social inequality, yet bias is omnipresent in everything that humans touch. Hence, there is a clear and present need for methods that can utilize biased data to produce unbiased results.

## 3 Background

The notion of using Generative Adversarial Networks (GANs) to increase fairness within artificial intelligence is broadly supported by the literature. Various models exist, such as FairGAN [10], GANSAN [1], and Fairness GAN [9], to name but a few. Notably, fairness efforts have typically recognized a fairness-accuracy trade-off assumption, where a fairer algorithm comes at the cost of accuracy. However, recent work has challenged these assumptions, finding that the accuracy cost of fairness is negligible in some circumstances [8]. Nonetheless, given the increased awareness of the nefarious effects of data bias, many research efforts have been directed towards the debiasing of data and other attempts to create fairer artificial intelligence.

### 3.1 DECAF premise

One such effort, and the subject of the present study, is DEbiasing CAusal Fairness (DECAF) [2]. DECAF takes a distinct approach to debiasing data, explicitly approaching fairness from a causal standpoint with a goal of downstream model fairness. Three broad approaches to fairness may be identified: (1) the preprocessing approach, where the characteristics of the input data are changed to suppress undesirable biases [2]; (2) the algorithmic modification approach, where the learning algorithm itself is adapted to reduce bias [4]; and (3) the postprocessing approach, where the output of a model is manipulated to obtain the desired level of fairness [6]. The DECAF approach falls in the first category, preprocessing, because it attempts to remove bias from the input data and subsequently from all downstream models.

The DECAF model is a generative adversarial network (GAN) that utilizes the causal structure of directed acyclic graphs (DAGs) to remove bias from real data. The three critical assumptions of the DECAF method are that (1) the data generating process is represented by a DAG, (2) the DAG is causally sufficient, and (3) the DAG is known for a given dataset. DAGs are central to the method, as it is through edge manipulation that debiasing is performed.

The model may be separated into two stages. In the first (training) phase, the model learns the causal conditionals of the dataset from its DAG. In the second (inference) phase, the data is debiased through DAG modification. Each fairness level defines a unique set of edge removals from the original DAG, resulting in a new, intervened DAG. These intervened DAGs are given to the model to generate synthetic, fair datasets from the original data. The synthetic datasets have similar distributions to the original data, but avoid bias. Because the method debiases at inference time, retraining the model is not required when using different fairness measures, thus providing inference-time fairness.

Once DECAF generates a synthetic and unbiased dataset, a simple multilayer perceptron (MLP) is trained on this synthetic data to create an unbiased classifier that can be used both on the original data and in other settings. Because the data used for training the MLP has already been debiased, the authors claim that the MLP, or any chosen downstream model, is guaranteed to be fair, since it does not incorporate any of the bias from the original training data; this is a hallmark of the preprocessing approach to fairness.
|
| 54 |
+
|
| 55 |
+
### 3.2 Fairness standards
Three definitions of algorithmic fairness are used in the paper, each corresponding to a unique modified DAG. The most lenient standard is the commonly used Fairness Through Unawareness (FTU) definition, which entails that the protected variable, $A$, is not explicitly used by the model to predict the label, $\widehat{Y}$. While widely used because it avoids direct discrimination, FTU fails to eliminate indirect discrimination.
A more stringent definition of fairness is Demographic Parity (DP), which requires that the classification probability be independent of the protected attribute, i.e. if the protected attribute is gender, all gender groups have the same selection rate. The DP definition is considered very strict because it potentially under-utilizes feature differences between groups in the process of blocking indirect discrimination.
Conditional Fairness (CF) lies in the middle ground between the first two definitions, requiring that the selection rates of the groups defined by the protected attribute be equal when conditioned on some explanatory variable(s) chosen from prior knowledge. These standards correspond to the DECAF variants DECAF-FTU, DECAF-CF, and DECAF-DP; a fourth variant, DECAF-ND, applies no debiasing. The fairness of each model is tested against the FTU and DP metrics.
## 4 Scope of reproducibility and claims
The authors claim that DECAF allows for the generation of unbiased synthetic data from biased real data and that their method does so with minimal loss in data utility compared to other approaches. Furthermore, they identify five characteristics of fair synthetic data that their method achieves: (1) allows post-hoc distribution changes, (2) provides fairness, (3) supports causal notions of fairness, (4) allows inference-time fairness, and (5) requires minimal assumptions. Additionally, they claim that DECAF is the only method to achieve all of the five listed characteristics.
The authors identify three main contributions of their work:
(i) DECAF, a causal GAN-based model that can use a biased dataset $X$ to generate an equivalent synthetic unbiased dataset $\mathcal{X}$ with minimal loss of data utility
(ii) A flexible causal approach for modifying DECAF to generate fair data
(iii) A guarantee that downstream models trained on the generated synthetic data will make unbiased predictions on both synthetic and real-life (biased) data
We aim to evaluate claims (i) and (iii) by replicating the two experiments of [2]. We focus on the narrow interpretation of reproducibility, namely whether the experiment can be reproduced by independent researchers with the same setup, rather than testing against the more general standard of replicability on different datasets. Despite the availability of code, there were considerable problems with running the models even with the instructions given, so we limited our scope to direct reproducibility. Following the authors, we evaluate the data utility of the DECAF method with precision, recall, and area under the receiver operating characteristic (AUROC); fairness is evaluated with the Fairness Through Unawareness (FTU) and Demographic Parity (DP) measures.
## 5 Methodology
While code from the creators of the DECAF method is available${}^{1}$, the documentation leaves room for interpretation, and the instructions given for running the code do not reproduce the results as presented. In addition, there are several possible discrepancies between the method described in the paper and the code provided. Thus, we assumed that the paper takes precedence and adjusted the code to match it.
### 5.1 Methodological Code Changes
Though the DECAF class was working, several components of the experimental setup code were either missing or not fully explained. Thus, we had to extrapolate heavily to produce results. The major code changes required are listed below:
---
${}^{1}$ The DECAF code is available at: https://github.com/vanderschaarlab/DECAF
---
(i) Preprocessing: the paper mentions standardizing continuous variables; however, following the procedure given in the paper generated uninterpretable results. As a solution we attempted to standardize all variables, including categorical ones, though we question the conceptual validity of this decision. After standardizing with StandardScaler, we still were not getting results as high as the reported metrics, so we tried normalizing with MinMaxScaler, which finally produced matching results in data utility (a sketch of our final preprocessing appears after this list). The DECAF class employs a final sigmoid layer that converts all generated data to a range between 0 and 1. We suspect this was the reason why their run_example.py script would only predict labels of one class, and why using a scaler allowed us to obtain meaningful predictions.
(ii) DAGs: There appears to be a mismatch in the DAGs provided, as neither contains all of the variables in its dataset. In addition, the code provided utilizes a toy graph. The authors state that they used Tetrad to generate the DAG for the dataset, so we attempted to generate a full causal graph for the Adult dataset, but our generated graphs did not match Figures 6 and 7 of [2]. Hence, we manually input the graphs from the paper.
(iii) Label Generation: The paper instructs that the labels for synthetic data should be generated by the model, as they are part of the causal dependency graph. The original code did not generate the labels for the synthetic dataset, but instead generated only the $\mathrm{x}$ values and then predicted the labels from those generated $\mathrm{x}$ values using the baseline model. The code seemed to omit the target variable from the GAN input, but we felt this would leave out valuable causal information contained in the edges from the explanatory variables to the target variable. Thus, we decided to include the target variable in the DAG, and this indeed improved our results. In the end, we were forced to generate labels for experiment 1, while predicting labels for experiment 2, in order to obtain interpretable results.
(iv) Downstream Classifier: The paper mentions an MLP from sklearn, but the example code uses an XGBClassifier as the downstream classifier, which was giving us installation issues. We followed the paper by using an MLP.
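A minimal sketch of the preprocessing we converged on in item (i) is given below. The label encoding of categorical columns is our own assumption; the paper only discusses standardizing continuous variables.

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """Map every column into [0, 1] so that the generated data is
    compatible with the sigmoid output layer of the DECAF generator."""
    df = df.copy()
    for col in df.select_dtypes(include="object").columns:
        df[col] = df[col].astype("category").cat.codes  # label-encode categoricals
    scaled = MinMaxScaler().fit_transform(df)
    return pd.DataFrame(scaled, columns=df.columns, index=df.index)
```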
### 5.2 Dataset
For the first experiment, we worked with the Adult dataset${}^{2}$ [3], collected from the 1994 United States Census. The dataset contains about 45,000 data points, of which 2,000 were set aside for the test set as specified by [2]. The protected attribute is sex, and the target variable is income, with roughly ${75}\%$ in the '$\leq {50}\mathrm{k}$' class and the remaining ${25}\%$ belonging to the '$> {50}\mathrm{k}$' class. This makes sense considering the average earnings of Americans at the time, but it does make our data rather skewed towards one class. We manually input the DAG from Figure 6 of [2] and used the preprocessing steps described in the previous section.
For the second experiment, we used the Credit Approval dataset [3] of credit card applications. This dataset is considerably smaller than the first, with only 678 data points. The original paper did not specify how large the test set was, so we chose a typical ${80}\% /{20}\%$ split for training and testing. The protected attribute is ethnicity and the target variable is application approval. About 55% of the applications were approved while the rest were rejected, so this dataset is considerably more balanced than the Adult dataset. Again, we had to manually input the graph, from Figure 7 of the original paper. Since the protected attribute here, ethnicity, is not binary, we first converted the variable to be binary, with 0 corresponding to 'not discriminated against' and 1 to 'discriminated against'. Then we used the same preprocessing steps as in the first experiment.
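The binarization itself is a one-line mapping over an assumed `credit` dataframe; the set of category values treated as discriminated against below is a hypothetical placeholder, not the actual grouping of the dataset's anonymized codes.

```python
# Hypothetical placeholder values; the real Credit Approval category codes differ.
DISCRIMINATED = {"b", "o"}

credit["ethnicity"] = credit["ethnicity"].isin(DISCRIMINATED).astype(int)
```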
### 5.3 Hyperparameters
A hyperparameter search is not necessary for our experiments. We used the DECAF class as given, with the parameters set by the authors' code. The only modification we made was changing the dag_seed parameter from the provided toy graph to the respective graphs for each dataset presented on Page 28 of [2]. The DECAF generator is instantiated with $d$ sub-networks with shared hidden layers, where $d$ is the number of features. The generator and discriminator both use 2 hidden layers with ${2d}$ neurons. The generator is updated once for every 10 discriminator updates. Adam was used as the optimizer with a learning rate of 0.001. The other GANs used for comparison were also given default parameters and settings from their respective packages, because no settings were specified by the authors.
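The sketch below shows how a graph can be wired into dag_seed, assuming, as in the authors' example script, that edges are given as [parent_index, child_index] pairs. The column order is ours, and the edge list is a truncated, illustrative subset rather than the full Figure 6 graph.

```python
# Indices must match the column order of the preprocessed dataframe.
columns = ["age", "education", "occupation", "hours_per_week", "sex", "income"]
idx = {c: i for i, c in enumerate(columns)}

dag_seed = [
    [idx["age"], idx["education"]],
    [idx["sex"], idx["occupation"]],
    [idx["education"], idx["income"]],
    [idx["occupation"], idx["income"]],
    [idx["sex"], idx["income"]],  # the direct edge removed under FTU debiasing
]
```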
---
${}^{2}$ The Adult dataset is available at http://archive.ics.uci.edu/ml/index.php
---
Table 1: Reproduction results on bias removal experiment on the Adult dataset.
<table><tr><td rowspan="2">Method</td><td colspan="3">Data Quality</td><td colspan="2">Fairness</td></tr><tr><td>Precision</td><td>Recall</td><td>AUROC</td><td>FTU</td><td>DP</td></tr><tr><td>Original data</td><td>$\mathbf{{0.881}} \pm {0.006}$</td><td>${0.917} \pm {0.009}$</td><td>${0.772} \pm {0.008}$</td><td>${0.047} \pm {0.010}$</td><td>${0.207} \pm {0.013}$</td></tr><tr><td>GAN</td><td>${0.772} \pm {0.098}$</td><td>${0.344} \pm {0.249}$</td><td>${0.523} \pm {0.048}$</td><td>${0.202} \pm {0.197}$</td><td>${0.202} \pm {0.182}$</td></tr><tr><td>WGAN-GP</td><td>${0.784} \pm {0.073}$</td><td>${0.467} \pm {0.195}$</td><td>${0.514} \pm {0.067}$</td><td>${0.208} \pm {0.189}$</td><td>${0.231} \pm {0.166}$</td></tr><tr><td>FairGAN</td><td>${0.835} \pm {0.043}$</td><td>${0.911} \pm {0.081}$</td><td>${0.672} \pm {0.061}$</td><td>${0.097} \pm {0.113}$</td><td>${0.157} \pm {0.155}$</td></tr><tr><td>DECAF-ND</td><td>${0.880} \pm {0.024}$</td><td>${0.774} \pm {0.047}$</td><td>${0.734} \pm {0.023}$</td><td>${0.114} \pm {0.040}$</td><td>${0.353} \pm {0.023}$</td></tr><tr><td>DECAF-FTU</td><td>${0.866} \pm {0.027}$</td><td>${0.800} \pm {0.043}$</td><td>${0.708} \pm {0.043}$</td><td>${0.041} \pm {0.020}$</td><td>${0.260} \pm {0.085}$</td></tr><tr><td>DECAF-CF</td><td>${0.769} \pm {0.012}$</td><td>${0.954} \pm {0.025}$</td><td>${0.541} \pm {0.028}$</td><td>${0.022} \pm {0.018}$</td><td>${0.026} \pm {0.023}$</td></tr><tr><td>DECAF-DP</td><td>${0.753} \pm {0.003}$</td><td>$\mathbf{{0.978}} \pm {0.022}$</td><td>${0.502} \pm {0.009}$</td><td>$\mathbf{{0.006}} \pm {0.007}$</td><td>${0.012} \pm {0.009}$</td></tr></table>
An MLP with default parameters from sklearn was used. The default settings are a single hidden layer of 100 neurons with ReLU activation functions, and Adam with a learning rate of 0.001. A softmax activation and binary cross-entropy loss were used for the output layer.
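In code, the downstream classifier reduces to a few lines of sklearn. X_synthetic, y_synthetic, and X_test below are placeholder names for the DECAF output and the held-out real data, and random_state is our own addition for repeatability.

```python
from sklearn.neural_network import MLPClassifier

# sklearn defaults as described above: one hidden layer of 100 ReLU units,
# optimized with Adam at a learning rate of 0.001.
clf = MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                    solver="adam", learning_rate_init=0.001, random_state=0)
clf.fit(X_synthetic, y_synthetic)  # train on the synthetic data only
y_pred = clf.predict(X_test)       # evaluate on unmodified real data
```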
### 5.4 Experimental setup and code
In this study, we aimed to replicate the experiments of the original paper, Debiasing Census Data (experiment 1) and Fair Credit Approval (experiment 2), to evaluate the performance of DECAF when generating unbiased synthetic data from real, biased data.
We trained each model listed in Table 2 of the original paper, four DECAF GANs and three other GANs for comparison, for 50 epochs. A synthetic dataset was generated from each model that was then used to train an MLP to classify a test set of 2,000 unmodified data points from the original dataset. We compared these predictions with the ground truth labels from the original data to evaluate performance and fairness. This process was repeated ten times to obtain average metrics over multiple runs as specified by the authors.
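In outline, one run of this harness looks as follows. The fit/generate interface is an assumption standing in for the differing APIs of the benchmark packages; the performance metrics come from sklearn, and the FTU and DP calculations are sketched after the next paragraph.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.neural_network import MLPClassifier

def one_run(gan_factory, X_train, y_train, X_test, y_test):
    """One train -> generate -> fit-MLP -> evaluate cycle."""
    gan = gan_factory()                        # fresh generative model
    gan.fit(X_train, y_train)                  # 50 epochs internally
    X_syn, y_syn = gan.generate(len(X_train))  # synthetic dataset
    clf = MLPClassifier().fit(X_syn, y_syn)
    y_pred = clf.predict(X_test)
    return (precision_score(y_test, y_pred),
            recall_score(y_test, y_pred),
            roc_auc_score(y_test, y_pred))

# Average over ten independent runs, e.g.:
# runs = [one_run(make_decaf_dp, X_tr, y_tr, X_te, y_te) for _ in range(10)]
# mean, std = np.mean(runs, axis=0), np.std(runs, axis=0)
```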
To mimic the DECAF paper, precision, recall, and AUROC were used to measure the performance of the models, while FTU and DP were used to measure their fairness. Precision, recall, and AUROC are given by sklearn.metrics, and higher scores indicate better performance. Lower FTU and DP scores indicate less bias. To calculate FTU, set the protected attribute of every data point to one class and predict the labels; repeat with the remaining class (for binary attributes), and take the absolute difference of the means of the two prediction sets, such that $\left| {{P}_{A = 0}\left( {\widehat{Y} \mid X}\right) - {P}_{A = 1}\left( {\widehat{Y} \mid X}\right) }\right|$. For DP, segregate the dataset into data points with one protected-attribute value and data points with the other (for binary attributes), again predict the labels of each set, and take the absolute difference of the means of the two prediction sets, such that $\left| {P\left( {\widehat{Y} \mid A = 0}\right) - P\left( {\widehat{Y} \mid A = 1}\right) }\right|$. To compare our replication against the original experiments of the authors, we compare both the absolute difference and the relative difference (as a ratio) with our findings. Our code and more details can be found in our GitHub repository${}^{3}$.
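The sketch below is our interpretation of these two calculations, assuming a fitted binary classifier clf, a NumPy feature matrix X, and a column index a_col for the binary (0/1) protected attribute.

```python
def ftu(clf, X, a_col):
    """FTU: flip the protected attribute of *all* rows to each class and
    compare the mean positive prediction rates."""
    X0, X1 = X.copy(), X.copy()
    X0[:, a_col], X1[:, a_col] = 0, 1
    return abs(clf.predict(X0).mean() - clf.predict(X1).mean())

def dp(clf, X, a_col):
    """DP: split rows by their actual protected-attribute value and
    compare the mean positive prediction rates."""
    in_group = X[:, a_col] == 1
    return abs(clf.predict(X[~in_group]).mean() - clf.predict(X[in_group]).mean())
```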
### 5.5 Computational requirements
Because the datasets used are small and tabular, the computational requirements are minimal. No GPU was necessary; all models were run on an Intel Core i7-8750H CPU. It takes six minutes to train DECAF models on the Adult dataset [3] for 50 epochs, and five seconds to generate synthetic data. The total runtime is about four hours for experiment 1 and about two hours for experiment 2.
## 6 Results
We were able to reproduce some results in experiment 1, but we could not obtain similar results in the second experiment. Table 1 shows our results, where synthetic data is generated using each benchmark method and a separate MLP is then trained on each synthetic dataset to compute the metrics; Table 2 shows the results from the original paper. Section 5.4 details how we obtained the relevant metrics. We can see that DECAF does have a debiasing effect, with improvement comparable to FairGAN.
---
${}^{3}$ Our Github repository: https://anonymous.4open.science/r/DECAF-CFOA/
---
Table 2: Original results of bias removal experiment on the Adult dataset.
<table><tr><td rowspan="2">Method</td><td colspan="3">Data quality</td><td colspan="2">Fairness</td></tr><tr><td>Precision</td><td>Recall</td><td>AUROC</td><td>FTU</td><td>DP</td></tr><tr><td>Original data</td><td>$\mathbf{{0.920}} \pm {0.006}$</td><td>$\mathbf{{0.936}} \pm {0.008}$</td><td>$\mathbf{{0.807}} \pm {0.004}$</td><td>${0.116} \pm {0.028}$</td><td>${0.180} \pm {0.010}$</td></tr><tr><td>GAN</td><td>${0.607} \pm {0.080}$</td><td>${0.439} \pm {0.037}$</td><td>${0.567} \pm {0.132}$</td><td>${0.023} \pm {0.010}$</td><td>${0.089} \pm {0.008}$</td></tr><tr><td>WGAN-GP</td><td>${0.683} \pm {0.015}$</td><td>${0.914} \pm {0.005}$</td><td>${0.798} \pm {0.009}$</td><td>${0.120} \pm {0.014}$</td><td>${0.189} \pm {0.024}$</td></tr><tr><td>FairGAN</td><td>${0.681} \pm {0.023}$</td><td>${0.814} \pm {0.079}$</td><td>${0.766} \pm {0.029}$</td><td>${0.009} \pm {0.002}$</td><td>${0.097} \pm {0.018}$</td></tr><tr><td>DECAF-ND</td><td>${0.780} \pm {0.023}$</td><td>${0.920} \pm {0.045}$</td><td>${0.781} \pm {0.007}$</td><td>${0.152} \pm {0.013}$</td><td>${0.198} \pm {0.013}$</td></tr><tr><td>DECAF-FTU</td><td>${0.763} \pm {0.033}$</td><td>${0.925} \pm {0.040}$</td><td>${0.765} \pm {0.010}$</td><td>${0.004} \pm {0.004}$</td><td>${0.054} \pm {0.005}$</td></tr><tr><td>DECAF-CF</td><td>${0.743} \pm {0.022}$</td><td>${0.875} \pm {0.038}$</td><td>${0.769} \pm {0.004}$</td><td>${0.003} \pm {0.006}$</td><td>${0.039} \pm {0.011}$</td></tr><tr><td>DECAF-DP</td><td>${0.781} \pm {0.018}$</td><td>${0.881} \pm {0.050}$</td><td>${0.672} \pm {0.014}$</td><td>$\mathbf{{0.001}} \pm {0.001}$</td><td>$\mathbf{{0.001}} \pm {0.001}$</td></tr></table>
Figure 1: Plot of precision, recall, AUROC, FTU, and DP over bias strength.
As in the original paper, DECAF-ND performs nearly best among all methods in terms of data quality. DECAF-FTU, DECAF-CF, and DECAF-DP have relatively lower scores on data quality but perform better on fairness.
Figure 1 shows the DECAF results for experiment 2, in which synthetically injected bias is removed. These results do not match Figure 3 of the original paper. This mismatch is not surprising, because the second experiment builds on the first, where we suspect our setup already diverges significantly from that of the authors.
## 7 Discussion
Overall, we were able to produce results comparable to those found by the authors. That being said, there are multiple interpretations of the results, and their overall saliency is relatively low. For the purpose of this paper, we focus primarily on the fairness metrics, since the data utility metrics are closer to the findings of the authors and fairness is the primary goal of the method. Though the ordering of the models by fairness in our results matches the original results from the paper, our numerical figures do not match the authors' results with a satisfactory level of precision. Several observations are pursued below as plausible explanations for this phenomenon.
### 7.1 Interpretation of the results
As shown in Tables 1 and 2, we obtained interpretable results for all models tested in experiment 1. For the most part, we found effects similar to the authors', but they deviate significantly in numerical terms. More specifically, we do find that as the model variations move from the least strict to the most strict definition of fairness, fairness increases and data utility decreases. However, there are notable deviations from the authors' results, specifically concerning the fairness metrics of the GAN. In addition, we find that DECAF-ND increases the level of bias compared to the original dataset, which matches the authors' finding. However, we find a higher DP of 0.353 but a lower FTU of 0.114, compared to the authors' DP of 0.198 and FTU of 0.152. These results run counter to our expectations.
The results found on the Credit dataset also show the directional correctness of DECAF in reducing bias, but direct comparison is difficult because our results differ significantly from the authors' findings. In particular, we find that the FTU and DP scores are maximized at bias strength 0 and minimized at bias strength 1. In addition, the authors find relatively stable data utility metrics, whereas we find a significant decrease between bias 0.25 and 0.75. The results for bias 1.0 and 0 do reflect the average value found by the authors, with the exception of recall, which is significantly lower.
Furthermore, the authors did not directly interpret their chosen metrics. The original paper designated FTU and DP as measures of fairness and reported figures, but did not explain the actual meaning of the numbers or the magnitude of the changes seen. For example, most of the reported fairness metrics were very small, but we did not have any guidance on the significance of a 0.001 decrease in the FTU metric. Thus, we felt the paper lacked explainability. Additionally, the fairness definitions themselves, the instructions for calculating the fairness measures, and the given FTU and DP code were somewhat contradictory. Calculating FTU and DP based on our interpretation of the authors' method did not reproduce their results. Using the FTU and DP calculations from an extra code file we received still did not produce matching results. One possibility is that the authors' final fairness metric calculation code was not contained in the files we had access to and does not match any of the implementations we attempted.
### 7.2 What was easy
One aspect that eased our investigation into the reproducibility of [2] was the tabular format and small size of the datasets we used. Training and modifying the model was not computationally expensive or time consuming, so we could test many different strategies to find the closest solution.
### 7.3 What was difficult
We were originally under the impression that the DECAF code repository was fully functional as a basis for extension. Upon further examination, we found that it was not working and did not reproduce the published results. Thus, we had to pivot from extending their code to replicating the results with our own code, which was challenging in itself. While attempting to reproduce the experiments, we found that the instructions given were incomplete and contradictory to the code provided.
There are multiple obstacles to replicating the experiments as described, which can broadly be separated into conceptual and methodological issues. On the former, there are many important research decisions that are not fully articulated, as well as results that appear counterintuitive. For example, the authors found that their application of GAN, a method that does no explicit debiasing, had significantly improved fairness metrics compared to the original dataset. One would expect all the methods that do not debias, namely original data, GAN, WGAN-GP, and DECAF-ND, to perform in the same order of magnitude in terms of fairness, but this is not the case in the authors' initial findings. Moreover, while the DECAF models do reduce bias in line with the level of fairness required, DECAF-ND actually makes the dataset more biased compared to the original dataset. Our reproduction of GAN does match the expected results, with original data, GAN, and WGAN-GP all returning roughly the same fairness metrics. As discussed, we successfully reproduced the overall impact of DECAF, namely higher fairness and lower data utility for more stringent definitions of fairness. However, DECAF-ND exhibits considerably higher bias than the original dataset, and no clear intuition is given on why this may be the case.
In addition to the conceptual challenges, there are multiple methodological issues. Following the instructions provided by the authors resulted in numerous compatibility warnings and failed tests. As described in Section 5.1, several substantial changes were needed to generate any interpretable results. Further compounding these issues, there are inconsistencies in the applied method, as the code utilized in the example explicitly deviates from the approach described in the experimental setup. We were forced to generate labels for experiment 1 while predicting labels for experiment 2: attempts to use generated labels made experiment 2 uninterpretable, as all key performance indicators became zero. This methodological inconsistency between experiments further problematizes the reproducibility of DECAF.
### 7.4 Overall reproducibility
Due to the number of possible conceptual and methodological interpretations of the code, modifications were needed as described in Section 5.1. While we were successful in producing results that could be interpreted, the numerical variations and methodological deviations are so substantial that further research would be needed to assess the overall accuracy of the authors' claims. We found evidence that supports the narrow interpretation of the claims made by the authors, namely that DECAF reduces bias in downstream models and allows for the generation of debiased synthetic data. However, the authors also claim that the approach incurs minimal data utility loss. Without further explanation of what is considered minimal data utility loss, it is difficult to evaluate this claim, especially with the amount of deviation found between the authors' results and ours. While our findings on the first experiment are in line with the authors', the results of the second experiment are in direct contradiction to their findings. Since any fundamental issues in experiment 1 are likely to carry over to experiment 2, we focus our recommendations on experiment 1.
Overall, we find that the results are reproducible but difficult to interpret and compare. A fruitful avenue of further investigation would be to re-evaluate the fairness metrics. Another hypothesis worth investigating is that there is a more fundamental issue with the DECAF model itself.
### 7.5 Communication with original authors
We sent two emails to the authors of DECAF detailing the aforementioned code issues. One author did respond with a few extra code files, but unfortunately did not directly address our content questions. However, several of the interpretations we made were retroactively confirmed by the extra code files.
## 8 Conclusion
During our investigation, we faced multiple significant challenges in reproducing the results of the original paper. The biggest challenges stemmed from the number of possible interpretations of the code and method. While we were not able to reproduce the results in full, we believe methods like DECAF have great potential for expansion. The relevance of unbiased downstream classifiers and the evident need for bias removal in real data will likely remain a societally relevant area of research. For instance, the Adult dataset [3] we studied is nearing 30 years old. An intriguing next step could be to use this year's Census data to investigate how bias has changed over time, and whether DECAF can still remove the likely more nuanced and hidden bias that persists despite today's increased awareness of bias and the techniques available for counteracting it.
## References
[1] Ulrich Aïvodji et al. "Local data debiasing for fairness based on generative adversarial training". In: Algorithms 14.3 (2021), p. 87. DOI: 10.3390/a14030087.
[2] Boris van Breugel et al. "DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks". In: CoRR abs/2110.12884 (2021). arXiv: 2110.12884. URL: https://arxiv.org/abs/2110.12884.
[3] Dheeru Dua and Casey Graff. UCI Machine Learning Repository. 2017. URL: http://archive.ics.uci.edu/ml.
[4] Harrison Edwards and Amos Storkey. "Censoring Representations with an Adversary". In: International Conference on Learning Representations (2016). arXiv: 1511.05897.
[5] Gabbrielle M. Johnson. "Algorithmic bias: on the implicit biases of social technology". In: Synthese 198.10 (2020), pp. 9941-9961. DOI: 10.1007/s11229-020-02696-y.
[6] Toshihiro Kamishima et al. "Fairness-aware classifier with prejudice remover regularizer". In: Machine Learning and Knowledge Discovery in Databases (2012), pp. 35-50. DOI: 10.1007/978-3-642-33486-3_3.
[7] William P. O'Hare. "Who Is Missing? Undercounts and Omissions in the U.S. Census". In: Differential Undercounts in the U.S. Census (SpringerBriefs in Population Studies) (2019), pp. 1-12. DOI: 10.1007/978-3-030-10973-8_1.
[8] Kit T. Rodolfa, Hemank Lamba, and Rayid Ghani. "Empirical observation of negligible fairness-accuracy trade-offs in Machine Learning for Public Policy". In: Nature Machine Intelligence 3.10 (2021), pp. 896-904. DOI: 10.1038/s42256-021-00396-x.
[9] P. Sattigeri et al. "Fairness GAN: Generating datasets with fairness properties using a generative adversarial network". In: IBM Journal of Research and Development 63.4/5 (2019). DOI: 10.1147/jrd.2019.2945519.
[10] Depeng Xu et al. "FairGAN: Fairness-aware Generative Adversarial Networks". In: 2018 IEEE International Conference on Big Data (Big Data) (2018). DOI: 10.1109/bigdata.2018.8622525.
## 9 Appendices
Table 3: Absolute difference between authors' findings and our results.
<table><tr><td rowspan="2">Method</td><td colspan="3">Data quality</td><td colspan="2">Fairness</td></tr><tr><td>Precision</td><td>Recall</td><td>AUROC</td><td>FTU</td><td>DP</td></tr><tr><td>Original data</td><td>0.039</td><td>0.019</td><td>0.035</td><td>0.069</td><td>-0.027</td></tr><tr><td>GAN</td><td>-0.165</td><td>0.095</td><td>0.044</td><td>-0.179</td><td>-0.113</td></tr><tr><td>WGAN-GP</td><td>-0.101</td><td>0.447</td><td>0.284</td><td>-0.088</td><td>-0.042</td></tr><tr><td>FairGAN</td><td>-0.154</td><td>-0.097</td><td>0.094</td><td>-0.088</td><td>-0.060</td></tr><tr><td>DECAF-ND</td><td>-0.107</td><td>0.143</td><td>0.047</td><td>0.038</td><td>-0.155</td></tr><tr><td>DECAF-FTU</td><td>-0.103</td><td>0.125</td><td>0.057</td><td>-0.037</td><td>-0.206</td></tr><tr><td>DECAF-CF</td><td>-0.026</td><td>-0.079</td><td>0.228</td><td>-0.019</td><td>0.013</td></tr><tr><td>DECAF-DP</td><td>0.028</td><td>-0.097</td><td>0.170</td><td>-0.005</td><td>-0.011</td></tr></table>
Absolute difference is calculated as the value found by the authors minus the value found in our reproduction; for example, for GAN precision, 0.607 - 0.772 = -0.165.
Table 4: Performance relative to original data in the authors' findings.
<table><tr><td rowspan="2">Method</td><td colspan="3">Data quality</td><td colspan="2">Fairness</td></tr><tr><td>Precision</td><td>Recall</td><td>AUROC</td><td>FTU</td><td>DP</td></tr><tr><td>Original data</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>GAN</td><td>0.66</td><td>0.46</td><td>0.70</td><td>0.20</td><td>0.49</td></tr><tr><td>WGAN-GP</td><td>0.74</td><td>0.95</td><td>0.98</td><td>1.03</td><td>1.05</td></tr><tr><td>FairGAN</td><td>0.74</td><td>0.85</td><td>0.95</td><td>0.08</td><td>0.54</td></tr><tr><td>DECAF-ND</td><td>0.85</td><td>0.96</td><td>0.97</td><td>1.31</td><td>1.10</td></tr><tr><td>DECAF-FTU</td><td>0.83</td><td>0.96</td><td>0.95</td><td>0.03</td><td>0.30</td></tr><tr><td>DECAF-CF</td><td>0.81</td><td>0.91</td><td>0.95</td><td>0.03</td><td>0.22</td></tr><tr><td>DECAF-DP</td><td>0.85</td><td>0.91</td><td>0.83</td><td>0.01</td><td>0.01</td></tr></table>
Relative performance is calculated as the ratio between the performance of the selected model and that of the original data on the same variable.
Table 5: Performance relative to original data in our findings.
<table><tr><td rowspan="2">Method</td><td colspan="3">Data quality</td><td colspan="2">Fairness</td></tr><tr><td>Precision</td><td>Recall</td><td>AUROC</td><td>FTU</td><td>DP</td></tr><tr><td>Original data</td><td>1</td><td>1</td><td>1</td><td>1</td><td>1</td></tr><tr><td>GAN</td><td>0.95</td><td>0.38</td><td>0.72</td><td>4.30</td><td>0.98</td></tr><tr><td>WGAN-GP</td><td>0.97</td><td>0.51</td><td>0.71</td><td>4.43</td><td>1.12</td></tr><tr><td>FairGAN</td><td>1.03</td><td>0.99</td><td>0.93</td><td>2.06</td><td>0.76</td></tr><tr><td>DECAF-ND</td><td>1.09</td><td>0.85</td><td>1.02</td><td>2.43</td><td>1.70</td></tr><tr><td>DECAF-FTU</td><td>1.07</td><td>0.87</td><td>0.98</td><td>0.87</td><td>1.26</td></tr><tr><td>DECAF-CF</td><td>0.95</td><td>1.04</td><td>0.75</td><td>0.47</td><td>0.13</td></tr><tr><td>DECAF-DP</td><td>0.93</td><td>1.07</td><td>0.70</td><td>0.13</td><td>0.06</td></tr></table>
Table 6: Reproduction results on bias removal experiment on the Credit dataset.
<table><tr><td rowspan="2">Method</td><td colspan="3">Data quality</td><td colspan="2">Fairness</td></tr><tr><td>Precision</td><td>Recall</td><td>AUROC</td><td>FTU</td><td>DP</td></tr><tr><td>Original data</td><td>$\mathbf{{0.915}} \pm {0.007}$</td><td>${0.787} \pm {0.009}$</td><td>${0.840} \pm {0.004}$</td><td>$\mathbf{{0.013}} \pm {0.008}$</td><td>${0.011} \pm {0.007}$</td></tr><tr><td>DECAF-ND</td><td>${0.809} \pm {0.083}$</td><td>${0.813} \pm {0.047}$</td><td>${0.758} \pm {0.080}$</td><td>${0.085} \pm {0.035}$</td><td>${0.053} \pm {0.035}$</td></tr><tr><td>DECAF-FTU</td><td>${0.821} \pm {0.072}$</td><td>${0.811} \pm {0.050}$</td><td>${0.770} \pm {0.055}$</td><td>${0.032} \pm {0.028}$</td><td>${0.065} \pm {0.040}$</td></tr><tr><td>DECAF-DP</td><td>${0.784} \pm {0.064}$</td><td>${0.836} \pm {0.047}$</td><td>${0.744} \pm {0.055}$</td><td>${0.045} \pm {0.036}$</td><td>${0.063} \pm {0.030}$</td></tr></table>
| 1 |
+
§ REPLICATION STUDY OF DECAF: GENERATING FAIR SYNTHETIC DATA USING CAUSALLY-AWARE GENERATIVE NETWORKS
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email
|
| 10 |
+
|
| 11 |
+
§ 1 1 SUMMARY
|
| 12 |
+
|
| 13 |
+
§ 2 1.1 SCOPE OF REPRODUCIBILITY
|
| 14 |
+
|
| 15 |
+
In this paper we attempt to reproduce the results found in "DECAF: Generating Fair Synthetic Data 4 Using Causally-Aware Generative Networks" by Breugel et al [2]. The goal of the original paper is create a model that intakes a biased dataset and outputs a debiased synthetic dataset that can be used 6 to train downstream models to make unbiased predictions both on synthetic and real data.
|
| 16 |
+
|
| 17 |
+
§ 7 1.2 METHODOLOGY
|
| 18 |
+
|
| 19 |
+
3 We built upon the (incomplete) code provided by the authors to repeat the first experiment of [2] which involves removing existing bias from real data with existing bias, and the second experiment where synthetically injected bias is added to real data and then removed.
|
| 20 |
+
|
| 21 |
+
§ 1.3 RESULTS
|
| 22 |
+
|
| 23 |
+
We reproduced most of the data utility results reported in the first experiment for the Adult dataset. However, the fairness metric generally match the original paper but are numerically not comparable in absolute or relative terms. For the second experiment, we were unsuccessful in reproducing results found by the authors. We note however that we made considerable changes to the experimental setup, which may make it difficult to perform a direct comparison of the results.
|
| 24 |
+
|
| 25 |
+
§ 1.4 WHAT WAS EASY
|
| 26 |
+
|
| 27 |
+
The smaller size and tabular format of both datasets allowed for quick training and model modifications.
|
| 28 |
+
|
| 29 |
+
§ 1.5 WHAT WAS DIFFICULT
|
| 30 |
+
|
| 31 |
+
There are several possible interpretations of the paper on both a methodological and conceptual level. Reproducing the experiments required rewriting or adding large sections of code. Given these multiple interpretations it was difficult to be confident in the reproduction. In addition, several results found by the authors appear to be counterintuitive, such as algorithms debiasing without being designed to do so and sometimes outperforming debiasing algorithms on the same dataset.
|
| 32 |
+
|
| 33 |
+
§ 1.6 COMMUNICATION WITH ORIGINAL AUTHORS
|
| 34 |
+
|
| 35 |
+
7 We sent two emails to the authors describing our issues. We received a reply with a few extra files, 28 but no direct answer to content questions.
|
| 36 |
+
|
| 37 |
+
§ 2 INTRODUCTION
|
| 38 |
+
|
| 39 |
+
It is broadly acknowledged that real world data contains bias. Despite efforts to make data collection more equitable and representative, a myriad of challenges remain. The effects of bias are well understood, as biased data can lead to the under-representation of particular demographics, such as the case of political representation in the United States Census [7]. As technology progressed to the emergence of machine learning (ML) models, the same challenges persisted as ML models adopted the biases of the data and humans who created them. Models trained on biased data can pass bias downstream to various other applications, a phenomenon referred to as algorithmic bias[5]. Such models have potential to not only perpetuate but exacerbate social inequality, yet bias is omnipresent in everything that humans touch. Hence, there is a clear and present need for methods that can utilize biased data to produce unbiased results.
|
| 40 |
+
|
| 41 |
+
§ 3 BACKGROUND
|
| 42 |
+
|
| 43 |
+
The notion of using Generative Adversarial Networks (GAN) to increase fairness within artificial intelligence is broadly supported by the literature. Various models exists such as FairGAN [10], GANSAN[1], and Fairness GAN [9] to name but a few. Notably, fairness efforts have typically recognized a fairness-accuracy trade-off assumption, where a fairer algorithm comes at the cost of accuracy. However, recent work has challenged these assumptions, finding that the accuracy cost of fairness is negligible in some circumstances [8]. Nonetheless, given the increased awareness of the nefarious effects of data bias, many research efforts have been directed towards the debiasing of data and other attempts to create fairer artificial intelligence.
|
| 44 |
+
|
| 45 |
+
§ 3.1 DECAF PREMISE
|
| 46 |
+
|
| 47 |
+
One such effort and the subject of the present study is DEbiasing CAusal Fairness (DECAF) [2]. DECAF takes a distinct approach to debiasing data, explicitly approaching fairness from a causal standpoint with a goal of downstream model fairness. There are three broad approaches to fairness that may be identified, (1) the preprocessing approach, where the characteristics of the input data are changed to suppress undesirable biases [2], (2) the algorithmic modification approach, where the learning algorithm itself is adapted to reduce bias [4], and (3) the postprocessing approach, where the output of a model is manipulated to obtain the desired level of fairness [6]. The DECAF approach falls in the first category of preprocessing because it attempts to remove bias from the input data and subsequently from all downstream models.
|
| 48 |
+
|
| 49 |
+
The DECAF model is a generative adversarial network (GAN) that utilizes the causal structure of directed acyclical graphs (DAGs) to remove bias from real data. The three critical assumptions of the DECAF method are (1) the data generating process is represented by a DAG, (2) the DAG is causally sufficient, and (3) the DAG is known for a given dataset. DAGs are central to the method, as it is through edge manipulation that debiasing is performed.
|
| 50 |
+
|
| 51 |
+
The model may be separated into two stages. During the first training phase, the model learns the causal conditionals of the dataset from its DAG. In the second inference phase, the data is debiased through DAG modification. Each fairness level defines a unique set of edge removals from the original DAG, resulting in a new, intervened DAG. These intervened DAGs are given to the model to generate synthetic, fair datasets from the original data. The synthetic datasets have similar distributions to the original data, but avoid bias. Because the method debiases at inference time, retraining the model is not required when using different fairness measures, thus providing inference-time fairness.
|
| 52 |
+
|
| 53 |
+
Once DECAF generates a synthetic and unbiased dataset, a simple multilayer perceptron (MLP) is trained on this synthetic data to create an unbiased classifier that can be used both on the original data and in other settings. Because the data used for training the MLP has already been debiased, the authors claim that the MLP or any chosen downstream model is guaranteed to be fair since it doesn't 75 incorporate any of the bias from the original training data; this is a hallmark of the preprocessing approach to fairness.
|
| 54 |
+
|
| 55 |
+
§ 3.2 FAIRNESS STANDARDS
|
| 56 |
+
|
| 57 |
+
Three definitions of algorithmic fairness are used in the paper, each corresponding to a unique modified DAG. The most lenient standard is the commonly used Fairness Through Unawareness (FTU) definition, which entails that the protected variable, $A$ , is not explicitly used by the model to predict the label, $\widehat{Y}$ . While widely used because it avoids direct discrimination, FTU fails to eliminate indirect discrimination.
|
| 58 |
+
|
| 59 |
+
A more stringent definition of fairness is Demographic Parity (DP), which declares that classification probability must be independent of classes, i.e. if the protected attribute is gender, all gender classes have the same success rate. The DP definition is considered to be very strict because it potentially under-utilizes feature differences between groups in the process of blocking indirect discrimination.
|
| 60 |
+
|
| 61 |
+
Conditional Fairness (CF) lies in the middle ground between the first two definitions by presuming that the selection rate between groups segregated by the protected attribute must be the same when conditioned on some explanatory variable(s) determined by prior knowledge. Each of these standards corresponds to a variation of DECAF, respectively DECAF-ND (no debiasing), DECAF-FTU, DECAF-CF, and DECAF-DP. The fairness of each model is tested against FTU and DP metrics.
|
| 62 |
+
|
| 63 |
+
§ 4 SCOPE OF REPRODUCIBILITY AND CLAIMS
|
| 64 |
+
|
| 65 |
+
The authors claim that DECAF allows for the generation of unbiased synthetic data from biased real data and that their method does so with minimal loss in data utility compared to other approaches. Furthermore, they identify five characteristics of fair synthetic data that their method achieves: (1) allows post-hoc distribution changes, (2) provides fairness, (3) supports causal notions of fairness, (4) allows inference-time fairness, and (5) requires minimal assumptions. Additionally, they claim that DECAF is the only method to achieve all of the five listed characteristics.
|
| 66 |
+
|
| 67 |
+
The authors identify three main contributions of their work:
|
| 68 |
+
|
| 69 |
+
(i) DECAF, a causal GAN-based model that can use a biased dataset $X$ to generate an equivalent synthetic unbiased dataset $\mathcal{X}$ with minimal loss of data utility
|
| 70 |
+
|
| 71 |
+
(ii) A flexible causal approach for modifying DECAF to generate fair data
|
| 72 |
+
|
| 73 |
+
(iii) Guarantee that downstream models trained on the generated synthetic data will make unbiased predictions on both synthetic and real-life (biased) data
|
| 74 |
+
|
| 75 |
+
We aim to evaluate claims (i) and (iii) by replicating the two experiments of [2]. We will focus on the narrow interpretation of reproducibility, namely whether the experiment can be reproduced by independent researchers with the same setup rather than testing against the more general standard of replicatability on different datasets. Despite the availability of code, there were considerable problems with running the models even with instructions given, meaning that we limited our scope to direct reproducibility. As the authors have done, we will evaluate the data utility of the DECAF method with precision, recall, and area under the receiver operation characteristic (AUROC); fairness will be evaluated with Fairness Through Unawareness (FTU) and Demographic Parity (DP) measures.
|
| 76 |
+
|
| 77 |
+
§ 5 METHODOLOGY
|
| 78 |
+
|
| 79 |
+
While code from the creators of the DECAF method is available [1, documentation leaves room for interpretation and the instructions given for running the code do not reproduce the results as presented. In addition, there are several possible discrepancies between the method described in the paper and the code provided. Thus, we made the assumption that the paper leads and adjusted the code accordingly to match.
|
| 80 |
+
|
| 81 |
+
§ 5.1 METHODOLOGICAL CODE CHANGES
|
| 82 |
+
|
| 83 |
+
Though the DECAF class was working, several components of the experimental setup code was either missing or not fully explained. Thus, we had to extrapolate heavily to produce results. The major code changes required are listed below:
|
| 84 |
+
|
| 85 |
+
${}^{1}$ The DECAF code is available at: https://github.com/vanderschaarlab/DECAF
|
| 86 |
+
|
| 87 |
+
(i) Preprocessing: the paper mentioned standardizing continuous variables, however, following the procedure given in the paper generated uninterpretable results. As a solution we attempted to standardize all variables, including categorical ones though we question the conceptual validity of this decision. After standardizing with StandardScaler, we still were not getting results as high as the reported metrics, so we tried normalizing with MinMaxScaler which finally produced matching results in data utility. The DECAF class employs a final sigmoid layer that converts all generated data to a range between 0 and 1 . We suspect this was the reason why their run_example.py script would only predict labels of one class and why using a Scaler allowed us to obtain meaningful predictions.
|
| 88 |
+
|
| 89 |
+
(ii) DAGs: There appears to be a mismatch with the dags provided, as neither contain all of the variables in the datasets. In addition the code provided utilized a toy graph. The authors state that they used Tetrad to generate the DAG for the dataset, so we attempted to generate a full causal graph for the Adult dataset, but our generated graphs did not match Figure 6 and 7 of [2]. Hence, we manually input the graphs from the paper.
|
| 90 |
+
|
| 91 |
+
(iii) Label Generation: The paper instructed that the labels for synthetic data should be generated by the model as they are part of the causal dependencies graph. The original code did not generate the labels for the synthetic dataset, but instead generated only the $\mathrm{x}$ values and then predicted the labels from those generated $\mathrm{x}$ values using the baseline model. The code seemed to omit the target variable from the GAN input, but we felt this would leave out valuable causal information contained in the edges from the explanatory variables to the target variable. Thus, we decided to include the target variable in the DAG, and this indeed improved our results. In the end, we were forced to generate labels for experiment 1, while predicting labels for experiment 2 in order to obtain interpretable results.
|
| 92 |
+
|
| 93 |
+
(iv) Downstream Classifer: The paper mentions an MLP from sklearn, but the example code uses an XGBClassifier as the downstream classifier which was giving us installation issues. We followed the paper by using an MLP.
|
| 94 |
+
|
| 95 |
+
§ 5.2 DATASET
|
| 96 |
+
|
| 97 |
+
For the first experiment, we worked with the Adult dataset 2 [3] collected from the 1994 United States Census. The dataset contains about 45,000 data points, and 2,000 data points were set aside for the test set as specified by [2]. The protected attribute is sex, and the target variable is income with roughly ${75}\%$ in the ’ $< = {50}\mathrm{k}$ ’ class and the remaining ${25}\%$ belonging to the ’ $> {50}\mathrm{k}$ ’ class. This makes sense considering the average earnings of Americans at the time, but does make our data rather skewed towards one class. We manually input the DAG from Figure 6 of [2] and used the preprocessing steps described in the previous section.
|
| 98 |
+
|
| 99 |
+
For the second experiment, we used the Credit Approval dataset [3] of credit card applications. This dataset is considerably smaller than the first dataset with only 678 data points. The original paper did not specify how large the test set was, so we chose a typical ${80}\% /{20}\%$ split for training and testing. The protected attribute is ethnicity and the target variable is application approval. About 55% of the applications were approved while the rest were rejected, so this dataset is considerably more balanced than the other. Again, we had to manually input the graph from Figure 7 of the original paper. Since the protected attribute here, ethnicity, is not binary, we first converted the variable to be binary with 0 corresponding to 'not discriminated against' and 1 to 'discriminated against'. Then we used the same preprocessing steps as in the first experiment.
|
| 100 |
+
|
| 101 |
+
§ 5.3 HYPERPARAMETERS
|
| 102 |
+
|
| 103 |
+
A hyperparameter search is not necessary for our experiments. We used the DECAF class as given with the parameters set by the authors' code. The only modification we made was changing the dag_seed parameter from the provided toy graph to the respective graphs for each dataset presented on Page 28 of [2]. The DECAF generator is instantiated with $d$ , the number of features, sub-networks with shared hidden layers. The generator and discriminator both use 2 hidden layers with ${2d}$ neurons. The generator is updated once for every 10 discriminator updates. Adam was used as the optimizer with a learning rate of 0.001 . The other GANs used for comparison were also given default parameters and settings from their respective packages because no settings were specified by the authors.
|
| 104 |
+
|
| 105 |
+
${}^{2}$ The Adult dataset is available at http://archive.ics.uci.edu/ml/index.php
|
| 106 |
+
|
| 107 |
+
Table 1: Reproduction results on bias removal experiment on the Adult dataset.
|
| 108 |
+
|
| 109 |
+
max width=
|
| 110 |
+
|
| 111 |
+
2*Method 3|c|Data Quality 2|c|Fairness
|
| 112 |
+
|
| 113 |
+
2-6
|
| 114 |
+
Precision Recall AUROC FTU DP
|
| 115 |
+
|
| 116 |
+
1-6
|
| 117 |
+
Original data $\mathbf{{0.881}} \pm {0.006}$ ${0.917} \pm {0.009}$ ${0.772} \pm {0.008}$ ${0.047} \pm {0.010}$ ${0.207} \pm {0.013}$
|
| 118 |
+
|
| 119 |
+
1-6
|
| 120 |
+
GAN ${0.772} \pm {0.098}$ ${0.344} \pm {0.249}$ ${0.523} \pm {0.048}$ ${0.202} \pm {0.197}$ ${0.202} \pm {0.182}$
|
| 121 |
+
|
| 122 |
+
1-6
|
| 123 |
+
WGAN-GP ${0.784} \pm {0.073}$ ${0.467} \pm {0.195}$ ${0.514} \pm {0.067}$ ${0.208} \pm {0.189}$ ${0.231} \pm {0.166}$
|
| 124 |
+
|
| 125 |
+
1-6
|
| 126 |
+
FairGAN ${0.835} \pm {0.043}$ ${0.911} \pm {0.081}$ ${0.672} \pm {0.061}$ ${0.097} \pm {0.113}$ ${0.157} \pm {0.155}$
|
| 127 |
+
|
| 128 |
+
1-6
|
| 129 |
+
DECAF-ND ${0.880} \pm {0.024}$ ${0.774} \pm {0.047}$ ${0.734} \pm {0.023}$ ${0.114} \pm {0.040}$ ${0.353} \pm {0.023}$
|
| 130 |
+
|
| 131 |
+
1-6
|
| 132 |
+
DECAF-FTU ${0.866} \pm {0.027}$ ${0.800} \pm {0.043}$ ${0.708} \pm {0.043}$ ${0.041} \pm {0.020}$ ${0.260} \pm {0.085}$
|
| 133 |
+
|
| 134 |
+
1-6
|
| 135 |
+
DECAF-CF ${0.769} \pm {0.012}$ ${0.954} \pm {0.025}$ ${0.541} \pm {0.028}$ ${0.022} \pm {0.018}$ ${0.026} \pm {0.023}$
|
| 136 |
+
|
| 137 |
+
1-6
|
| 138 |
+
DECAF-DP ${0.753} \pm {0.003}$ $\mathbf{{0.978}} \pm {0.022}$ ${0.502} \pm {0.009}$ $\mathbf{{0.006}} \pm {0.007}$ 0.012±0.009
|
| 139 |
+
|
| 140 |
+
1-6
|
| 141 |
+
|
| 142 |
+
An MLP with default parameters from sklearn was used. The default settings are 100 neurons with ReLU activation functions and Adam with a learning rate of 0.001 . A Softmax activation and binary cross entropy loss were used for the output layer.
|
| 143 |
+
|
| 144 |
+
§ 5.4 EXPERIMENTAL SETUP AND CODE
|
| 145 |
+
|
| 146 |
+
In this study, we aimed to replicate the experiments of the original paper, Debiasing Census Data (experiment 1) and Fair Credit Approval (experiment 2), to evaluate the performance of DECAF when generating unbiased synthetic data from real, biased data from the Adult dataset.
|
| 147 |
+
|
| 148 |
+
We trained each model listed in Table 2 of the original paper, four DECAF GANs and three other GANs for comparison, for 50 epochs. A synthetic dataset was generated from each model that was then used to train an MLP to classify a test set of 2,000 unmodified data points from the original dataset. We compared these predictions with the ground truth labels from the original data to evaluate performance and fairness. This process was repeated ten times to obtain average metrics over multiple runs as specified by the authors.
|
| 149 |
+
|
| 150 |
+
To mimic the DECAF paper, precision, recall, and AUROC were used to measure the performance of the models, while FTU and DP were used to measure the fairness of the models. Precision, recall, and AUROC are given by sklearn.metrics, and higher scores indicate better performance. Lower FTU and DP scores indicate less bias. To calculate FTU, set all the labels of the protected attribute to one class and predict the labels; repeat with the remaining class (for binary variables), and compare the difference of the means of the two prediction sets, such that $\left| {{P}_{A = 0}\left( {\widehat{Y} \mid X}\right) - {P}_{A = 1}\left( {\widehat{Y} \mid X}\right) }\right|$ Then for DP, segregate the dataset into datapoints with one class label and datapoints with the other label (for binary variables), and again predict the labels of each set and compare the difference of the means of the two prediction sets, such that $\left| {P\left( {\widehat{Y} \mid A = 0}\right) - P\left( {\widehat{Y} \mid A = 1}\right) }\right|$ . To compare our replication against the original experiments of the authors, we compare both the absolute difference and the relative difference (as a ratio) with our findings. Our code and more details can be found on our Github repository ${}^{3}$ .
|
| 151 |
+
|
| 152 |
+
§ 5.5 COMPUTATIONAL REQUIREMENTS
|
| 153 |
+
|
| 154 |
+
Because the datasets used are small and tabular, the computational requirements are minimal. No GPU was necessary; all models were run on an Intel Core i7-8750h CPU. It takes six minutes to train DECAF models on the Adult dataset [3] for 50 epochs, and five seconds to generate synthetic data. The total runtime is about four hours for experiment 1 and about two hours for experiment 2.
|
| 155 |
+
|
| 156 |
+
§ 6 RESULTS
|
| 157 |
+
|
| 158 |
+
We were able to reproduce some results in experiment 1, but we could not get similar results on the second experiment. Table 1 shows our result that synthetic data is generated using each benchmark method, after which a separate MLP is trained on each dataset for computing the metrics, and Table 2 is the result from the original paper. Section 5.4 details how we obtained the relevant metrics. We can see DECAF does have the effect of debiasing and there is improvement comparable with FairGAN.
|
| 159 |
+
|
| 160 |
+
${}^{3}$ Our Github repository: https://anonymous.4open.science/r/DECAF-CFOA/
|
| 161 |
+
|
| 162 |
+
Table 2: Original results of the bias removal experiment on the Adult dataset. Precision, recall, and AUROC measure data quality; FTU and DP measure fairness.

| Method | Precision | Recall | AUROC | FTU | DP |
|---|---|---|---|---|---|
| Original data | **0.920** ± 0.006 | **0.936** ± 0.008 | **0.807** ± 0.004 | 0.116 ± 0.028 | 0.180 ± 0.010 |
| GAN | 0.607 ± 0.080 | 0.439 ± 0.037 | 0.567 ± 0.132 | 0.023 ± 0.010 | 0.089 ± 0.008 |
| WGAN-GP | 0.683 ± 0.015 | 0.914 ± 0.005 | 0.798 ± 0.009 | 0.120 ± 0.014 | 0.189 ± 0.024 |
| FairGAN | 0.681 ± 0.023 | 0.814 ± 0.079 | 0.766 ± 0.029 | 0.009 ± 0.002 | 0.097 ± 0.018 |
| DECAF-ND | 0.780 ± 0.023 | 0.920 ± 0.045 | 0.781 ± 0.007 | 0.152 ± 0.013 | 0.198 ± 0.013 |
| DECAF-FTU | 0.763 ± 0.033 | 0.925 ± 0.040 | 0.765 ± 0.010 | 0.004 ± 0.004 | 0.054 ± 0.005 |
| DECAF-CF | 0.743 ± 0.022 | 0.875 ± 0.038 | 0.769 ± 0.004 | 0.003 ± 0.006 | 0.039 ± 0.011 |
| DECAF-DP | 0.781 ± 0.018 | 0.881 ± 0.050 | 0.672 ± 0.014 | **0.001** ± 0.001 | **0.001** ± 0.001 |

Figure 1: Plot of precision, recall, AUROC, FTU, and DP over bias strength.

As in the original paper, DECAF-ND performs nearly the best among all methods in terms of data quality, while DECAF-FTU, DECAF-CF, and DECAF-DP score somewhat lower on data quality but better on fairness.

Figure 1 shows the DECAF results for experiment 2, in which synthetically injected bias is removed. These results do not match Figure 3 of the original paper. The mismatch is not surprising, because the second experiment builds on the first, where we suspect our setup already diverges significantly from that of the authors.

§ 7 DISCUSSION

Overall, we were able to broadly reproduce the results found by the authors. That being said, the results admit multiple interpretations, and their overall salience is relatively low. For the purposes of this paper, we focus primarily on the fairness metrics, since the data utility metrics are closer to the authors' findings and fairness is the primary goal of the method. Although the ordering of the models by fairness in our results matches the original paper, our numerical figures do not match the authors' results to a satisfactory level of precision. Several observations are pursued below as plausible explanations for this discrepancy.

§ 7.1 INTERPRETATION OF THE RESULTS

As shown in Tables 1 and 2, we obtained interpretable results for all models tested in experiment 1. For the most part, we found effects similar to the authors', but they deviate significantly in numerical terms. More specifically, we do find that as the model variations move from the least to the most strict definition of fairness, fairness increases and data utility decreases. However, there are notable deviations from the authors' results, specifically concerning the fairness metrics of the GAN. In addition, we find that DECAF-ND increases the level of bias compared to the original dataset, which matches the authors' finding; however, we measure a DP of 0.353 and an FTU of 0.114, against the authors' DP of 0.198 and FTU of 0.152. These results run counter to our expectations.

The results on the Credit dataset also show that DECAF is directionally correct in reducing bias, but a direct comparison is difficult because our results differ significantly from the authors' findings. In particular, we find that the FTU and DP scores are maximized at a bias strength of 0 and minimized at 1. In addition, the authors find relatively stable data utility metrics, whereas we find a significant decrease between bias strengths 0.25 and 0.75. The results for bias strengths 1.0 and 0 do reflect the average values found by the authors, with the exception of recall, which is significantly lower.

Furthermore, the authors did not directly interpret their chosen metrics. The original paper designated FTU and DP as its fairness measures and reported figures, but did not explain the actual meaning of the numbers or the magnitude of the changes observed. For example, most of the reported fairness metrics were very small, but no guidance was given on the significance of, say, a 0.001 decrease in FTU. We therefore felt the paper lacked explainability. Additionally, the fairness definitions themselves, the instructions for calculating the fairness measures, and the provided FTU and DP code were somewhat contradictory. Calculating FTU and DP based on our interpretation of the authors' method did not reproduce their results, and neither did the FTU and DP calculations from an extra code file we received. One possibility is that the code the authors used for their final fairness metrics was not contained in the files we had access to and does not match any of the implementations we attempted.

§ 7.2 WHAT WAS EASY

One aspect that eased our investigation into the reproducibility of [2] was the tabular format and small size of the datasets we used. Training and modifying the model was neither computationally expensive nor time-consuming, so we could test many different strategies to find the closest solution.

§ 7.3 WHAT WAS DIFFICULT

We were originally under the impression that the DECAF code repository was fully functional as a basis for extension. Upon further examination, we found that it did not work and did not reproduce the published results. We therefore had to pivot from extending the authors' code to replicating the results with our own code, which was challenging in itself. While attempting to reproduce the experiments, we found that the instructions given were incomplete and contradicted the code provided.

There are multiple obstacles to replicating the experiments as described, which can broadly be separated into conceptual and methodological issues. On the former, many important research decisions are not fully articulated, and some results appear counterintuitive. For example, the authors found that their application of GAN, a method that does no explicit debiasing, had significantly improved fairness metrics compared to the original dataset. One would expect all the methods that do not debias, namely original data, GAN, WGAN-GP, and DECAF-ND, to perform in the same order of magnitude in terms of fairness, but this is not the case in the authors' initial findings. Moreover, while the DECAF models do reduce bias in line with the level of fairness required, DECAF-ND actually makes the dataset more biased than the original data. Our reproduction of GAN does match the expected results, with original data, GAN, and WGAN-GP all returning roughly the same fairness metrics. As discussed, we successfully reproduced the overall effect of DECAF, namely higher fairness and lower data utility under more stringent definitions of fairness. However, DECAF-ND exhibits considerably higher bias than the original dataset, and no clear intuition is given for why this may be the case.

In addition to the conceptual challenges, there are multiple methodological issues. Following the instructions provided by the authors resulted in numerous compatibility warnings and failed tests. As described in section 5.1, several substantial changes were needed to generate any interpretable results. Further compounding these issues, there are inconsistencies in the applied method: the code used in the example explicitly deviates from the approach described in the experimental setup. We were forced to generate labels for experiment 1 while predicting labels for experiment 2; attempts to use generated labels made experiment 2 uninterpretable, as all key performance indicators became zero. This methodological inconsistency between experiments further undermines the reproducibility of DECAF.

§ 7.4 OVERALL REPRODUCIBILITY

Given the number of possible conceptual and methodological interpretations of the code, modifications were needed as described in section 5.1. While we succeeded in producing results that could be interpreted, the numerical variations and methodological deviations are so substantial that further research would be needed to assess the overall accuracy of the authors' claims. We found evidence supporting the narrow interpretation of those claims, namely that DECAF reduces bias in downstream models and allows for the generation of debiased synthetic data. However, the authors also claim that the approach incurs minimal data utility loss; without further explanation of what counts as minimal data utility loss, this claim is difficult to evaluate, especially given the amount of deviation between the authors' results and ours. While our findings on the first experiment are in line with the authors', the results of the second experiment directly contradict their findings. Since any fundamental issues in experiment 1 are likely to carry over to experiment 2, we focus our recommendations on experiment 1.

Overall, we find that the results are reproducible but difficult to interpret and compare. A fruitful avenue for further investigation would be to re-evaluate the fairness metrics. Another hypothesis worth pursuing is that there is a more fundamental issue with the DECAF model itself.

§ 7.5 COMMUNICATION WITH ORIGINAL AUTHORS

We sent two emails to the authors of DECAF detailing the aforementioned code issues. One author did respond with a few extra code files but unfortunately did not directly address our content questions. However, several of the interpretations we made were retroactively confirmed by the extra code files.

§ 8 CONCLUSION

During our investigation, we faced multiple significant challenges in reproducing the results of the original paper. The biggest challenges stemmed from the number of possible interpretations of the code and method. While we were not able to reproduce the results in full, we believe methods like DECAF have great potential for extension. The relevance of unbiased downstream classifiers and the evident need for bias removal in real data will likely keep this a societally relevant area of research. For instance, the Adult dataset [3] we studied is nearing 30 years old. An intriguing next step would be to pull this year's census data and investigate how bias has changed over time, and whether DECAF can still remove the more nuanced, hidden bias that persists despite today's increased awareness of bias and the techniques available for counteracting it.

papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SW4eu2MmnRY/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Reproducing Results for Crossing the Line: Where do Demographic Variables Fit into Humor Detection?

Anonymous Author(s)

Affiliation

Address

email

## Reproducibility Summary

## Scope of Reproducibility

Within the original experiment, two groups of annotators of size 10 each, in the age groups 18-25 and 55-70, were relied on to generate the metrics displayed as part of the results. The scope of our study was limited to instances from the Short Jokes dataset on Kaggle with token sizes between 11 and 16, rated by 21 annotators divided into demographic bins by gender (male, female, non-binary), gender being the chief source of demographic diversity under study.

## Methodology

The paper was based on gathering direct human responses on the fields humorous and/or offensive by presenting short jokes from a varied set of humor genres and subgenres to a diverse audience across demographic categories segmented by age (18-25, 26-40, 40-55, 56-70), educational qualification as an index of socio-economic status (High School, Undergraduate, Postgraduate), and gender (Male, Female, Non-binary).

We used the same methodology as the original paper: binary classification (humorous = 1, non-humorous = 0; offensive = 1, non-offensive = 0), with the binary values summed per demographic bin when studying the findings, as in the sketch below.

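A minimal sketch of that bin-wise aggregation, assuming the annotations sit in a table with hypothetical columns `gender`, `humorous`, and `offensive` (not the study's actual schema):

```python
import pandas as pd

# One row per (annotator, joke) pair with binary ratings; the column names
# and values are illustrative assumptions.
ratings = pd.DataFrame({
    "gender":    ["male", "female", "female", "non-binary"],
    "humorous":  [0, 1, 1, 0],
    "offensive": [1, 1, 0, 1],
})

# Add up the binary labels within each demographic bin, as described above.
per_bin = ratings.groupby("gender")[["humorous", "offensive"]].sum()
print(per_bin)
```
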
## Results

It was found that inter-annotator agreement was higher when annotators were categorized using demographic data, in this case gender. For example, a subset of jokes identified with the keyword "sexist" was found both humorous and offensive by female annotators, while male annotators rating the same jokes reported them as simply offensive.

## What was easy

The easy part of the reproduction study was collecting short English jokes representing various genres of humor, with diversity in addressed audiences, expressed sentiment, and degree of simplicity.

## What was difficult

The harder part of recreating the study and determining results was ensuring diversity among the survey respondents without treating the interviewed people insensitively or introducing prejudice into the selection of responses.

## Communication with original authors

Context was gathered from the submitted paper itself; no one-to-one communication was established with the author for the purposes of this reproducibility study.

## Conclusion

The scale of operations could be significantly increased by introducing a more automated form of response collection, such as gauging responses to sample inputs on a user forum kept live for engagement. Responses would be recorded per demographic bin after obtaining consent in a transparent manner, with users agreeing to provide such information for research purposes.

## References

Meaney, J. A. "Crossing the Line: Where do Demographic Variables Fit into Humor Detection?" ACL (2020).

Paula Cristina Teixeira Fortuna. 2017. Automatic detection of hate speech in text: an overview of the topic and dataset annotation with hierarchical classes.

papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SWNM52GXh0Y/Initial_manuscript_md/Initial_manuscript.md
ADDED
# [Reproducibility Report] Explainable Deep One-Class Classification

Anonymous Author(s)

Affiliation

Address

email

## Reproducibility Summary

## Scope of Reproducibility

Liznerski et al. [23] proposed Fully Convolutional Data Description (FCDD), an explainable version of the Hypersphere Classifier (HSC), to directly address image anomaly detection (AD) and pixel-wise AD without any post-hoc explainer methods. The authors claim that FCDD achieves results comparable with the state of the art in sample-wise AD on Fashion-MNIST and CIFAR-10 and exceeds the state of the art on the pixel-wise task on MVTec-AD. They also give evidence of a clear improvement from using a few (1 up to 8) real anomalous images of MVTec-AD for supervision at the pixel level. Finally, a qualitative study with horse images on PASCAL-VOC shows that FCDD can intrinsically reveal spurious model decisions by providing built-in anomaly score heatmaps.

## Methodology

We reproduced the quantitative results in the main text of [23] except for the performance on ImageNet: sample-wise AD on Fashion-MNIST and CIFAR-10, and pixel-wise AD on MVTec-AD. We used the authors' code with NVIDIA TITAN X and NVIDIA TITAN Xp GPUs. A more detailed look into FCDD's performance variability is presented, and a Critical Difference (CD) diagram is proposed as a more appropriate tool for comparing methods over the datasets in MVTec-AD. Finally, we study the generalization power of the unsupervised FCDD during training.

## Results

All per-class performances (in terms of Area Under the ROC Curve (ROC-AUC) [31]) announced in the paper were replicated with an absolute difference of at most 2%, and below 1% on average, confirming the paper's claims. We report the experiments' GPU and CPU memory requirements and their average training times. Our analyses beyond the paper's scope show that the claim of exceeding the state of the art should be considered with care, and evidence is given to argue that the pixel-wise unsupervised FCDD could narrow the gap with its semi-supervised version.

## What was easy

The paper was clear and explicitly gave many training and hyperparameter details, which were conveniently set as defaults in the authors' scripts. Their code was well organized and easy to interact with.

## What was difficult

Using ImageNet proved challenging due to its size and the need to set it up manually; we could not complete the experiments on this dataset.

## Communication with original authors

We reached the main author by e-mail to ask for help with ImageNet and to discuss a few practical details. He promptly replied with useful information.

## 1 Introduction

Liznerski et al. [23] proposed a deep learning based AD method capable of doing pixel-wise AD (also known as "anomaly segmentation") by directly generating anomaly score maps with a loss function based on the Hypersphere Classifier (HSC) [29], a successor of Deep Support Vector Data Description (DSVDD) [28], using a fully convolutional neural network - hence the name Fully Convolutional Data Description (FCDD).

By only using convolutions, down-samplings, and batch normalization (no attention mechanism, nor fully connected layers), an image of dimensions $C \times H \times W$ (respectively, the number of channels, the height, and the width) is transformed into a latent representation ${C}^{\prime } \times U \times V$, where $U < H$ and $V < W$. This low-resolution representation is a $U \times V$ grid of ${C}^{\prime }$-dimensional vectors, from which the pseudo-Huber loss function yields a $U \times V$ heatmap of anomaly scores.

Each of these ${C}^{\prime }$-dimensional vectors contains information from a corresponding receptive field within the full-resolution $\left( {H \times W}\right)$ image. Evidence [24] suggests that the effective influence of the input pixels decays in a Gaussian manner as their position moves away from the center of the receptive field. FCDD uses this principle to up-sample the obtained heatmap back to the original resolution $\left( {H \times W}\right)$, thereby directly obtaining a visual, explainable anomaly score map.

Finally, FCDD is also adapted to perform anomaly detection at the sample (image) level by taking the average score over the low-resolution anomaly heatmap. These steps are sketched below.

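A minimal PyTorch sketch of these three steps. The backbone, kernel size, and sigma are illustrative assumptions; in particular, the paper realizes the Gaussian up-sampling with a fixed transposed convolution, which we approximate here with nearest-neighbour up-sampling followed by Gaussian smoothing:

```python
import torch
import torch.nn.functional as F

def pixel_scores(z):
    """Pseudo-Huber scores over the latent grid: z is (B, C', U, V),
    the result is a (B, U, V) low-resolution anomaly heatmap."""
    return torch.sqrt(z.pow(2).sum(dim=1) + 1.0) - 1.0

def gaussian_kernel(size=17, sigma=4.0):
    """Fixed 2D Gaussian kernel (size and sigma are illustrative choices)."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-ax.pow(2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).view(1, 1, size, size)

def fcdd_outputs(backbone, x):
    """Full-resolution heatmap plus sample-wise score for a batch x."""
    z = backbone(x)                                   # (B, C', U, V)
    low = pixel_scores(z)                             # (B, U, V)
    up = F.interpolate(low.unsqueeze(1), size=x.shape[-2:], mode="nearest")
    heatmap = F.conv2d(up, gaussian_kernel(), padding=8)  # spread the scores
    sample_score = low.flatten(1).mean(dim=1)         # average low-res score
    return heatmap.squeeze(1), sample_score
```
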
Vocabulary: sample-wise vs. pixel-wise anomaly detection. The authors refer to anomaly detection (AD) at the image level (e.g., given an unseen image, a model trained on horse images should infer whether a horse is present or, otherwise, that the image is anomalous) simply as "detection", while anomaly segmentation/localization (i.e., finding regions, sets of pixels with anomalous characteristics) is referred to as "pixel-wise AD". Analogously, for the sake of clarity, we refer to the former as "sample-wise AD". Both setups are further explained in Section 3.3.

## 2 Scope of reproducibility

We aimed to reproduce the results announced in [23] to verify the effectiveness of the proposed method both in sample-wise and pixel-wise anomaly detection. Specifically, we tested the following claims from the original paper:

1. Claim 1: FCDD is comparable with state-of-the-art methods in terms of ROC-AUC in sample-wise anomaly detection on standard benchmarks (namely Fashion-MNIST, CIFAR-10, and ImageNet);

2. Claim 2: FCDD exceeds the state of the art on MVTec-AD in anomaly segmentation in the unsupervised setting in terms of pixel-wise ROC-AUC;

3. Claim 3: FCDD can incorporate real anomalies, and including only a few annotated images $\left( { \approx 5}\right)$ containing real, segmented anomalies consistently improves performance;

4. Claim 4: FCDD can reveal spurious model decisions without any extra explanation method on top of it.

The experiments supporting Claim 1 on Fashion-MNIST and CIFAR-10 have been replicated, as well as all the tests on MVTec-AD, supporting Claims 2 and 3, and the qualitative analysis on PASCAL-VOC, supporting Claim 4. We provide details about the computational requirements (CPU memory, GPU memory, and training time) necessary to run these experiments.

Beyond the paper. Further analyses are proposed on the results of the experiments corresponding to Claims 2 and 3; they further confirm Claim 3 but show that Claim 2 should be taken with care. We also investigate the evolution of the test performance during optimization in MVTec-AD's unsupervised setting (see Section 3.3), revealing an opportunity for improvement that could narrow the gap with the semi-supervised setting.

## 3 Methodology

We used the authors' code (PyTorch 1.9.1 and Torchvision 0.10.1), publicly available on GitHub [4], to reproduce the two quantitative experiments presented in the main text. It required no external documentation, and the whole reproduction took roughly one person-month of work.

### 3.1 Datasets

The proposed method was originally tested [23] on Fashion-MNIST [32], CIFAR-10 [19], ImageNet1k [13], MVTec-AD [9], and PASCAL VOC [14]. Besides, EMNIST [11], CIFAR-100 [19], and ImageNet21k${}^{1}$ (version "fall 2011") were used as Outlier Exposure (OE) [16] datasets. All the datasets except for ImageNet were publicly available and automatically downloaded.

ImageNet. We requested access to and downloaded ImageNet1k (version "ILSVRC 2012") from its official website [5]. ImageNet21k (a.k.a. ImageNet22k) was downloaded from academictorrents.com [1] because the version used in the original paper was no longer available on the official website.

### 3.2 Models

We used the same neural networks as the original paper, which depend on the dataset:

- Fashion-MNIST: three convolutional layers separated by two max-pool layers, where the first convolution is followed by batch normalization and a leaky ReLU;

- CIFAR-10: two convolutions preceded by three blocks, each composed of a convolution, batch normalization, a leaky ReLU, and a max-pool layer;

- MVTec-AD and PASCAL-VOC (Clever Hans): the first 10 (frozen) layers of VGG11 pre-trained on ImageNet, followed by two convolutional layers.

### 3.3 Experimental setup

The paper presents two quantitative experiments, sample-wise AD (section "4.1 Standard Anomaly Detection Benchmarks" in [23]) and pixel-wise AD (section "4.2 Explaining Defects in Manufacturing" in [23]), as well as a qualitative experiment.

We followed the same experimental procedure as [23]: each experiment - i.e., given a dataset, its OE when applicable, a normal class, and all hyperparameters - was repeated five times, and the reported values are averages over these repetitions unless stated otherwise (e.g., Figure 1).

Sample-wise. A standard one-vs-rest setup: one class of the given dataset is chosen as normal and all the others are used as anomalous. Each image has a binary ground-truth signal, logically derived from its label, and the model assigns an anomaly score to it (hence "sample-wise"). The metric used is the ROC-AUC on the test split, and every class is evaluated as normal. The datasets used in this experiment and their respective OE datasets are summarized in Table 1, and the results support Claim 1.

Table 1: Sample-wise experiments: tested datasets and their respective OE sources. From the dataset in the column "One-vs-rest dataset", one class is used as normal at training and test time, while all others are considered anomalous at test time only. The column "OE dataset" is the dataset used as the source of anomalies at training time. "Experiment reference" is used below to refer to these configurations.

<table><tr><td>One-vs-rest dataset</td><td>OE dataset</td><td>Experiment reference</td></tr><tr><td rowspan="2">Fashion-MNIST</td><td>EMNIST</td><td>F-MNIST (OE-EMNIST)</td></tr><tr><td>CIFAR-100</td><td>F-MNIST (OE-CIFAR-100)</td></tr><tr><td>CIFAR-10</td><td>CIFAR-100</td><td>CIFAR-10</td></tr><tr><td>ImageNet1k</td><td>ImageNet21k</td><td>ImageNet</td></tr></table>

Pixel-wise. Anomalies are defined at the pixel level (a binary segmentation mask where "1" means anomalous and "0" means normal), and an image is considered anomalous if it contains anomalous pixels, even though normal pixels are also present in the image. In each experiment a single class of MVTec-AD is fixed; its normal images are used both for training and testing, and anomalous ones are used for testing only. As for the anomalous samples at training time, two settings were tested (a sketch of the confetti noise follows Table 2):

- Unsupervised: synthetic random anomalies are generated using a "confetti noise" (colored blobs added to the image);

- Semi-supervised: one image per anomaly group (1 up to 8 types, depending on the class) is removed from the test set and used for training.

${}^{1}$ Also known as "ImageNet22k" or "full ImageNet".

Table 2: Memory requirements and training time using NVIDIA TITAN X and TITAN Xp GPUs (one at a time, indistinctly).

<table><tr><td>Experiment</td><td>CPU memory (GB)</td><td>GPU memory (GB)</td><td>Training duration</td></tr><tr><td>F-MNIST (OE-CIFAR-100)</td><td>2</td><td>1.3</td><td>12 min</td></tr><tr><td>CIFAR-10</td><td>3</td><td>1.9</td><td>34 min</td></tr><tr><td>MVTec-AD unsupervised</td><td>38</td><td>5.5</td><td>1h 13 min</td></tr><tr><td>MVTec-AD semi-supervised</td><td>33</td><td>5.5</td><td>41 min</td></tr><tr><td>PASCAL VOC (Clever Hans)</td><td>5</td><td>11.8</td><td>21 min</td></tr></table>

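For the unsupervised setting above, a rough sketch of confetti-style noise is given below, assuming square blobs of random colour, size, and position; the blob shapes and parameters in the authors' implementation are more elaborate:

```python
import numpy as np

def confetti_noise(img, n_blobs=8, max_size=16, rng=None):
    """Paste small random coloured squares ("confetti") onto a copy of
    `img` (H x W x 3, floats in [0, 1]); also return the touched pixels."""
    rng = rng or np.random.default_rng()
    out = img.copy()
    mask = np.zeros(img.shape[:2], dtype=np.uint8)
    h, w = img.shape[:2]
    for _ in range(n_blobs):
        s = int(rng.integers(4, max_size))
        y = int(rng.integers(0, h - s))
        x = int(rng.integers(0, w - s))
        out[y:y + s, x:x + s] = rng.random(3)   # one random solid colour
        mask[y:y + s, x:x + s] = 1              # pixels made anomalous
    return out, mask
```
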
Neither of these settings requires an OE dataset, because the anomalous samples are either synthetic or real anomalies on images of the nominal class. The performance metric is the ROC-AUC of the anomaly scores at the pixel level. MVTec-AD is the only dataset used in this case, and the results of these experiments support Claims 2 and 3.

Clever Hans (PASCAL VOC). About one fifth of the images of the class "horse" in PASCAL VOC [14] contain a watermark [20], which may cause models to learn spurious features. This is known as the "Clever Hans" effect, in reference to Hans, a horse claimed to be capable of performing arithmetic operations while it was, in fact, reading its master's reactions [26]; analogously, a model making decisions based on the watermarks would be "cheating" the real problem. In this experiment, a model is trained using all the classes in PASCAL VOC as normal, only the class "horse" as anomalous (a swapped one-vs-rest setting), and ImageNet1k as the OE dataset. The goal is to qualitatively observe whether one-class classifiers are also vulnerable to the Clever Hans effect and to show that FCDD transparently reveals such weaknesses, as it intrinsically provides explanations (score heatmaps). This experiment has no quantitative metric, but it supports Claim 4.

### 3.4 Hyperparameters

Running the authors' code with the default parameters, as described in the original paper, did not require any hyperparameter tuning to achieve the reported results (differences detailed in Section 4) and confirm the authors' claims. We underline that the results on MVTec-AD were obtained using the same hyperparameters in both settings, unsupervised and semi-supervised.

### 3.5 Computational requirements

We used NVIDIA "TITAN X" [6] and "TITAN Xp" [7] GPUs to run our experiments. The two GPUs were used indistinctly, as they have similar characteristics, and only one GPU was used at a time. The GPU and CPU memory requirements and the average training duration of our experiments are listed in Table 2.

CPU memory was recorded with an in-house Python script using the library psutil [3] at 1 Hz. GPU memory was recorded using gpustat [2] at 1 Hz. Both memory values are the maxima recorded during the experiments, including training and inference time. The training duration is an average over all the experiments.

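The in-house script is not published; a minimal stand-in that polls a process at roughly 1 Hz with psutil and keeps the peak could look like this (our assumption, not the original code):

```python
import time
import psutil

def record_peak_rss(pid, interval=1.0):
    """Track a process's peak resident memory by polling at ~1 Hz."""
    proc = psutil.Process(pid)
    peak = 0
    while True:
        try:
            peak = max(peak, proc.memory_info().rss)
        except psutil.NoSuchProcess:
            break                       # the monitored process has exited
        time.sleep(interval)
    return peak / 2 ** 30               # peak resident set size in GiB
```
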
On F-MNIST (OE-CIFAR-100) and CIFAR-10, the training duration did not vary by more than two minutes; on MVTec-AD unsupervised it ranged from 15 minutes up to one hour, and on MVTec-AD semi-supervised from 22 minutes up to one hour and 56 minutes, depending on the class.

### 3.6 Beyond the paper

We propose a more detailed visualization of the distribution of performances (due to random effects; all hyperparameters held constant) for the two settings (unsupervised and semi-supervised) evaluated on MVTec-AD, and a critical difference diagram as an alternative evaluation of performance across several datasets (the individual classes of MVTec-AD).

The network architectures used for the experiments on MVTec-AD were pre-trained on ImageNet, and most of the weights are kept frozen, raising the question of how much of FCDD's performance is due to the pre-training. We took snapshots of the unsupervised model's weights in order to visualize the evolution of the performance on the test set during training.

## 4 Results

### 4.1 Reproducing the original paper

We reproduced the unsupervised and semi-supervised settings for MVTec-AD, and all the experiments in Table 1 except for ImageNet; due to resource limitations, this experiment could not be completed in time.

The results of F-MNIST (OE-EMNIST) were not detailed in the original paper, but its class-mean ROC-AUC is claimed to be $\sim 3\%$ below that of F-MNIST (OE-CIFAR-100); we observed a difference of 2.7%.

We summarize the differences between our results and those of the original paper [23] in Table 3. The error margins presented are absolute differences and refer to the ROC-AUC (expressed in %) of each individual class's experiment (recall: the mean over five iterations).

Table 3: Differences between the original paper's results and ours. All values are absolute differences of ROC-AUC, expressed in $\%$ (not relative errors). The columns under "Diff. per class" show statistics of the absolute difference of each individual class's performance, while the column "Mean ROC-AUC diff." corresponds to the difference measured after the mean is taken over all the classes.

<table><tr><td rowspan="2">Experiment</td><td rowspan="2">ROC-AUC type</td><td rowspan="2">N. classes</td><td colspan="2">Diff. per class</td><td rowspan="2">Mean ROC-AUC diff.</td></tr><tr><td>Max</td><td>Mean</td></tr><tr><td>F-MNIST (OE-CIFAR-100)</td><td>sample-wise</td><td>10</td><td>1%</td><td>0.6%</td><td>0.01%</td></tr><tr><td>CIFAR-10</td><td>sample-wise</td><td>10</td><td>0.5%</td><td>0.3%</td><td>0.4%</td></tr><tr><td>MVTec-AD unsupervised</td><td>pixel-wise</td><td>15</td><td>2%</td><td>0.6%</td><td>0.2%</td></tr><tr><td>MVTec-AD semi-supervised</td><td>pixel-wise</td><td>15</td><td>2%</td><td>0.7%</td><td>0.4%</td></tr></table>

Clever Hans (PASCAL VOC). The experiment on PASCAL VOC ("Clever Hans effect") was verified manually, and similar (flawed) explanations on horse images were observed. Two examples are shown in Figure 5 in Appendix A.

### 4.2 Beyond the paper

Figure 1 further details the performance comparison between the unsupervised and semi-supervised settings on MVTec-AD for each class.

Figure 2 compares the methods of Table 2 in [23] with a CD diagram using the Wilcoxon-Holm procedure implemented by [17]. We replaced the results for FCDD from [23] with our own and copied the others from the literature [10, 30, 25, 22, 21, 12, 35]. For each class, the methods are sorted by their respective ROC-AUC and assigned a rank from 1 to 10 according to their position; then, every pair of methods is compared with the Wilcoxon signed-rank test at confidence level $\alpha = 5\%$. The CD diagram shows the average ranks of the methods on the horizontal scale, and the red bars group methods that are not significantly different from one another according to the Wilcoxon signed-rank test. The ranks from one to five (six to ten omitted for brevity) are shown in Table 4 in Appendix A.

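The ranking and pairwise-test logic behind such a diagram can be sketched as follows; `scores` is a hypothetical classes-by-methods array of ROC-AUCs standing in for Table 2 of [23], and the Holm correction and the plotting itself are omitted:

```python
import numpy as np
from scipy.stats import rankdata, wilcoxon

rng = np.random.default_rng(0)
scores = rng.uniform(0.80, 1.00, size=(15, 10))   # 15 classes x 10 methods

# Rank the methods within each class (rank 1 = best ROC-AUC),
# then average the ranks over the classes for the diagram's scale.
ranks = rankdata(-scores, axis=1)
avg_rank = ranks.mean(axis=0)

# Pairwise Wilcoxon signed-rank tests on the per-class scores; pairs with
# p >= 0.05 would be joined by a red bar in the CD diagram.
for i in range(scores.shape[1]):
    for j in range(i + 1, scores.shape[1]):
        _, p = wilcoxon(scores[:, i], scores[:, j])
        if p >= 0.05:
            print(f"methods {i} and {j} not separable (p = {p:.2f})")
```
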
Figure 3 shows the test pixel-wise ROC-AUC scores during the optimization of the model used for MVTec-AD in the unsupervised setting. Due to time and resource constraints, we ran this experiment on 7 of the 15 classes in MVTec-AD, each evaluated 6 times (a few runs could not finish in time).

Figure 1: Our experiments on MVTec-AD: unsupervised and semi-supervised settings compared. We display a box plot of the performances (in terms of pixel-wise ROC-AUC on the test set) achieved in different runs, along with the individual performances scattered on the $x$-axis.

Figure 2: MVTec-AD Critical Difference diagram. Using Table 2 from [23] with the results for FCDD replaced by our own, we build a critical difference diagram using the Wilcoxon-Holm method. Values on the scale are average rankings, and each red line groups a set of methods that are not significantly different in terms of ranking with a confidence level of $\alpha = 5\%$. The per-class ROC-AUC values used for FCDD are from our own experiments, and those marked with "*" were taken from the literature. References: scores for Self-Similarity (AE-SS) [10], L2 Autoencoder (AE-L2) [10], AnoGAN [30], and CNN Feature Dictionaries (CNNFD) [25] were taken from Table 3 in [10]. Other scores were taken from their respective papers: Visually Explained Variational Autoencoder (VEVAE) from Table 2 in [22], Superpixel Masking and Inpainting (SMAI) from Table 2 in [21], Gradient Descent Reconstruction with VAEs (GDR) from Table 1 in [12], Encoding Structure-Texture Relation with P-Net for AD (P-NET) from Table 6 in [35].

## 5 Discussion

Our reproduction of the experiments agrees closely with the quantitative results published in the original paper. The proposed setup is suited to testing whether the claims announced in the paper hold, and the results corroborate them. We obtained results consistently close to the published ones without any further parameter tuning or modification of the authors' code.

Figure 3: MVTec-AD test performance history. Performances were recorded at the following epochs (out of 1 to 200): 1, 2, 3, 5, 8, 11, 14, 17, 20, 25, 40, 45, 50, 60, 70, 80, 90, 100, 130, 170, 200.

### 5.1 What was easy

The paper is clear, and it was easy to grasp the core ideas presented in the main text. It also provided enough detail about the experimental setup, including training hyperparameters and network architectures, in the appendices.

The code was overall well organized, and the instructions for using it were direct and easy to follow. Conveniently, the experiments were encapsulated in scripts whose default parameters matched those described in the text. In particular, the experiments are self-documenting (i.e., they keep records of the configurations, logs, results, etc.) and flexible, allowing the user to change (many) parameters without modifying the code.

### 5.2 What was difficult

ImageNet. Using ImageNet was the hardest part. At first, it took about a month to get access to it on the official website. Then we had to find an alternative source [1] for the correct version of ImageNet21k ("fall 2011"), because it was no longer available on the official website. Basic operations (e.g., decompressing data, moving files) proved challenging due to its size (1.2 TB compressed), and the instructions to manually prepare this dataset could be more explicit; we wasted several hours of work because of a few mistakes we made.

We could not run the experiments on that dataset with the same hyperparameters because the GPUs at our disposal did not have enough memory (16 GB). We note that solutions such as using multiple GPUs or decreasing the batch size were possible but could not be tried in time.

Minor code issues. There were a few minor bugs, which we corrected without considerable difficulty. They were mostly related to the script add_exp_to_base.py, which automatically configures and launches baseline experiments based on a previously executed one. Finally, the code structure was slightly overcomplicated; e.g., the levels of abstraction/indirection, especially inheritance, could be simpler. We stress, though, that this negative point is minor and did not cause any critical issues.

### 5.3 Communication with original authors

We exchanged e-mails with the main author, mostly to ask for help with getting access to the right versions of ImageNet and executing the experiments on it. He replied promptly, his answers were certainly helpful, and we would like to express our sincere appreciation.

### 5.4 Beyond the paper

MVTec-AD: supervision effect. The visualization proposed in Figure 1 further demonstrates that, with only a few images of real anomalies added to training, the model's performance consistently improves. Only 4 of 15 classes show overlapping performance distributions; all others show a clear shift.

However, it must be mentioned that the synthetic anomalies ignore the local supervision, making training sub-optimal. Figure 4 illustrates this with training images and their respective masks from the class "Pill": in 4a we see that the semi-supervised setting provides pixel-level annotations of the anomalies (the ground-truth mask), while in 4b we see that the entire image is considered anomalous in the unsupervised setting. This is a source of sub-optimality because, in the anomalous images, most pixels are in fact normal. In other words, similar image patches, free of synthetic anomalies, can be found both in normal and in anomalous images.

Ultimately, this is a clear opportunity for improvement that could bring the unsupervised setting's performance closer to the semi-supervised setting's.

Test performance history. Figure 3 reveals another issue with the method. Take, for instance, the purple and blue lines in the row "Carpet": they reach a maximum early in the gradient descent and then converge to a point with less and less generalization power. These performance histories are evaluated on the test set, which is assumed to be unavailable at training time, so this information could not be used to stop or reject the training. Still, it reveals another opportunity for improvement, because the training setting does not push the model to generalize well enough. Note, in Figure 4b, that the confetti noise also "stains" the background, creating synthetic anomalies out of context, so using more realistic ones could be a solution.

Figure 4: MVTec-AD training images: unsupervised vs. semi-supervised.

We also see that some (good) performances are likely due to the pre-training on ImageNet. For instance, the class "Hazelnut" often reaches (or nearly reaches) its maximum test performance within a few epochs.

Critical Difference (CD) diagram. We propose the CD diagram in Figure 2 as a more appropriate methodology for aggregating results over all the classes. A potential user choosing an AD method for a new dataset is looking for the method(s) most likely to be the best on his or her own problem. Therefore, the specific ROC-AUC scores on standard datasets matter little, while the methods' relative performances are essential. In other words, what matters to a potential user is how the comparison of methods generalizes beyond specific datasets.

The experiments on MVTec-AD do not interact from one class (set as nominal) to another, making them essentially independent datasets; therefore, taking the average score over the classes may mislead the analysis. For instance, in [23], unsupervised FCDD is claimed to beat the state of the art, although Figure 2 shows that GDR [12] has a better average ranking.

Note that the red bars in the diagram may give the impression that there are no relevant differences at all; however, the diagram was built from only 15 datasets (hence 15 rankings), which limits the power of the statistical test, so using more datasets could refine these groups and provide better understanding. Finally, it is worth noting that the CD diagram can incorporate new datasets, whereas the mean score over them would be overly affected if some cases were much easier or much harder than the others.

State of the art. It is worth mentioning that more recent methods have claimed better results on the same benchmarks used in this work. For instance, at least five papers [27, 18, 34, 33, 15] claim a mean ROC-AUC above 98% on the Papers with Code leaderboard for anomaly segmentation ("pixel-wise AD") on MVTec-AD [8]. Unfortunately, we did not have time to fully verify the experimental conditions in these sources, but this serves as proxy evidence to take the present results with care.

## References

[1] Academic torrents, download page for ImageNet22k version "fall 2011". https://academictorrents.com/details/564a77c1e1119da199ff32622a1609431b9f1c47

[2] Documentation page of gpustat. https://github.com/wookayin/gpustat

[3] Documentation page of the Python library psutil. https://psutil.readthedocs.io/en/latest/

[4] GitHub repository liznerski/fcdd. https://github.com/liznerski/fcdd. Downloaded: 2021-12-20. Forked on commit 7af3d8eadabee81ab8f7db5d8a7f8389ef090213.

[5] ImageNet official page, challenge ILSVRC 2012. https://image-net.org/challenges/LSVRC/2012/2012-downloads.php

[6] NVIDIA's TITAN X product page. https://www.nvidia.com/en-us/geforce/products/10series/titan-x-pascal/. Accessed: 2022-01-24.

[7] NVIDIA's TITAN Xp product page. https://www.nvidia.com/en-us/titan/titan-xp/. Accessed: 2022-01-24.

[8] Papers with Code leaderboard on Anomaly Detection on MVTec AD. https://paperswithcode.com/sota/anomaly-detection-on-mvtec-ad?metric=Segmentation%20AUROC

[9] P. Bergmann, K. Batzner, M. Fauser, D. Sattlegger, and C. Steger. The MVTec Anomaly Detection Dataset: A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection. International Journal of Computer Vision, 129(4):1038-1059, Apr. 2021.

[10] P. Bergmann, M. Fauser, D. Sattlegger, and C. Steger. MVTec AD - A Comprehensive Real-World Dataset for Unsupervised Anomaly Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.

[11] G. Cohen, S. Afshar, J. Tapson, and A. van Schaik. EMNIST: Extending MNIST to handwritten letters. In 2017 International Joint Conference on Neural Networks (IJCNN), pages 2921-2926, 2017.

[12] D. Dehaene, O. Frigo, S. Combrexelle, and P. Eline. Iterative energy-based projection on a normal data manifold for anomaly localization. 2020.

[13] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255, 2009.

[14] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The Pascal Visual Object Classes (VOC) Challenge. International Journal of Computer Vision, 88(2):303-338, June 2010.

[15] D. Gudovskiy, S. Ishizaka, and K. Kozuka. CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows. arXiv:2107.12571 [cs], July 2021.

[16] D. Hendrycks, M. Mazeika, and T. Dietterich. Deep Anomaly Detection with Outlier Exposure. In International Conference on Learning Representations, 2019.

[17] H. Ismail Fawaz, G. Forestier, J. Weber, L. Idoumghar, and P.-A. Muller. Deep learning for time series classification: a review. Data Mining and Knowledge Discovery, 33(4):917-963, 2019.

[18] J.-H. Kim, D.-H. Kim, S. Yi, and T. Lee. Semi-orthogonal Embedding for Efficient Unsupervised Anomaly Segmentation. arXiv:2105.14737 [cs], May 2021.

[19] A. Krizhevsky. Learning Multiple Layers of Features from Tiny Images. Technical report, 2009.

[20] S. Lapuschkin, A. Binder, G. Montavon, K.-R. Müller, and W. Samek. Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016.

[21] Z. Li, N. Li, K. Jiang, Z. Ma, X. Wei, X. Hong, and Y. Gong. Superpixel Masking and Inpainting for Self-Supervised Anomaly Detection. 2020.

[22] W. Liu, R. Li, M. Zheng, S. Karanam, Z. Wu, B. Bhanu, R. J. Radke, and O. Camps. Towards Visually Explaining Variational Autoencoders. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.

[23] P. Liznerski, L. Ruff, R. A. Vandermeulen, B. J. Franks, M. Kloft, and K.-R. Müller. Explainable Deep One-Class Classification. In International Conference on Learning Representations, 2021.

[24] W. Luo, Y. Li, R. Urtasun, and R. Zemel. Understanding the Effective Receptive Field in Deep Convolutional Neural Networks. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, pages 4905-4913, Red Hook, NY, USA, 2016. Curran Associates Inc.

[25] P. Napoletano, F. Piccoli, and R. Schettini. Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity. Sensors (Basel, Switzerland), 18(1):209, Jan. 2018.

[26] O. Pfungst. Clever Hans (The Horse of Mr. Von Osten): A Contribution to Experimental Animal and Human Psychology. Oct. 2010.

[27] K. Roth, L. Pemula, J. Zepeda, B. Schölkopf, T. Brox, and P. Gehler. Towards Total Recall in Industrial Anomaly Detection. arXiv:2106.08265 [cs], June 2021.

[28] L. Ruff, R. Vandermeulen, N. Goernitz, L. Deecke, S. A. Siddiqui, A. Binder, E. Müller, and M. Kloft. Deep one-class classification. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 4393-4402. PMLR, 10-15 Jul 2018.

[29] L. Ruff, R. A. Vandermeulen, B. J. Franks, K.-R. Müller, and M. Kloft. Rethinking Assumptions in Deep Anomaly Detection. arXiv:2006.00339 [cs, stat], July 2021.

[30] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs. Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery. In Information Processing in Medical Imaging, pages 146-157, Cham, 2017. Springer International Publishing.

[31] K. A. Spackman. Signal Detection Theory: Valuable Tools for Evaluating Inductive Learning. In Proceedings of the Sixth International Workshop on Machine Learning, pages 160-163, Ithaca, New York, USA, 1989. Morgan Kaufmann Publishers Inc.

[32] H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv:1708.07747, 2017.

[33] J. Yu, Y. Zheng, X. Wang, W. Li, Y. Wu, R. Zhao, and L. Wu. FastFlow: Unsupervised Anomaly Detection and Localization via 2D Normalizing Flows. arXiv:2111.07677 [cs], Nov. 2021.

[34] Y. Zheng, X. Wang, R. Deng, T. Bao, R. Zhao, and L. Wu. Focus Your Distribution: Coarse-to-Fine Non-Contrastive Learning for Anomaly Detection and Localization. arXiv:2110.04538 [cs], Oct. 2021.

[35] K. Zhou, Y. Xiao, J. Yang, J. Cheng, W. Liu, W. Luo, Z. Gu, J. Liu, and S. Gao. Encoding Structure-Texture Relation with P-Net for Anomaly Detection in Retinal Images. In Computer Vision - ECCV 2020, pages 360-377, Cham, 2020. Springer International Publishing.

|
| 294 |
+
|
| 295 |
+
322 A Supplementary details
<table><tr><td>Normal Class</td><td>5</td><td>4</td><td>3</td><td>2</td><td>1</td></tr><tr><td>Bottle</td><td>GDR*</td><td>AE-SS*</td><td>FCDD (SS)</td><td>FCDD (U)</td><td>P-NET*</td></tr><tr><td/><td>92.0</td><td>93.0</td><td>96.7</td><td>97.0</td><td>99.0</td></tr><tr><td>Cable</td><td>VEVAE*</td><td>FCDD (U)</td><td>GDR*</td><td>SMAI*</td><td>FCDD (SS)</td></tr><tr><td/><td>90.0</td><td>90.5</td><td>91.0</td><td>92.0</td><td>94.1</td></tr><tr><td>Capsule</td><td>GDR*</td><td>FCDD (SS)</td><td>SMAI*</td><td>FCDD (U)</td><td>AE-SS*</td></tr><tr><td/><td>92.0</td><td>92.9</td><td>93.0</td><td>93.0</td><td>94.0</td></tr><tr><td>Carpet</td><td>VEVAE*</td><td>AE-SS*</td><td>SMAI*</td><td>FCDD (U)</td><td>FCDD (SS)</td></tr><tr><td/><td>78.0</td><td>87.0</td><td>88.0</td><td>96.4</td><td>98.7</td></tr><tr><td>Grid</td><td>AE-SS*</td><td>FCDD (SS)</td><td>GDR*</td><td>SMAI*</td><td>P-NET*</td></tr><tr><td/><td>94.0</td><td>95.2</td><td>96.0</td><td>97.0</td><td>98.0</td></tr><tr><td>Hazelnut</td><td>P-NET*</td><td>SMAI*</td><td>FCDD (SS)</td><td>GDR*</td><td>VEVAE*</td></tr><tr><td/><td>97.0</td><td>97.0</td><td>97.1</td><td>98.0</td><td>98.0</td></tr><tr><td>Leather</td><td>P-NET*</td><td>GDR*</td><td>VEVAE*</td><td>FCDD (U)</td><td>FCDD (SS)</td></tr><tr><td/><td>89.0</td><td>93.0</td><td>95.0</td><td>98.4</td><td>98.7</td></tr><tr><td>Metal nut</td><td>GDR*</td><td>SMAI*</td><td>FCDD (U)</td><td>VEVAE*</td><td>FCDD (SS)</td></tr><tr><td/><td>91.0</td><td>92.0</td><td>94.0</td><td>94.0</td><td>97.5</td></tr><tr><td>Pill</td><td>AE-SS*</td><td>P-NET*</td><td>SMAI*</td><td>GDR*</td><td>FCDD (SS)</td></tr><tr><td/><td>91.0</td><td>91.0</td><td>92.0</td><td>93.0</td><td>96.8</td></tr><tr><td>Screw</td><td>AE-L2*</td><td>AE-SS*</td><td>SMAI*</td><td>VEVAE*</td><td>P-NET*</td></tr><tr><td/><td>96.0</td><td>96.0</td><td>96.0</td><td>97.0</td><td>100.0</td></tr><tr><td>Tile</td><td>VEVAE*</td><td>FCDD (U)</td><td>CNNFD*</td><td>P-NET*</td><td>FCDD (SS)</td></tr><tr><td/><td>80.0</td><td>91.4</td><td>93.0</td><td>97.0</td><td>98.5</td></tr><tr><td>Toothbrush</td><td>VEVAE*</td><td>FCDD (SS)</td><td>SMAI*</td><td>GDR*</td><td>P-NET*</td></tr><tr><td/><td>94.0</td><td>94.7</td><td>96.0</td><td>99.0</td><td>99.0</td></tr><tr><td>Transistor</td><td>FCDD (U)</td><td>AE-SS*</td><td>FCDD (SS)</td><td>GDR*</td><td>VEVAE*</td></tr><tr><td/><td>87.6</td><td>90.0</td><td>91.3</td><td>92.0</td><td>93.0</td></tr><tr><td>Wood</td><td>GDR*</td><td>FCDD (U)</td><td>CNNFD*</td><td>FCDD (SS)</td><td>P-NET*</td></tr><tr><td/><td>84.0</td><td>86.9</td><td>91.0</td><td>92.0</td><td>98.0</td></tr><tr><td>Zipper</td><td>AE-SS*</td><td>P-NET*</td><td>SMAI*</td><td>FCDD (U)</td><td>FCDD (SS)</td></tr><tr><td/><td>88.0</td><td>90.0</td><td>90.0</td><td>92.2</td><td>98.1</td></tr></table>
Table 4: Method rankings on MVTec-AD based on pixel-wise ROC-AUC. Using Table 2 from [23], we compare the methods for each normal class individually, sorting the performances by pixel-wise ROC-AUC. The numbers in the column names indicate the ranking (from 1 to 10); only the first 5 are displayed for the sake of brevity. The FCDD "unsupervised" and "semi-supervised" versions are respectively indicated by "(U)" and "(SS)", and their original values have been replaced by our own experiments' results.
Figure 5: Heatmaps from the experiment on PASCAL-VOC where the Clever Hans effect can be observed.
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SWNM52GXh0Y/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,271 @@
§ [REPRODUCIBILITY REPORT] EXPLAINABLE DEEP ONE-CLASS CLASSIFICATION

Anonymous Author(s)

Affiliation

Address

email
§ REPRODUCIBILITY SUMMARY

§ SCOPE OF REPRODUCIBILITY

Liznerski et al. [23] proposed Fully Convolutional Data Description (FCDD), an explainable version of the Hypersphere Classifier (HSC), to directly address image anomaly detection (AD) and pixel-wise AD without any post-hoc explainer methods. The authors claim that FCDD achieves results comparable with the state of the art in sample-wise AD on Fashion-MNIST and CIFAR-10 and exceeds the state of the art on the pixel-wise task on MVTec-AD. They also give evidence of a clear improvement from using a few (1 to 8) real anomalous images in MVTec-AD for supervision at the pixel level. Finally, a qualitative study with horse images on PASCAL-VOC shows that FCDD can intrinsically reveal spurious model decisions by providing built-in anomaly score heatmaps.
§ METHODOLOGY

We reproduced the quantitative results in the main text of [23] except for the performance on ImageNet: sample-wise AD on Fashion-MNIST and CIFAR-10, and pixel-wise AD on MVTec-AD. We used the authors' code with NVIDIA TITAN X and TITAN Xp GPUs. A more detailed look into FCDD's performance variability is presented, and a Critical Difference (CD) diagram is proposed as a more appropriate tool to compare methods over the datasets in MVTec-AD. Finally, we study the generalization power of the unsupervised FCDD during training.
§ RESULTS

All per-class performances (in terms of Area Under the ROC Curve (ROC-AUC) [31]) announced in the paper were replicated with an absolute difference of at most 2% and below 1% on average, confirming the paper's claims. We report the experiments' GPU and CPU memory requirements and their average training time. Our analyses beyond the paper's scope show that the claim of "exceeding the state of the art" should be considered with care, and evidence is given to argue that the pixel-wise unsupervised FCDD could narrow the gap with its semi-supervised version.
§ WHAT WAS EASY

The paper was clear and explicitly gave many training and hyperparameter details, which were conveniently set as defaults in the authors' scripts. The code was well organized and easy to interact with.
§ WHAT WAS DIFFICULT

Using ImageNet proved challenging due to its size and the need to set it up manually; we could not complete the experiments on this dataset.
§ COMMUNICATION WITH ORIGINAL AUTHORS

We reached the main author by e-mail to ask for help with ImageNet and discuss a few practical details. He promptly replied with useful information.
§ 1 INTRODUCTION

Liznerski et al. [23] proposed a deep-learning-based AD method capable of pixel-wise AD (also known as "anomaly segmentation") by directly generating anomaly score maps with a loss function based on the Hypersphere Classifier (HSC) [29], a successor of Deep Support Vector Data Description (DSVDD) [28], using a fully convolutional neural network - hence the name Fully Convolutional Data Description (FCDD).
By only using convolutions, down-samplings, and batch normalization (no attention mechanism, nor fully connected layers), an image of dimensions $C \times H \times W$ (respectively, the number of channels, the height, and the width) is transformed into a latent representation ${C}^{\prime } \times U \times V$, where $U < H$ and $V < W$. This low-resolution representation is a $U \times V$ grid of ${C}^{\prime }$-dimensional vectors, from which the pseudo-Huber loss function yields a $U \times V$ heatmap of anomaly scores.
Each of these ${C}^{\prime }$-dimensional vectors contains information from a corresponding receptive field within the full-resolution $\left( {H \times W}\right)$ image. Evidence [24] suggests that the effective influence of the input pixels decays like a Gaussian as their position moves away from the center of the receptive field. FCDD uses this principle to up-sample the obtained heatmap back to the original resolution $\left( {H \times W}\right)$, therefore directly obtaining a visual, explainable anomaly score map.
Finally, FCDD is also adapted to perform anomaly detection at the sample (image) level by taking the average score over the low-resolution anomaly heatmap.
Vocabulary: Sample vs. Pixel-wise Anomaly Detection The authors refer to anomaly detection (AD) at the image level (e.g. given an unseen image, a model trained on horse images should infer whether a horse is present or, otherwise, the image is anomalous) simply as "detection", while anomaly segmentation/localization (i.e. finding regions, sets of pixels, with anomalous characteristics) is referred to as "pixel-wise AD". Analogously, for the sake of clarity, we refer to the former as "sample-wise AD". Both setups are further explained in Section 3.3.
§ 2 SCOPE OF REPRODUCIBILITY

We aimed to reproduce the results announced in [23] to verify the effectiveness of the proposed method both in sample-wise and pixel-wise anomaly detection. Specifically, we tested the following claims from the original paper:
1. Claim 1: FCDD is comparable with state-of-the-art methods in terms of ROC-AUC in sample-wise anomaly detection on standard benchmarks (namely, Fashion-MNIST, CIFAR-10, and ImageNet);

2. Claim 2: FCDD exceeds the state of the art on MVTec-AD in anomaly segmentation in the unsupervised setting in terms of pixel-wise ROC-AUC;

3. Claim 3: FCDD can incorporate real anomalies: by including only a few annotated images $\left( { \approx 5}\right)$ containing real, segmented anomalies, the performance consistently improves;

4. Claim 4: FCDD can reveal spurious model decisions without any extra explanation method on top of it.

The experiments supporting Claim 1 on Fashion-MNIST and CIFAR-10 have been replicated, as well as all the tests on MVTec-AD, supporting Claims 2 and 3, and the qualitative analysis on PASCAL-VOC, supporting Claim 4. We provide details about the computational requirements (CPU memory, GPU memory, and training time) necessary to run these experiments.
Beyond the paper Other analyses are proposed on the results obtained from the experiments corresponding to Claims 2 and 3; they further confirm Claim 3 but show that Claim 2 should be taken with caution. We also investigate the evolution of the test performance during the optimization in MVTec-AD's unsupervised setting (see Section 3.3), revealing an opportunity for improvement that could narrow the gap with the semi-supervised setting.
§ 3 METHODOLOGY

We used the authors' code (PyTorch 1.9.1 and Torchvision 0.10.1), publicly available on GitHub [4], to reproduce the two quantitative experiments presented in the main text. It required no external documentation, and the whole reproduction took roughly one person-month of work.
§ 3.1 DATASETS

The proposed method was originally tested [23] on Fashion-MNIST [32], CIFAR-10 [19], ImageNet1k [13], MVTec-AD [9], and PASCAL VOC [14]. In addition, EMNIST [11], CIFAR-100 [19], and ImageNet21k${}^{1}$ (version "fall 2011") were used as Outlier Exposure (OE) [16] datasets. All the datasets except for ImageNet were publicly available and automatically downloaded.
ImageNet We requested access to and downloaded ImageNet1k (version "ILSVRC 2012") from its official website [5]. ImageNet21k (a.k.a. ImageNet22k) was downloaded from academictorrents.com [1] because the version used in the original paper was not available on the official website anymore.
§ 3.2 MODELS

We used the same neural networks as the original paper, which depend on the dataset:
* Fashion-MNIST: three convolutional layers separated by two max-pool layers, where the first convolution is followed by a batch normalization and a leaky ReLU;

* CIFAR-10: two convolutions preceded by three blocks, each composed of a convolution, a batch normalization, a leaky ReLU, and a max-pool layer;

* MVTec-AD and PASCAL-VOC (Clever Hans): the first 10 (frozen) layers from VGG11 pre-trained on ImageNet, followed by two convolutional layers.
§ 3.3 EXPERIMENTAL SETUP

The paper presents two quantitative experiments: sample-wise (section "4.1 Standard Anomaly Detection Benchmarks" in [23]) and pixel-wise AD (section "4.2 Explaining Defects in Manufacturing" in [23]), as well as a qualitative experiment.
We followed the same experimental procedure used in [23]: each experiment - i.e. given a dataset, its OE when applicable, a normal class, and all hyperparameters - was repeated five times, and the reported values are the average over them unless stated otherwise (e.g. Figure 1).
Sample-wise Standard one-vs-rest setup, where one class of the given dataset is chosen as normal and all the others are used as anomalous. Each image has a binary ground truth signal - logically derived from its label - and the model assigns an anomaly score to it (hence "sample-wise"). The metric used is the ROC-AUC on the test split, and every class is evaluated as normal. The datasets used in this experiment and their respective OE datasets are summarized in Table 1, and its results support Claim 1.
Table 1: Sample-wise experiments: tested datasets and their respective OE sources. From the dataset in the column "One-vs-rest dataset", one class is used as normal at training and test time while all others are considered anomalies at test time only. The column "OE dataset" is the dataset used as a source of anomalies at training time. "Experiment reference" will further be used to reference these configurations.

One-vs-rest dataset | OE dataset  | Experiment reference
Fashion-MNIST       | EMNIST      | F-MNIST (OE-EMNIST)
Fashion-MNIST       | CIFAR-100   | F-MNIST (OE-CIFAR-100)
CIFAR-10            | CIFAR-100   | CIFAR-10
ImageNet1k          | ImageNet21k | ImageNet
Pixel-wise Anomalies are defined at the pixel level (a binary segmentation mask where "1" means "anomalous" and "0" means "normal"), and an image is considered anomalous if it contains anomalous pixels, even though normal pixels are also present in it. In each experiment a single class in MVTec-AD is fixed, its normal images are used both for training and test, and anomalous ones are used for test only. As for the anomalous samples at training time, two settings were tested:

${}^{1}$ Also known as "ImageNet22k" or "full ImageNet".
Table 2: Memory requirements and training time using NVIDIA TITAN X and TITAN Xp GPUs (one at a time, indistinctly).

Experiment               | CPU memory (GB) | GPU memory (GB) | Training duration
F-MNIST (OE-CIFAR-100)   | 2               | 1.3             | 12 min
CIFAR-10                 | 3               | 1.9             | 34 min
MVTec-AD unsupervised    | 38              | 5.5             | 1 h 13 min
MVTec-AD semi-supervised | 33              | 5.5             | 41 min
PASCAL VOC (Clever Hans) | 5               | 11.8            | 21 min
* Unsupervised: synthetic random anomalies are generated using a "confetti noise" (colored blobs added to the image; a sketch follows this list);
* Semi-supervised: one image per anomaly group (1 to 8 types, depending on the class) is removed from the test set and used for training.
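The following is a rough sketch of the confetti-noise idea for the unsupervised setting; blob counts, sizes, and colors here are illustrative guesses rather than the authors' exact settings:

```python
import torch

def confetti_noise(img: torch.Tensor, n_blobs: int = 8, max_size: int = 8):
    """Paste small random colored rectangles ("confetti") onto a (3, H, W) image.

    Assumes the image is larger than the blobs. Returns the corrupted image and
    a fine binary mask; note that the unsupervised setting labels the whole
    image as anomalous and does not exploit this fine mask (see Section 5.4).
    """
    img = img.clone()
    _, h, w = img.shape
    mask = torch.zeros(h, w)
    for _ in range(n_blobs):
        bh = int(torch.randint(2, max_size + 1, (1,)))
        bw = int(torch.randint(2, max_size + 1, (1,)))
        y = int(torch.randint(0, h - bh, (1,)))
        x = int(torch.randint(0, w - bw, (1,)))
        img[:, y:y + bh, x:x + bw] = torch.rand(3, 1, 1)  # random blob color
        mask[y:y + bh, x:x + bw] = 1.0
    return img, mask
```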
Neither of these settings requires an OE dataset because the anomalous samples are either synthetic or real anomalies on images of the nominal class. The performance metric is the ROC-AUC of the anomaly scores at the pixel level. MVTec-AD is the only dataset used in this case, and the results of these experiments support Claims 2 and 3.
Clever Hans (PASCAL VOC) About one fifth of the images in the class "horse" in PASCAL VOC [14] contain a watermark [20], which may cause models to learn spurious features. This is known as the "Clever Hans" effect, a reference to Hans, a horse claimed to be capable of performing arithmetic operations while it, in fact, read its master's reactions [26] - analogously, a model making decisions based on the watermarks would be "cheating" the real problem. In this experiment, a model is trained using all the classes in PASCAL VOC as normal, only the class "horse" as anomalous (a swapped one-vs-rest setting), and ImageNet1k as the OE dataset. The goal is to qualitatively observe whether one-class classifiers are also vulnerable to the Clever Hans effect and to show that FCDD transparently reveals such weaknesses as it intrinsically provides explanations (score heatmaps). This experiment has no quantitative metric, but it supports Claim 4.
§ 3.4 HYPERPARAMETERS

Running the authors' code with the default parameters, as described in the original paper, did not require any hyperparameter tuning to achieve the reported results (differences detailed in Section 4) and confirm the authors' claims. We underline that the results on MVTec-AD were obtained using the same hyperparameters in both settings, unsupervised and semi-supervised.
§ 3.5 COMPUTATIONAL REQUIREMENTS

We used NVIDIA "TITAN X" [6] and "TITAN Xp" [7] GPUs to run our experiments. The two GPUs were used indistinctly as they have similar characteristics, and only one GPU was used at a time. The GPU and CPU memory requirements and the average training duration of our experiments are listed in Table 2.
CPU memory was recorded with an in-house Python script using the library psutil [3] at 1 Hz. GPU memory was recorded using gpustat [2] at 1 Hz. Both memory values are the maximum recorded during the experiments, including training and inference time. The training duration is an average over all the experiments.
On F-MNIST (OE-CIFAR-100) and CIFAR-10, the training duration did not vary by more than two minutes; on MVTec-AD unsupervised it ranged from 15 minutes up to one hour, and on MVTec-AD semi-supervised it ranged from 22 minutes up to one hour and 56 minutes, depending on the class.
§ 3.6 BEYOND THE PAPER

We propose a more detailed visualization of the distribution of performances (due to random effects, with all hyperparameters held constant) of the two settings (unsupervised and semi-supervised) evaluated on MVTec-AD, and a critical difference diagram as an alternative evaluation of performance across several datasets (the individual classes in MVTec-AD).
The network architectures used for the experiments on MVTec-AD were pre-trained on ImageNet and most of the weights are kept frozen, raising the question of how much of FCDD's performance is due to the pre-training. We took snapshots of the unsupervised model's weights in order to visualize the evolution of the performance on the test set during training.
§ 4 RESULTS

§ 4.1 REPRODUCING THE ORIGINAL PAPER

We reproduced the unsupervised and semi-supervised settings for MVTec-AD, and all the experiments in Table 1 except for ImageNet - due to resource limitations, this experiment could not be completed in time.
The results of F-MNIST (OE-EMNIST) were not detailed in the original paper, but its class-mean ROC-AUC is claimed to be about 3% below that of F-MNIST (OE-CIFAR-100); we observed a difference of 2.7%.
We summarize the differences between our results and those from the original paper [23] in Table 3. The error margins presented are in absolute differences and refer to the ROC-AUC (which is expressed in %) from each individual class's experiment (recall: the mean over five iterations).
Table 3: Differences between the original paper results and ours. All the values are absolute differences of ROC-AUC, expressed in % (it is not a relative error). The columns under "Diff. per class" show statistics of the absolute difference of each individual class performance, while the column "Mean ROC-AUC diff." corresponds to the difference measured after the mean is taken over all the classes.

Experiment               | ROC-AUC type | N. classes | Diff. per class (Max) | Diff. per class (Mean) | Mean ROC-AUC diff.
F-MNIST (OE-CIFAR-100)   | sample-wise  | 10         | 1%                    | 0.6%                   | 0.01%
CIFAR-10                 | sample-wise  | 10         | 0.5%                  | 0.3%                   | 0.4%
MVTec-AD unsupervised    | pixel-wise   | 15         | 2%                    | 0.6%                   | 0.2%
MVTec-AD semi-supervised | pixel-wise   | 15         | 2%                    | 0.7%                   | 0.4%
Clever Hans (PASCAL VOC) The experiment on PASCAL VOC ("Clever Hans effect") has been manually verified, and similar (flawed) explanations on horse images have been observed. Two examples are shown in Figure 5 in Appendix A.
§ 4.2 BEYOND THE PAPER
|
| 204 |
+
|
| 205 |
+
Figure 1 further details the performance comparison between the unsupervised and semi-supervised settings on MVTec-AD on each class.
Figure 2 compares the methods in Table 2 of [23] with a CD diagram using the Wilcoxon-Holm procedure implemented by [17]. We replaced the results for FCDD from [23] by our own and copied the others from the literature [10, 30, 25, 22, 21, 12, 35]. For each class, the methods are sorted by their respective ROC-AUC and assigned a ranking from 1 to 10 according to their position; then, every pair of methods is compared with the Wilcoxon signed-rank test at the confidence level $\alpha = 5\%$. The CD diagram shows the average ranks of the methods on the horizontal scale, and the red bars group methods that are not pairwise significantly different according to the Wilcoxon signed-rank test. The ranks from one to five (six to ten omitted for the sake of brevity) are shown in Table 4 in Appendix A.
Figure 3 shows the test pixel-wise ROC-AUC scores during the optimization of the model used for MVTec-AD in the unsupervised setting. Due to time and resource constraints, we ran this experiment on 7 out of the 15 classes in MVTec-AD, each of them being evaluated 6 times (a few runs could not finish in time).
Figure 1: Our experiments on MVTec-AD: unsupervised and semi-supervised settings compared. We display a box plot of the performances (in terms of pixel-wise ROC-AUC on the test set) achieved in different runs along with their individual performances scattered on the $x$-axis.
Figure 2: MVTec-AD Critical Difference diagram. Using Table 2 from [23] with the results for FCDD replaced by our own, we build a critical difference diagram using the Wilcoxon-Holm method. Values on the scale are average rankings, and each red line groups a set of methods that are not significantly different in terms of ranking at a confidence level of $\alpha = 5\%$. The per-class ROC-AUC values used for FCDD are from our own experiments, and those marked with "*" were taken from the literature. References: scores for Self-Similarity (AE-SS) [10], L2 Autoencoder (AE-L2) [10], AnoGAN [30], and CNN Feature Dictionaries (CNNFD) [25] were taken from Table 3 in [10]. Other scores were taken from their respective papers: Visually Explained Variational Autoencoder (VEVAE) from Table 2 in [22], Superpixel Masking and Inpainting (SMAI) from Table 2 in [21], Gradient Descent Reconstruction with VAEs (GDR) from Table 1 in [12], and Encoding Structure-Texture Relation with P-Net for AD (P-NET) from Table 6 in [35].
§ 5 DISCUSSION

Our reproduction of the experiments closely agrees with the quantitative results published in the original paper. The proposed setup is adequate to test whether the claims announced in the paper hold, and the results corroborate them. We obtained results consistently close to the published ones without any further tuning of the parameters or modification of the authors' code.
Performances were recorded at the following epochs (out of 1 to 200): 1, 2, 3, 5, 8, 11, 14, 17, 20, 25, 40, 45, 50, 60, 70, 80, 90, 100, 130, 170, 200.

Figure 3: MVTec-AD test performance history.
§ 5.1 WHAT WAS EASY

The paper is clear, and it was easy to grasp the core ideas presented in the main text. It also provided enough details about the experimental setup, including training hyperparameters and network architectures in the appendices.
The code was overall well organized, and the instructions to use it were direct and easy to follow. Conveniently, the experiments were well encapsulated in scripts, and the default parameters matched those described in the text. In particular, the experiments are self-documenting (i.e. they keep a record of the configurations, logs, results, etc.) and flexible, allowing the user to change (many) parameters without modifying the code.
§ 5.2 WHAT WAS DIFFICULT

ImageNet Using ImageNet was the hardest part. At first, it took about a month to get access to it on the official website. Then we had to find an alternative source [1] for the correct version of ImageNet21k ("fall 2011") because it was not available on the official website anymore. Basic operations (e.g. decompressing data, moving files) proved challenging due to its size (1.2 TB compressed), and the instructions to manually prepare this dataset could be more explicit - we wasted several hours of work because of a few mistakes we made.
We could not run the experiments on that dataset with the same hyperparameters because the GPUs at our disposal did not have enough memory (16 GB). We note that solutions such as using multiple GPUs or decreasing the batch size were possible but could not be tried in time.
Minor code issues There were a few minor bugs, which we corrected without considerable difficulty. They were mostly related to the script add_exp_to_base.py, which automatically configures and launches baseline experiments based on a previously executed one. Finally, the code structure was slightly overcomplicated; e.g. the levels of abstraction/indirection, especially inheritance, could be simpler. We stress, however, that this negative point is minor and did not cause any critical issues.
§ 5.3 COMMUNICATION WITH ORIGINAL AUTHORS

We exchanged e-mails with the main author, mostly to ask for help with getting access to the right versions of ImageNet and executing the experiments on it. He replied promptly, his answers were certainly helpful, and we would like to express our sincere appreciation.
§ 5.4 BEYOND THE PAPER

MVTec-AD: supervision effect The visualization proposed in Figure 1 further demonstrates that, with only a few images of real anomalies added to the training, the model's performance consistently improves. Only 4 of the 15 classes show overlapping performance distributions; all others show a clear shift.
However, it must be mentioned that the unsupervised setting ignores the local supervision that its synthetic anomalies could provide, making its training sub-optimal. Figure 4 illustrates this with training images and their respective masks from the class "Pill": in 4a we see that the semi-supervised setting provides pixel-level annotations on the anomalies (the ground truth mask), while in 4b we see that the entire image is considered anomalous in the unsupervised setting. This is a source of sub-optimality because, in the anomalous images, most pixels are, in fact, normal. In other words, similar image patches, free of synthetic anomalies, can be found both in normal and anomalous images.
Ultimately, this is a clear opportunity for improvement that could bring the unsupervised setting's performance closer to the semi-supervised setting's.
Test performance history Figure 3 reveals another issue with the method. Take, for instance, the purple and blue lines in the row "Carpet": they reach a maximum point at the beginning of the gradient descent, then converge to a point with less and less generalization power. These performance histories are evaluated on the test set, which is assumed to be unavailable at training time, so this information could not be used to stop the training or reject it. However, this reveals another opportunity for improvement because the training setting does not push the model to generalize well enough. Note, in Figure 4b, that the confetti noise also "stains" the background, creating synthetic anomalies "out of context", so using more realistic ones could be a solution.
Figure 4: MVTec-AD training images: unsupervised vs. semi-supervised.
We also see that some (good) performances are likely due to the pre-training (on ImageNet). For instance, the class "Hazelnut" often reaches (or almost reaches) its maximum test performance within a few epochs.
Critical Difference (CD) diagram We propose a CD diagram in Figure 2 as a more appropriate methodology to aggregate the results of all the classes. A potential user choosing an AD method for a new dataset is looking for the method(s) most likely to be the best on their own problem. Therefore, the specific ROC-AUC scores on standard datasets have little importance, but the methods' relative performances are essential. In other words, what matters for a potential user is how the comparison of methods generalizes beyond specific datasets.
The experiments on MVTec-AD do not interact from one class (set as nominal) to another, making them essentially independent datasets; therefore, taking the average score over the classes may mislead the analysis. For instance, in [23], FCDD unsupervised is claimed to beat the state of the art, although Figure 2 shows that GDR [12] has a better average ranking.
Note that the red bars in the diagram may give the impression that there is no relevant difference at all; however, it is important to observe that it was built considering only 15 datasets (therefore 15 rankings), which makes it hard for the statistical test to detect differences, so using more datasets could refine these groups and provide a better understanding. Finally, it is worth noting that the CD diagram is capable of incorporating new datasets, while the mean score over them would be overly affected if some cases were much easier or much harder than the others.
State of the art It is worth mentioning that more recent methods have claimed better results on the same benchmarks used in this work. For instance, at least 5 papers [27, 18, 34, 33, 15] claim a mean ROC-AUC above 98% on the Papers with Code leaderboard for anomaly segmentation ("pixel-wise AD") on MVTec-AD [8]. Unfortunately, we did not have the time to fully verify the experimental conditions in the sources, but this serves as proxy evidence that these results should be considered with care.
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SY84JTG73CK/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,309 @@
# Replication study of "Privacy-preserving Collaborative Learning with Automatic Transformation Search"

Anonymous Author(s)

Affiliation

Address

email
## Reproducibility Summary

## Scope of Reproducibility

We evaluate the reproducibility of this paper, which proposes an automatic search algorithm to find privacy-preserving transformation policies in the setting of federated learning. To achieve this, we test all the main claims made by the authors by rerunning the experiments and reporting the reproduced results. We further extend their work to a new dataset.
## Methodology

We perform all experiments using the model architectures and hyperparameters proposed by the authors. We use the same datasets and extend their work to include one new dataset. A codebase was available, which enabled us to reproduce some of the results. However, we deliver a contribution by fully re-implementing the codebase in PyTorch Lightning to ensure all components are modular and experiments can be easily executed and extended, to the benefit of future research using the authors' method. All experiments are performed on Nvidia GTX 1080 GPUs.
## Results

Overall we find the same results as the authors: searched transformation policies can defend users in federated learning from reconstruction attacks. These transformations also have negligible impact on training efficiency and model accuracy. However, we do not observe the reported correlation between the authors' privacy-score and PSNR; we are in contact with the authors about this. We also find that the results differ greatly from image to image, with standard deviations in PSNR values of over 25% of the mean value. This means that for some specific images the method is not effective.
## What was easy

The paper was clearly written, and the general idea was easy to follow. There was a codebase available in PyTorch, and part of the experiments were reproducible using this code.
## What was difficult

The codebase was not clearly structured and had to be altered to produce results for most experiments reported in the paper. The re-implementation of the codebase was non-trivial due to otherwise undocumented details in the code having a large impact on outcomes.
## Communication with original authors

The authors were contacted on multiple issues regarding implementation details and notation in the paper. Most of these were resolved swiftly and constructively. On two issues we remain in contact with the authors at this time.
## 1 Introduction

Collaborative learning systems allow multiple users to jointly train a Deep Learning (DL) model. Each user has their own training data, which is used to calculate local gradients [18][21][12]. These local gradients are then shared among all users to update the parameters of the shared DL model, without sensitive data having to leave the user's device. The primary benefit of federated learning is its capacity to improve the generalization of the resulting model while maintaining privacy over the training data of individual users. This is especially important as confidentiality quickly becomes an essential quality of DL models [1]. Because of that, federated learning is used in applications from mobile networks [10] to autonomous driving [13] and health care [2].
However, this privacy benefit can be undone by reconstruction attacks as proposed by [6] [19] [20]. These attacks make it possible to reconstruct the original private training samples of users from the shared gradients of the federated learning system. This poses a considerable threat to the privacy of users of federated learning systems and the confidentiality of their data samples.
The paper subject to this reproducibility study proposes a novel approach to mitigate the threat from reconstruction attacks by augmenting the local training data of the user before calculating the gradients [5]. Furthermore, the authors develop an automatic search algorithm to find the optimal transformation policies to augment the data and propose two novel metrics, ${S}_{pri}$ and ${S}_{acc}$, to increase the efficiency of this search.
In this reproducibility report, we evaluate the main claims made by the authors of [5] by reproducing their experiments. Moreover, we assess the availability of hyperparameters and other information needed for reproducibility, as well as discuss the usability of the provided codebase. We also extend the experimental setup to a new dataset.
## 2 Scope of reproducibility

The main goal of the original paper is to develop an automatic search algorithm to find transformation policies that can defend privacy-sensitive training data against reconstruction attacks in a federated learning system. To achieve this, the authors devise two novel metrics, described in Section 3.2. The main claims made in the paper are the following:
- Claim 1: by augmenting training samples with carefully-selected transformation policies, reconstruction attacks become infeasible

- Claim 2: the proposed search algorithm can find good and general policies, i.e. policies that are able to defeat multiple variants of reconstruction attacks

- Claim 3: the found policies are highly transferable; good policies searched for one dataset are also suitable for other datasets

- Claim 4: the found policies have negligible impact on the training efficiency

- Claim 5: in general, a good policy is made up of transformations that distort the details of the training samples while maintaining the semantic information

- Claim 6: the five transformations that work best are horizontal shifting (9), brightness (9), brightness (6), contrast (7) and contrast (6) (the number inside the brackets represents the intensity of the applied transformation)

- Claim 7: ${S}_{pri}$ is a good measure of privacy; it is linearly correlated to the Peak Signal-to-Noise Ratio (PSNR) [9] with a Pearson coefficient [15] of 0.697
Each of these claims is supported by the results of one or more experiments in [5], represented in the tables and figures. In this reproducibility study, we rerun the experiments and reproduce the resulting tables and figures. In Section 5, we list which experiments support which claims. In Section 6, we discuss the reproducibility of each experiment and evaluate the validity of the claims.

Beyond reproducing the above claims from the original paper, we propose two extensions. Both of these extensions are based on the transferability of the searched policies as stated in Claim 3. We test the transferability of the policies on an additional dataset and evaluate whether the best performing transformations are the same on this dataset.
Extension 1: Using the policies searched on one dataset and applying them to a new dataset can make reconstruction attacks against this new dataset infeasible

Extension 2: Since good policies share the same general qualities, as claimed by Claim 5, the five best transformations from Claim 6 are the same when using a different dataset.

In Section 5, we show the results for these extensions, and in Section 6, we relate them to the claims, experiments, and results from the original paper.
## 3 Finding privacy-preserving transformation policies

The original paper proposes an automatic search algorithm for finding privacy-preserving transformation policies. To better understand this main contribution, we take an in-depth look at what a transformation policy is and how good policies are found within a reasonable time.
### 3.1 Transformation policies

Transformations or augmentations have been widely used to improve model performance and generalizability in DL. In [5], transformations from AutoAugment [3] are repurposed to protect sensitive training data from reconstruction attacks. The library contains 50 different transformations, including rotation, crop, shift, inversion, brightness, and contrast. A transformation policy is a combination of $k$ such transformations applied to the training samples. In [5], $k = 3$ is chosen and the policies are denoted by the indices of the transformations within the AutoAugment library.
Consistently applying the best policy to the data would risk a domain shift in the dataset. Therefore, the authors propose the hybrid strategy, where a policy is randomly selected from the candidate policies - this way, good privacy and accuracy are guaranteed [5].
### 3.2 Reducing the search-space

To find candidate policies, it is necessary to determine their effect on both privacy and accuracy: the transformations must be applied to training data, and a model must be trained. Because fully training a model is very expensive, the authors propose two metrics that serve as a proxy for the privacy preservation and accuracy of the fully trained model: the privacy-score $\left( {S}_{pri}\right)$ and the accuracy-score $\left( {S}_{acc}\right)$. A low ${S}_{pri}$ means the model has high privacy-preservation potential, whereas a high ${S}_{acc}$ means the model achieves good accuracy with the applied transformation policies. These metrics produce results on models that are trained with only 10% of the data for only 25% of the training iterations, reducing the search-space and making the policy search feasible in a reasonable time. Further details about the definitions of ${S}_{pri}$ and ${S}_{acc}$ can be found in Sections 4.2 and 4.3 of [5].
## 4 Experimental setup and code

To verify the claims made by the authors of [5], we reproduce their experiments. These experiments roughly fall into four categories: evaluating the effectiveness of the searched policies against reconstruction attacks, testing the transferability of the searched policies on different datasets and models, checking the impact on model efficiency, and studying the semantics behind the different transformations. Multiple models must be trained on augmented and un-augmented data for all these categories. For the attacks, the approach from [6] is applied. Section 5 provides a detailed description of the experiments and shows the results.
To reproduce the experiments performed by the authors, we used their existing codebase${}^{2}$, which is implemented in PyTorch [14]. We refactored parts of this code and re-implemented the rest in our own version written in PyTorch Lightning${}^{3}$, which leverages the interface advantages of the Lightning framework to make running experiments and logging results more intuitive. The main benefit of doing so is that more experiments can be tested with finer clarity and control of the setup. This refactoring is this study's main contribution, and the codebase is publicly available at https://anonymous.4open.science/r/MLRC2021-0454.
---

${}^{1}$ https://github.com/DeepVoltaire/AutoAugment

${}^{2}$ https://github.com/gaow0007/ATSPrivacy

${}^{3}$ https://github.com/PyTorchLightning/pytorch-lightning

---
### 4.1 Datasets

The experiments in [5] are performed on two datasets, CIFAR-100${}^{4}$ [11] and Fashion-MNIST${}^{5}$ [17]. CIFAR-100 contains 60,000 color images of size ${32} \times {32}$ from 100 classes. The test set is used as the validation set, consistent with the authors' codebase. The Fashion-MNIST dataset contains 70,000 grey-scale images of ${28} \times {28}$ resolution from 10 classes. Again the test set is used as the validation set. We run experiments on one additional dataset in our extensions, Tiny ImageNet-200${}^{6}$ [4], which contains 120,000 ${64} \times {64}$ RGB images of 200 different classes. Furthermore, a tiny version of the dataset is introduced in the original paper for policy-search purposes. This dataset version contains 10% of the original samples, using the same distribution. It is later used to train the models for the evaluation of ${S}_{pri}$ and ${S}_{acc}$ in the search algorithm.
### 4.2 Model descriptions

We use the following models:
- ResNet20-4, a variation of ResNet20 [8] that has four times the number of channels, also used in [6]. The total number of parameters is 4.4M.

- ConvNet [6], an 8-layer convolutional neural network with batch normalization and a ReLU layer after each convolution layer. For this model, the total number of parameters is 3.7M.
The original codebase uses the implementations of both models from the repository of [6]. Our models are re-implemented in PyTorch Lightning. Both models were compared with the models from the original codebase in terms of accuracy; they achieved comparable results.
### 4.3 Hyperparameters

For the policy search, we used ${C}_{\max } = {1500}$ and a maximum of 10 policies. The batch size was 128, and the number of transforms per policy was 3. For training, the batch size was also 128 and the number of epochs was 60 (see Section 4.4). To obtain a semi-trained network, we used a subset of 10% of the training dataset. The attack is performed on the image with index 0, and we reused the remaining setups according to the original paper, e.g. "inversed" (the default attack), except for Figure 4, where the default config was used with the maximum number of iterations changed to 2500. For further experiments, we followed the same conventions.
### 4.4 Computational requirements

We ran our experiments using an Nvidia GeForce GTX 1080 GPU. The policy search took approximately 10 hours. The training of one model took approximately 2 h 40 min using the original approach. However, training for 60 epochs achieves the same accuracy in 50 minutes; this is because there are plateau periods while the learning rate is not yet scheduled to drop. One attack with 2500 iterations took approximately 5 minutes, so measuring the correlation between ${S}_{pri}$ and PSNR took 8.5 hours (with policy search).
## 5 Experiments and results

### 5.1 Results reproducing original paper

Experiment 1 A reconstruction attack on 100 images from the CIFAR-100 validation set is performed with and without a searched transformation policy applied. We document the optimization process of the attack in terms of GradSim. The model used is ResNet20 trained on the tiny dataset for 50 epochs. The results of this experiment are shown in Figure 2, which shows a very similar result to the original paper. In addition to the original figure, we show the standard deviation over the 100 images, since GradSim can differ significantly from image to image. When taking the average of multiple runs, it can be seen that the privacy-aware transform does indeed make the GradSim convergence more difficult.
---

${}^{4}$ https://www.cs.toronto.edu/~kriz/cifar.html

${}^{5}$ https://github.com/zalandoresearch/fashion-mnist

${}^{6}$ http://cs231n.stanford.edu/tiny-imagenet-200.zip

${}^{7}$ https://github.com/JonasGeiping/invertinggradients

---

Figure 2: Optimization process of reconstruction attack with and without searched policy
Figure 4: Correlation between ${S}_{pri}$ and PSNR
Experiment 2 A visual comparison between reconstructed images with and without a searched transformation policy applied is performed for both ResNet20 and ConvNet on images from CIFAR-100 and Fashion-MNIST. The optimizer used in the attack is Adam+Cosine. The images, the resulting reconstructions, and their PSNR values are shown in the top half of Figure 5; the results from the original paper are shown in the bottom half. As can be seen, the images used and PSNR values reported are different. This is because it was too expensive to identify the exact same images, and PSNR values differ quite severely depending on the image used. However, for all 12 images, we observe a less pronounced visual effect of the transformation policy as well as a smaller gap in PSNR values between the reconstructions with and without the policies applied. This indicates that the effect shown in the original paper is not as severe for all images, although the images we selected may be particularly easy to reconstruct.
Figure 5: Visualization results for reconstruction attacks on different datasets and models, with associated PSNR values. Our results are above and the original results below.
Experiment 3 To gain further insight into the effectiveness of the different policies, we report the qualitative and quantitative results of Adam+Cosine attacks and model accuracy for the datasets and models in Figure 5. The results are calculated over 6 images, as performing the experiment is very expensive and the number was not stated in the paper. The policies considered and the results are listed in Table 1.
|
| 160 |
+
|
| 161 |
+
<table><tr><td>Policy</td><td>PSNR</td><td>PSNR (std)</td><td>Acc</td></tr><tr><td>None</td><td>12.15</td><td>2.06</td><td>78.11</td></tr><tr><td>Random</td><td>9.92</td><td>1.93</td><td>75.02</td></tr><tr><td>3-1-7</td><td>6.77</td><td>0.88</td><td>71.59</td></tr><tr><td>43-18-18</td><td>9.34</td><td>1.81</td><td>77.16</td></tr><tr><td>Hybrid</td><td>8.25</td><td>1.64</td><td>77.47</td></tr></table>
|
| 162 |
+
|
| 163 |
+
(a) CIFAR-100 + ResNet20
|
| 164 |
+
|
| 165 |
+
<table><tr><td>Policy</td><td>PSNR</td><td>PSNR (std)</td><td>Acc</td></tr><tr><td>None</td><td>11.44</td><td>2.93</td><td>72.97</td></tr><tr><td>Random</td><td>10.29</td><td>1.02</td><td>71.93</td></tr><tr><td>21-13-3</td><td>8.23</td><td>2.18</td><td>63.26</td></tr><tr><td>7-4-15</td><td>10.31</td><td>2.14</td><td>70.77</td></tr><tr><td>Hybrid</td><td>9.89</td><td>1.47</td><td>68.91</td></tr></table>
|
| 166 |
+
|
| 167 |
+
(b) CIFAR-100 + ConvNet
|
| 168 |
+
|
| 169 |
+
<table><tr><td>Policy</td><td>PSNR</td><td>PSNR (std)</td><td>Acc</td></tr><tr><td>None</td><td>9.81</td><td>4.41</td><td>95.19</td></tr><tr><td>Random</td><td>10.06</td><td>2.04</td><td>95.19</td></tr><tr><td>19-15-45</td><td>8.26</td><td>0.37</td><td>92.44</td></tr><tr><td>2-43-21</td><td>8.93</td><td>2.93</td><td>93.93</td></tr><tr><td>Hybrid</td><td>8.41</td><td>1.45</td><td>95.14</td></tr></table>
|
| 170 |
+
|
| 171 |
+
(c) FMINST + ResNet20
|
| 172 |
+
|
| 173 |
+
<table><tr><td>Policy</td><td>PSNR</td><td>PSNR (std)</td><td>Acc</td></tr><tr><td>None</td><td>9.52</td><td>3.27</td><td>94.61</td></tr><tr><td>Random</td><td>9.47</td><td>2.27</td><td>94.47</td></tr><tr><td>42-28-42</td><td>7.59</td><td>0.89</td><td>94.62</td></tr><tr><td>14-48-48</td><td>8.41</td><td>2.10</td><td>94.68</td></tr><tr><td>Hybrid</td><td>6.80</td><td>0.98</td><td>94.59</td></tr></table>
(d) FMNIST + ConvNet

Table 1: PSNR (dB) (mean and standard deviation over 6 images) and model accuracy (%) of different transformation configurations for each model and dataset. 19-1-18 is the random policy.

Experiment 4 The defensive qualities of the searched transformation policies are benchmarked against existing defenses from the literature [20] [16] under the Adam+Cosine attack. The results are shown in Table 6. Although the exact values differ slightly, the overall results are similar to the original paper, where all the existing defenses perform worse than the hybrid strategy.
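
For context, these baseline defenses perturb the shared gradients themselves rather than the inputs. A minimal sketch of the two families of baselines, under our reading of [20] [16] (not the authors' exact implementation):

```python
import torch

def gaussian_noise_defense(gradients, std=1e-3):
    """Additive-noise defense: perturb every shared gradient tensor."""
    return [g + torch.randn_like(g) * std for g in gradients]

def pruning_defense(gradients, ratio=0.95):
    """Pruning defense: zero out the smallest-magnitude entries of each tensor."""
    pruned = []
    for g in gradients:
        k = int(g.numel() * ratio)
        if k == 0:
            pruned.append(g)
            continue
        threshold = g.abs().flatten().kthvalue(k).values
        pruned.append(torch.where(g.abs() <= threshold, torch.zeros_like(g), g))
    return pruned
```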
Experiment 5 This experiment concerns Claim 2. Because policies should be general, they are tested against various attack configurations. For this, we again use 6 images from the test set and perform the different attacks on the images both without any transformation policy and with the hybrid-strategy transformation policies applied. The results are shown in Table 2. As can be seen from the table, the hybrid strategy works well against all configurations of the reconstruction attack. This is in line with the results from the original paper.
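
As a reference for what the hybrid strategy means operationally, a minimal sketch under our reading (one candidate policy drawn at random per image or batch):

```python
import random
from typing import List

# Candidate policies searched on CIFAR-100 (see Table 1).
CANDIDATE_POLICIES: List[List[int]] = [[3, 1, 7], [43, 18, 18]]

def sample_hybrid_policy() -> List[int]:
    """Hybrid strategy: draw one policy uniformly at random from the candidates."""
    return random.choice(CANDIDATE_POLICIES)
```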
<table><tr><td>Attack</td><td>None</td><td>None (std)</td><td>Hybrid</td><td>Hybrid (std)</td></tr><tr><td>LBFGS+L2</td><td>8.61</td><td>1.22</td><td>6.33</td><td>2.00</td></tr><tr><td>Adam+Cosine</td><td>12.15</td><td>2.06</td><td>8.25</td><td>1.64</td></tr><tr><td>LBFGS+Cosine</td><td>9.62</td><td>0.91</td><td>7.47</td><td>0.25</td></tr><tr><td>Adam+L1</td><td>9.48</td><td>0.71</td><td>6.43</td><td>0.16</td></tr><tr><td>Adam+L2</td><td>9.28</td><td>0.69</td><td>6.46</td><td>0.21</td></tr><tr><td>SGD+Cosine</td><td>12.60</td><td>2.07</td><td>8.03</td><td>1.47</td></tr></table>

Table 2: PSNR values (dB) (mean and standard deviation over 6 images) of reconstructed images with and without transformations applied, for different attack configurations

<table><tr><td>Policy</td><td>PSNR</td><td>PSNR std</td></tr><tr><td>None</td><td>15.39</td><td>2.78</td></tr><tr><td>3-1-7</td><td>8.47</td><td>0.85</td></tr><tr><td>43-18-18</td><td>10.97</td><td>1.06</td></tr><tr><td>Hybrid</td><td>8.95</td><td>0.90</td></tr></table>

Table 3: PSNR values (dB) for CIFAR-100 with ResNet20

Experiment 6 This experiment concerns the transferability of Claim 3. To test this, the policies searched on CIFAR-100 are applied to Fashion-MNIST using both ResNet20 and ConvNet. Reconstruction attacks are performed with the Adam+Cosine attack. The resulting PSNR values and accuracies are listed in Table 4. Our results differ from the original: the transformation policies are not effective in this setting.
<table><tr><td>Policy</td><td>PSNR</td><td>PSNR (std)</td><td>Acc</td></tr><tr><td>None</td><td>9.81</td><td>4.41</td><td>95.19</td></tr><tr><td>3-1-7</td><td>9.30</td><td>2.72</td><td>93.20</td></tr><tr><td>43-18-18</td><td>10.03</td><td>2.23</td><td>94.88</td></tr><tr><td>Hybrid</td><td>7.49</td><td>1.57</td><td>94.49</td></tr></table>
(a) FMNIST + ResNet20
<table><tr><td>Policy</td><td>PSNR</td><td>PSNR (std)</td><td>Acc</td></tr><tr><td>None</td><td>9.52</td><td>3.27</td><td>94.61</td></tr><tr><td>21-13-3</td><td>9.99</td><td>2.12</td><td>92.38</td></tr><tr><td>7-4-15</td><td>9.34</td><td>1.62</td><td>94.35</td></tr><tr><td>Hybrid</td><td>11.50</td><td>5.80</td><td>93.77</td></tr></table>
(b) FMNIST + ConvNet

Table 4: Resulting PSNR (dB) and accuracy (%) values for applying policies searched on CIFAR-100 to Fashion-MNIST

Experiment 7 The following experiment is aimed at Claim 4. The authors state that applying the searched policies has a negligible impact on training efficiency. To test this, we trained ResNet20 with the searched policies applied and documented the loss and accuracy convergence. From Figure 6 it can be seen that applying transformations indeed has almost zero impact on training efficiency. Notably, the training curves are almost identical to those in the original work.
Figure 6: Convergence speed with and without transformations applied
Experiment 8 Claim 5 states that good transformation policies obfuscate details in the training samples but maintain high-order semantic information. As such, attackers will have trouble reconstructing high-frequency information. We test this by comparing the attacker-defender gradient similarity during an attack on models trained with the searched policy, a random policy, and no policy applied. From Figure 7, it can be seen that the gradients differ significantly in shallow layers, whereas in deep layers they are very similar. This implies that the transformations do indeed have the desired effect and is in line with the results from the original paper.
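
The per-layer comparison can be computed as the cosine similarity between the attacker's gradients and the true (defender) gradients, layer by layer. A minimal sketch of this measurement (our naming):

```python
import torch
import torch.nn.functional as F

def layerwise_gradient_similarity(model: torch.nn.Module, attacker_grads):
    """Cosine similarity, per parameter tensor, between the gradients already
    populated on `model` (via backward()) and the attacker's gradients."""
    similarities = {}
    for (name, param), attack_grad in zip(model.named_parameters(), attacker_grads):
        if param.grad is None:
            continue
        similarities[name] = F.cosine_similarity(
            param.grad.flatten(), attack_grad.flatten(), dim=0
        ).item()
    return similarities
```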
Figure 7: Reproduced results of gradient similarity during the reconstruction optimization, for CIFAR-100 with ResNet20
Experiment 9 In Claim 6 the authors report their top 5 transformations. We test whether we can find the same ones by calculating the privacy score on the dataset for each individual augmentation; the results are shown in Figures 8a and 8b. Four of the five best transformations reported in the original paper also appear among our five best.
Figure 8: Privacy scores of the 50 transformation functions in the augmentation library; the best transformations are marked in red.
Experiment 10 The final experiment reproducing the results from the original paper is aimed at Claim 7. The authors claim that their privacy-score ${S}_{pri}$ is linearly correlated with PSNR, with a Pearson coefficient of 0.697. We test this by running attacks and evaluating ${S}_{pri}$ on the model trained on tiny CIFAR-100 for 50 epochs and found a very different result. As shown in Figure 4, there is hardly any correlation (the Pearson coefficient is 0.123). This might be due to the fact that the 100 transformation policies are selected at random out of 127,550 possible options. This is a striking result nonetheless, which we discuss in depth in Section 6.
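
For reference, the Pearson coefficient we report is computed in the standard way over the 100 (${S}_{pri}$, PSNR) pairs; a minimal sketch:

```python
import numpy as np

def pearson(x, y) -> float:
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    y = np.asarray(y, dtype=float) - np.mean(y)
    return float((x * y).sum() / np.sqrt((x ** 2).sum() * (y ** 2).sum()))

# e.g. pearson(s_pri_values, psnr_values) over the 100 sampled policies.
```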
### 5.2 Results beyond original paper
Extension 1 We extend the evaluation of the transferability of the searched policies by evaluating the performance of the policy searched on CIFAR-100 on Rescaled ImageNet. The resulting PSNR values and accuracies are shown in Table 5. As can be seen from the table, the hybrid strategy produces only a 1 dB reduction in PSNR, and accuracy decreases by more than 4%. This weakens the claim of transferability made by the authors.
Extension 2 We additionally extend the evaluation of the transferability of the searched policies by testing which transformations work best on a different dataset. Since good policies share the same general qualities, as stated in Claim 5, the five best transformations from Claim 6 can be expected to be the same on a different dataset. For this experiment, we use the Rescaled ImageNet dataset. The resulting transformations are shown in Figure 8c. Out of the 5 best transformations on Rescaled ImageNet, 3 were also found on CIFAR-100, in both our results and the results from the original paper. This indicates that these transformations do share the desired qualities from Claim 6.
<table><tr><td>Policy</td><td>PSNR</td><td>PSNR (std)</td><td>Acc</td></tr><tr><td>None</td><td>8.96</td><td>1.25</td><td>61.44</td></tr><tr><td>Hybrid</td><td>7.92</td><td>0.79</td><td>57.38</td></tr></table>
Table 5: PSNR values (dB) and accuracies of policies searched on CIFAR-100 applied to Rescaled ImageNet
<table><tr><td>Defense</td><td>PSNR</td><td>PSNR (std)</td><td>Acc</td></tr><tr><td>Pruning (70%)</td><td>11.62</td><td>2.18</td><td>74.61</td></tr><tr><td>Pruning (95%)</td><td>10.41</td><td>1.32</td><td>67.91</td></tr><tr><td>Pruning (99%)</td><td>9.96</td><td>0.57</td><td>53.43</td></tr><tr><td>Laplacian $\left( {10}^{-3}\right)$</td><td>10.73</td><td>1.02</td><td>71.45</td></tr><tr><td>Laplacian $\left( {10}^{-2}\right)$</td><td>12.03</td><td>0.79</td><td>26.20</td></tr><tr><td>Gaussian $\left( {10}^{-3}\right)$</td><td>12.11</td><td>2.98</td><td>72.89</td></tr><tr><td>Gaussian $\left( {10}^{-2}\right)$</td><td>12.13</td><td>1.14</td><td>36.25</td></tr></table>
Table 6: Comparisons with existing defense methods under the Adam+Cosine attack
## 6 Discussion
Overall, the results in [5] are reproducible, with the exception of Figure 4, where there is a large discrepancy between our result and the original one; we are still in contact with the authors on this issue. Nevertheless, augmentation policies tend to work rather well as a defense mechanism. For most images, an attacker using reconstruction attacks is unable to recover privacy-sensitive information. However, the standard deviation of our results is more than 25% of the mean in some settings, and we consider this a valuable metric to contribute. Some images are vulnerable to the attack even with the proposed defense mechanism, and it is as of yet unclear to us which types of images are more vulnerable than others. This issue should be investigated further in future research to make the approach widely applicable in real-world use cases where private data is at stake.
Additionally, we made observations in the codebase that, to the best of our knowledge, were not reported in the paper or any other accompanying documentation. The first is that the loss of the training module was multiplied by a factor of 0.5. This is not a fundamental flaw during the training phase, as it simply produces smaller gradients and therefore leads to a reduced effective learning rate. However, during the reconstruction attacks, the loss used by the attacker was not multiplied by this factor. In practice, this makes the attacker use a different loss function from the one used to generate the gradient that it is attempting to match, which may make reconstruction more difficult. Furthermore, we found that two other undocumented augmentations, a random crop and a random horizontal flip, were added in all experiments. Without these, the accuracy of our models decreased by over 10%. We are in contact with the authors regarding these observations; they have acknowledged the halved loss as a bug.
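
To make the mismatch concrete, the following self-contained example paraphrases the effect as we understand it from the codebase (not the authors' exact code): the defender shares gradients of a halved loss, while the attacker matches gradients of the unscaled loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(8, 4)
inputs, labels = torch.randn(2, 8), torch.tensor([0, 3])

# Defender (training module): the loss is scaled by 0.5 before backward(), so
# the shared gradients are half the gradients of the plain cross-entropy loss.
defender_grads = torch.autograd.grad(
    0.5 * F.cross_entropy(model(inputs), labels), model.parameters()
)

# Attacker (reconstruction): matches gradients of the *unscaled* loss, so its
# target differs from the shared gradients by a constant factor of two.
attacker_grads = torch.autograd.grad(
    F.cross_entropy(model(inputs), labels), model.parameters()
)

assert torch.allclose(2 * defender_grads[0], attacker_grads[0])
```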
### 6.1 What was easy
The general idea and solution of the paper were explained very clearly and were easy to follow. The codebase contained a README with instructions on how to run some of the paper's experiments, and these instructions could be followed without significant problems. The code produced results as seen in the paper.
### 6.2 What was difficult
The most challenging part of the reproduction was the unclear description of experiments in the paper and the limited clarity of the codebase. The code in the repository was uncommented and used many global variables and many layers of indirection. Many chunks of code were unused, making it harder to follow. Some experimental settings and metrics were not implemented, and some experiment configurations led to fatal errors.
It was very unclear which steps were originally followed to obtain Figure 4. Despite the authors' helpful comment on which model was used, we were not able to reproduce the correlation, potentially due to randomness in a vast search space (127,550 possible policies) and the limited sample size (100). Furthermore, the paper does not state how many images were used to produce the PSNR values in the tables. Finally, undocumented augmentations were added in some but not all settings, which caused some delay until this was found to be the cause of a 10% accuracy gap with the authors' results.
### 6.3 Communication with original authors
We contacted the authors with multiple requests for clarification regarding implementation details and notation in the paper. The authors responded promptly and answered almost all of our questions in the first round of contact. We are still in contact on two points. Firstly, regarding our reproduction of Figure 4: since we obtained such different results for this critical part of the authors' work, we are looking to investigate this further and possibly resolve the discrepancy with them. Secondly, we offered our refactoring of the codebase to the authors as a contribution to their work.

## References

[1] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang. Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 308-318, 2016.

[2] T. S. Brisimi, R. Chen, T. Mela, A. Olshevsky, I. C. Paschalidis, and W. Shi. Federated learning of predictive models from federated electronic health records. International Journal of Medical Informatics, 112:59-67, 2018.

[3] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. AutoAugment: Learning augmentation strategies from data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 113-123, 2019.

[4] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

[5] W. Gao, S. Guo, T. Zhang, H. Qiu, Y. Wen, and Y. Liu. Privacy-preserving collaborative learning with automatic transformation search. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 114-123, 2021.

[6] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller. Inverting gradients - how easy is it to break privacy in federated learning? arXiv preprint arXiv:2003.14053, 2020.

[7] S. Guo, T. Zhang, X. Xie, L. Ma, T. Xiang, and Y. Liu. Towards byzantine-resilient learning in decentralized systems. arXiv preprint arXiv:2002.08569, 2020.

[8] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778, 2016.

[9] A. Horé and D. Ziou. Image quality metrics: PSNR vs. SSIM. In 2010 20th International Conference on Pattern Recognition, pages 2366-2369, 2010.

[10] J. Kang, Z. Xiong, D. Niyato, Y. Zou, Y. Zhang, and M. Guizani. Reliable federated learning for mobile networks. CoRR, abs/1910.06837, 2019.

[11] A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. 2009.

[12] L. Melis, C. Song, E. De Cristofaro, and V. Shmatikov. Exploiting unintended feature leakage in collaborative learning. In 2019 IEEE Symposium on Security and Privacy (SP), pages 691-706. IEEE, 2019.

[13] S. Niknam, H. S. Dhillon, and J. H. Reed. Federated learning for wireless communications: Motivation, opportunities, and challenges. IEEE Communications Magazine, 58(6):46-51, 2020.

[14] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems 32, pages 8024-8035. Curran Associates, Inc., 2019.

[15] S. M. Stigler. Francis Galton's account of the invention of correlation. Statistical Science, 4(2):73-79, 1989.

[16] W. Wei, L. Liu, M. Loper, K. H. Chow, M. E. Gursoy, S. Truex, and Y. Wu. A framework for evaluating gradient leakage attacks in federated learning. CoRR, abs/2004.10397, 2020.

[17] H. Xiao, K. Rasul, and R. Vollgraf. Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms, 2017.

[18] Q. Yang, Y. Liu, T. Chen, and Y. Tong. Federated machine learning: Concept and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 10(2):1-19, 2019.

[19] B. Zhao, K. R. Mopuri, and H. Bilen. iDLG: Improved deep leakage from gradients. arXiv preprint arXiv:2001.02610, 2020.

[20] L. Zhu, Z. Liu, and S. Han. Deep leakage from gradients. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.

papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SY84JTG73CK/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,436 @@
§ REPLICATION STUDY OF "PRIVACY-PRESERVING COLLABORATIVE LEARNING WITH AUTOMATIC TRANSFORMATION SEARCH"

Anonymous Author(s)

Affiliation

Address

email

§ REPRODUCIBILITY SUMMARY

§ SCOPE OF REPRODUCIBILITY

We evaluate the reproducibility of this paper, which proposes an automatic search algorithm to find privacy-preserving transformation policies in the setting of federated learning. To this end, we test all the main claims made by the authors by rerunning the experiments and reporting the reproduced results. We further extend their work to a new dataset.

§ METHODOLOGY

We perform all experiments using the model architectures and hyperparameters proposed by the authors. We use the same datasets and extend their work to include one new dataset. A codebase was available, which enabled us to reproduce some of the results. However, we deliver a contribution by fully re-implementing the codebase in PyTorch Lightning to ensure all components are modular and experiments can be easily executed and extended, to the benefit of future research using the authors' method. All experiments are performed on Nvidia GTX 1080 GPUs.

§ RESULTS

Overall, we find the same results as the authors: searched transformation policies can defend users in federated learning from reconstruction attacks. These transformations also have negligible impact on training efficiency and model accuracy. However, we do not observe the reported correlation between the authors' privacy-score and PSNR; we are in contact with the authors about this. We also find that the results differ greatly from image to image, with standard deviations in PSNR of over 25% of the mean. This means that for some specific images the method is not effective.

§ WHAT WAS EASY

The paper was clearly written and the general idea was easy to follow. A codebase was available in PyTorch, and part of the experiments were reproducible using this code.

§ WHAT WAS DIFFICULT

The codebase was not clearly structured and had to be altered to produce results for most experiments reported in the paper. The re-implementation of the codebase was non-trivial due to otherwise undocumented details in the code having a large impact on outcomes.

§ COMMUNICATION WITH ORIGINAL AUTHORS

The authors were contacted on multiple issues regarding implementation details and notation in the paper. Most of these were resolved swiftly and constructively. On two issues we remain in contact with the authors at this time.

§ 1 INTRODUCTION

Collaborative learning systems allow multiple users to jointly train a deep learning (DL) model. Each user has their own training data, which is used to calculate local gradients [18] [21] [12]. These local gradients are then shared among all users to update the parameters of the shared DL model, without sensitive data ever leaving a user's device. The primary benefit of federated learning is its capacity to improve the generalization of the resulting model while maintaining privacy over the training data of individual users. This is especially important as confidentiality quickly becomes an essential quality of DL models [1]. Because of that, federated learning is used in applications ranging from mobile networks [10] to autonomous driving [13] and health care [2].

However, this privacy benefit can be undone by reconstruction attacks such as those proposed in [6] [19] [20]. These attacks make it possible to reconstruct the original private training samples of users from the shared gradients of the federated learning system. This poses a considerable threat to the privacy of users of federated learning systems and the confidentiality of their data samples.

The paper subject to this reproducibility study proposes a novel approach to mitigate the threat from reconstruction attacks by augmenting the local training data of the user before calculating the gradients [5]. Furthermore, the authors develop an automatic search algorithm to find the optimal transformation policies to augment the data and propose two novel metrics, ${S}_{pri}$ and ${S}_{acc}$, to increase the efficiency of this search.

In this reproducibility report, we evaluate the main claims made by the authors of [5] by reproducing their experiments. Moreover, we assess the availability of hyperparameters and other information needed for reproducibility, and discuss the usability of the provided codebase. We also extend the experimental setup to a new dataset.

§ 2 SCOPE OF REPRODUCIBILITY

The main goal of the original paper is to develop an automatic search algorithm to find transformation policies that can defend privacy-sensitive training data against reconstruction attacks in a federated learning system. To achieve this, the authors devise two novel metrics, described in Section 3.2. The main claims made in the paper are the following:

* Claim 1: by augmenting training samples with carefully selected transformation policies, reconstruction attacks become infeasible

* Claim 2: the proposed search algorithm can find good and general policies, i.e. policies that are able to defeat multiple variants of reconstruction attacks

* Claim 3: the found policies are highly transferable; good policies searched for one dataset are also suitable for other datasets

* Claim 4: the found policies have negligible impact on the training efficiency

* Claim 5: in general, a good policy is made up of transformations that distort the details of the training samples while maintaining the semantic information

* Claim 6: the five transformations that work best are horizontal shifting (9), brightness (9), brightness (6), contrast (7) and contrast (6) (the number in brackets represents the intensity of the applied transformation)

* Claim 7: ${S}_{pri}$ is a good measure of privacy; it is linearly correlated with the peak signal-to-noise ratio (PSNR) [9], with a Pearson coefficient [15] of 0.697

Each of these claims is supported by the results of one or more experiments in [5], represented in the tables and figures. In this reproducibility study, we rerun the experiments and reproduce the resulting tables and figures. In Section 5, we list which experiments support which claims. In Section 6, we discuss the reproducibility of each experiment and evaluate the validity of the claims.

Beyond reproducing the above claims from the original paper, we propose two extensions. Both are based on the transferability of the searched policies as stated in Claim 3. We test the transferability of the policies on an additional dataset and evaluate whether the best-performing transformations are the same on this dataset.

Extension 1: Using the policies searched on one dataset and applying them to a new dataset can make reconstruction attacks against this new dataset infeasible

Extension 2: Since good policies share the same general qualities, as claimed by Claim 5, the five best transformations from Claim 6 are the same when using a different dataset.

In Section 5, we show the results for these extensions, and in Section 6, we relate them to the claims, experiments, and results from the original paper.

§ 3 FINDING PRIVACY-PRESERVING TRANSFORMATION POLICIES

The original paper proposes an automatic search algorithm for finding privacy-preserving transformation policies. To better understand this main contribution, we take an in-depth look at what a transformation policy is and how good policies are found within a reasonable time.

§ 3.1 TRANSFORMATION POLICIES

Transformations, or augmentations, have been widely used to improve model performance and generalizability in DL. In [5], transformations from AutoAugment [3] are repurposed to protect sensitive training data from reconstruction attacks. The library contains 50 different transformations, including rotation, crop, shift, inversion, brightness, and contrast. A transformation policy is a combination of $k$ such transformations applied to the training samples. In [5], $k = 3$ is chosen and the policies are denoted by the indices of the transformations within the AutoAugment library.

Consistently applying the best policy to the data would risk a domain shift in the dataset. Therefore, the authors propose the hybrid strategy, where a policy is randomly selected from the candidate policies; this way, both good privacy and good accuracy are obtained [5].

§ 3.2 REDUCING THE SEARCH-SPACE

To find candidate policies, it is necessary to determine their effect on both privacy and accuracy: the transformations must be applied to the training data, and a model must be trained. Because fully training a model is very expensive, the authors propose two metrics that serve as a proxy for the privacy preservation and accuracy of the fully trained model: the privacy-score $\left( {S}_{pri}\right)$ and the accuracy-score $\left( {S}_{acc}\right)$. A low ${S}_{pri}$ means the model has high privacy-preservation potential, whereas a high ${S}_{acc}$ means the model achieves good accuracy with the applied transformation policies. These metrics are evaluated on models trained with only 10% of the data for only 25% of the training iterations, making the policy search feasible in a reasonable time. Further details about the definition of ${S}_{pri}$ and ${S}_{acc}$ can be found in Sections 4.2 and 4.3 of [5].

§ 4 EXPERIMENTAL SETUP AND CODE

To verify the claims made by the authors of [5], we reproduce their experiments. These experiments roughly fall into four categories: evaluating the effectiveness of the searched policies against reconstruction attacks, testing the transferability of the searched policies on different datasets and models, checking the impact on model efficiency, and studying the semantics behind the different transformations. Multiple models must be trained on augmented and un-augmented data for all these categories. For the attacks, the approach from [6] is applied. Section 5 provides a detailed description of the experiments and shows the results.

To reproduce the experiments performed by the authors, we used their existing codebase², which is implemented in PyTorch [14]. We refactored parts of this code and re-implemented the rest in our own version written in PyTorch Lightning³, which leverages the interface advantages of the Lightning framework to make running experiments and logging results more intuitive. The main benefit of doing so is that more experiments can be tested with finer clarity and control of the setup. Our refactoring is this study's main contribution, and the codebase is publicly available at https://anonymous.4open.science/r/MLRC2021-0454.

¹ https://github.com/DeepVoltaire/AutoAugment

² https://github.com/gaow0007/ATSPrivacy

³ https://github.com/PyTorchLightning/pytorch-lightning

§ 4.1 DATASETS

The experiments in [5] are performed on two datasets, CIFAR-100⁴ [11] and Fashion-MNIST⁵ [17]. CIFAR-100 contains 60,000 color images of size 32 × 32 from 100 classes. The test set is used as the validation set, consistent with the authors' codebase. The Fashion-MNIST dataset contains 70,000 grey-scale images of 28 × 28 resolution from 10 classes; again the test set is used as the validation set. We run experiments on one additional dataset in our extensions, Tiny ImageNet-200⁶ [4]. It contains 120,000 64 × 64 RGB images of 200 different classes. A tiny version of the dataset is introduced in the original paper for policy-search purposes: it contains 10% of the original samples, using the same distribution, and is later used to train the models for the evaluation of ${S}_{pri}$ and ${S}_{acc}$ in the search algorithm.

§ 4.2 MODEL DESCRIPTIONS

We use the following models:

* ResNet20-4, a variation of ResNet20 [8] with four times the number of channels, also used in [6]. The total number of parameters is 4.4M.

* ConvNet [6], an 8-layer convolutional neural network with batch normalization and a ReLU layer after each convolution layer. For this model the total number of parameters is 3.7M.

The original codebase uses the implementation of both models from the repository⁷ of [6]. Our models are re-implemented in PyTorch Lightning. Both models were compared with the models from the original codebase in terms of accuracy and achieved comparable results.

§ 4.3 HYPERPARAMETERS

For the policy search, we used ${C}_{\max } = {1500}$ and a maximum number of policies equal to 10. The batch size was 128, and the number of transforms per policy was 3. For training, the batch size was also 128 and the number of epochs was 60 (see Section 4.4). To obtain a semi-trained network, we used a subset of 10% of the training dataset. The attack is performed on the image with index 0, and we reused the remaining settings from the original paper, e.g. the default "inversed" attack. The exception is Figure 4, where the default config was used with the maximum number of iterations changed to 2500. For further experiments, we followed the same conventions.

§ 4.4 COMPUTATIONAL REQUIREMENTS

We ran our experiments on an Nvidia GeForce GTX 1080 GPU. The policy search took approximately 10 hours. Training one model took approximately 2 h 40 min using the original approach; however, training for 60 epochs achieves the same accuracy in 50 minutes, because there are plateau periods while the learning rate has not yet been scheduled to drop. One attack with 2500 iterations took approximately 5 minutes, so measuring the correlation between ${S}_{pri}$ and PSNR took 8.5 hours (including the policy search).

§ 5 EXPERIMENTS AND RESULTS

§ 5.1 RESULTS REPRODUCING ORIGINAL PAPER

Experiment 1 A reconstruction attack on 100 images from the CIFAR-100 validation set is performed with and without a searched transformation policy applied. We document the optimization process of the attack in terms of GradSim. The model used is ResNet20 trained on the tiny dataset for 50 epochs. The results of this experiment are shown in Figure 2 and are very similar to the original paper. In addition to the original figure, we show the standard deviation over the 100 images, since GradSim can differ significantly from image to image. When taking the average over multiple runs, it can be seen that the privacy-aware transform does indeed make the GradSim convergence more difficult.

⁴ https://www.cs.toronto.edu/~kriz/cifar.html

⁵ https://github.com/zalandoresearch/fashion-mnist

⁶ http://cs231n.stanford.edu/tiny-imagenet-200.zip

⁷ https://github.com/JonasGeiping/invertinggradients

(a) Reproduced result (b) Result from original paper

Figure 2: Optimization process of the reconstruction attack with and without the searched policy

(a) Reproduced results (b) Original results

Figure 4: Correlation between ${S}_{pri}$ and PSNR

Experiment 2 A visual comparison between reconstructed images with and without a searched transformation policy applied is performed for both ResNet20 and ConvNet on images from CIFAR-100 and Fashion-MNIST. The optimizer used in the attack is Adam+Cosine. The images, the resulting reconstructions, and their PSNR values are shown in the left half of Figure 5; the results from the original paper are shown in the right half. As can be seen, the images used and the PSNR values reported differ: identifying the exact same images was too expensive, and PSNR values vary considerably depending on the image used. However, for all 12 images, we observe a less pronounced visual effect of the transformation policy as well as a smaller gap in PSNR values between the reconstructions with and without the policies applied. This suggests that the effect shown in the original paper is not as pronounced for all images, although the images we selected may be particularly easy to reconstruct.

Figure 5: Visualization results for reconstruction attacks on different datasets and models, with associated PSNR values. Our results on the left, original results on the right.

Experiment 3 To gain further insight into the effectiveness of the different policies, we report the qualitative and quantitative results of Adam+Cosine attacks and model accuracy for the datasets and models in Figure 5. The results are calculated over 6 images, as performing the experiment is very expensive and the number of images was not stated in the paper. The policies considered and the results are listed in Table 1, which shows patterns similar to the original paper: the searched policies have low PSNR values compared to not using transformations. We do observe that the PSNR values have a relatively high standard deviation, and during our experiments we found that the policies do not form a good defense for some images. This problem is discussed further in Section 6.

Policy      PSNR    PSNR (std)   Acc
None        12.15   2.06         78.11
Random      9.92    1.93         75.02
3-1-7       6.77    0.88         71.59
43-18-18    9.34    1.81         77.16
Hybrid      8.25    1.64         77.47

(a) CIFAR-100 + ResNet20

Policy      PSNR    PSNR (std)   Acc
None        11.44   2.93         72.97
Random      10.29   1.02         71.93
21-13-3     8.23    2.18         63.26
7-4-15      10.31   2.14         70.77
Hybrid      9.89    1.47         68.91

(b) CIFAR-100 + ConvNet

Policy      PSNR    PSNR (std)   Acc
None        9.81    4.41         95.19
Random      10.06   2.04         95.19
19-15-45    8.26    0.37         92.44
2-43-21     8.93    2.93         93.93
Hybrid      8.41    1.45         95.14

(c) FMNIST + ResNet20

Policy      PSNR    PSNR (std)   Acc
None        9.52    3.27         94.61
Random      9.47    2.27         94.47
42-28-42    7.59    0.89         94.62
14-48-48    8.41    2.10         94.68
Hybrid      6.80    0.98         94.59

(d) FMNIST + ConvNet

Table 1: PSNR (dB) (mean and standard deviation over 6 images) and model accuracy (%) of different transformation configurations for each model and dataset. 19-1-18 is the random policy.

Experiment 4 The defensive qualities of the searched transformation policies are benchmarked against existing defenses from the literature [20] [16] under the Adam+Cosine attack. The results are shown in Table 6. Although the exact values differ slightly, the overall results are similar to the original paper, where all the existing defenses perform worse than the hybrid strategy.

Experiment 5 This experiment concerns Claim 2. Because policies should be general, they are tested against various attack configurations. For this, we again use 6 images from the test set and perform the different attacks on the images both without any transformation policy and with the hybrid-strategy transformation policies applied. The results are shown in Table 2. As can be seen from the table, the hybrid strategy works well against all configurations of the reconstruction attack. This is in line with the results from the original paper.

Attack         None    None (std)   Hybrid   Hybrid (std)
LBFGS+L2       8.61    1.22         6.33     2.00
Adam+Cosine    12.15   2.06         8.25     1.64
LBFGS+Cosine   9.62    0.91         7.47     0.25
Adam+L1        9.48    0.71         6.43     0.16
Adam+L2        9.28    0.69         6.46     0.21
SGD+Cosine     12.60   2.07         8.03     1.47

Table 2: PSNR values (dB) (mean and standard deviation over 6 images) of reconstructed images with and without transformations applied, for different attack configurations

Policy     PSNR    PSNR (std)
None       15.39   2.78
3-1-7      8.47    0.85
43-18-18   10.97   1.06
Hybrid     8.95    0.90

Table 3: PSNR values (dB) for CIFAR-100 with ResNet20

Experiment 6 This experiment concerns the transferability of Claim 3. To test this, the policies searched on CIFAR-100 are applied to Fashion-MNIST using both ResNet20 and ConvNet. Reconstruction attacks are performed with the Adam+Cosine attack. The resulting PSNR values and accuracies are listed in Table 4. Our results differ from the original: the transformation policies are not effective in this setting.

Policy     PSNR    PSNR (std)   Acc
None       9.81    4.41         95.19
3-1-7      9.30    2.72         93.20
43-18-18   10.03   2.23         94.88
Hybrid     7.49    1.57         94.49

(a) FMNIST + ResNet20

Policy     PSNR    PSNR (std)   Acc
None       9.52    3.27         94.61
21-13-3    9.99    2.12         92.38
7-4-15     9.34    1.62         94.35
Hybrid     11.50   5.80         93.77

(b) FMNIST + ConvNet

Table 4: Resulting PSNR (dB) and accuracy (%) values for applying policies searched on CIFAR-100 to Fashion-MNIST

Experiment 7 The following experiment is aimed at Claim 4. The authors state that applying the searched policies has a negligible impact on training efficiency. To test this, we trained ResNet20 with the searched policies applied and documented the loss and accuracy convergence. From Figure 6 it can be seen that applying transformations indeed has almost zero impact on training efficiency. Notably, the training curves are almost identical to those in the original work.

(a) Reproduced results (b) Original results

Figure 6: Convergence speed with and without transformations applied

Experiment 8 Claim 5 states that good transformation policies obfuscate details in the training samples but maintain high-order semantic information. As such, attackers will have trouble reconstructing high-frequency information. We test this by comparing the attacker-defender gradient similarity during an attack on models trained with the searched policy, a random policy, and no policy applied. From Figure 7, it can be seen that the gradients differ significantly in shallow layers, whereas in deep layers they are very similar. This implies that the transformations do indeed have the desired effect and is in line with the results from the original paper.

(a) Shallow layers (b) Deep layers

Figure 7: Reproduced results of gradient similarity during the reconstruction optimization, for CIFAR-100 with ResNet20

Experiment 9 In Claim 6 the authors report their top 5 transformations. We test whether we can find the same ones by calculating the privacy score on the dataset for each individual augmentation; the results are shown in Figures 8a and 8b. Four of the five best transformations reported in the original paper also appear among our five best.

(a) Reproduced results (b) Original results (c) Results on Tiny ImageNet

Figure 8: Privacy scores of the 50 transformation functions in the augmentation library; the best transformations are marked in red.

Experiment 10 The final experiment reproducing the results from the original paper is aimed at Claim 7. The authors claim that their privacy-score ${S}_{pri}$ is linearly correlated with PSNR, with a Pearson coefficient of 0.697. We test this by running attacks and evaluating ${S}_{pri}$ on the model trained on tiny CIFAR-100 for 50 epochs and found a very different result. As shown in Figure 4, there is hardly any correlation (the Pearson coefficient is 0.123). This might be due to the fact that the 100 transformation policies are selected at random out of 127,550 possible options. This is a striking result nonetheless, which we discuss in depth in Section 6.
§ 5.2 RESULTS BEYOND ORIGINAL PAPER

Extension 1 We extend the evaluation of the transferability of the searched policies by evaluating the performance of the policy searched on CIFAR-100 on Rescaled ImageNet. The resulting PSNR values and accuracies are shown in Table 5. As can be seen from the table, the hybrid strategy produces only a 1 dB reduction in PSNR, and accuracy decreases by more than 4%. This weakens the claim of transferability made by the authors.

Extension 2 We additionally extend the evaluation of the transferability of the searched policies by testing which transformations work best on a different dataset. Since good policies share the same general qualities, as stated in Claim 5, the five best transformations from Claim 6 can be expected to be the same on a different dataset. For this experiment, we use the Rescaled ImageNet dataset. The resulting transformations are shown in Figure 8c. Out of the 5 best transformations on Rescaled ImageNet, 3 were also found on CIFAR-100, in both our results and the results from the original paper. This indicates that these transformations do share the desired qualities from Claim 6.

Policy   PSNR   PSNR (std)   Acc
None     8.96   1.25         61.44
Hybrid   7.92   0.79         57.38

Table 5: PSNR values (dB) and accuracies of policies searched on CIFAR-100 applied to Rescaled ImageNet

Defense            PSNR    PSNR (std)   Acc
Pruning (70%)      11.62   2.18         74.61
Pruning (95%)      10.41   1.32         67.91
Pruning (99%)      9.96    0.57         53.43
Laplacian (10^-3)  10.73   1.02         71.45
Laplacian (10^-2)  12.03   0.79         26.20
Gaussian (10^-3)   12.11   2.98         72.89
Gaussian (10^-2)   12.13   1.14         36.25

Table 6: Comparisons with existing defense methods under the Adam+Cosine attack
§ 6 DISCUSSION

Overall, the results in [5] are reproducible, with the exception of Figure 4, where there is a large discrepancy between our result and the original one; we are still in contact with the authors on this issue. Nevertheless, augmentation policies tend to work rather well as a defense mechanism. For most images, an attacker using reconstruction attacks is unable to recover privacy-sensitive information. However, the standard deviation of our results is more than 25% of the mean in some settings, and we consider this a valuable metric to contribute. Some images are vulnerable to the attack even with the proposed defense mechanism, and it is as of yet unclear to us which types of images are more vulnerable than others. This issue should be investigated further in future research to make the approach widely applicable in real-world use cases where private data is at stake.

Additionally, we made observations in the codebase that, to the best of our knowledge, were not reported in the paper or any other accompanying documentation. The first is that the loss of the training module was multiplied by a factor of 0.5. This is not a fundamental flaw during the training phase, as it simply produces smaller gradients and therefore leads to a reduced effective learning rate. However, during the reconstruction attacks, the loss used by the attacker was not multiplied by this factor. In practice, this makes the attacker use a different loss function from the one used to generate the gradient that it is attempting to match, which may make reconstruction more difficult. Furthermore, we found that two other undocumented augmentations, a random crop and a random horizontal flip, were added in all experiments. Without these, the accuracy of our models decreased by over 10%. We are in contact with the authors regarding these observations; they have acknowledged the halved loss as a bug.
§ 6.1 WHAT WAS EASY

The general idea and solution of the paper were explained very clearly and were easy to follow. The codebase contained a README with instructions on how to run some of the paper's experiments, and these instructions could be followed without significant problems. The code produced results as seen in the paper.
§ 6.2 WHAT WAS DIFFICULT

The most challenging part of the reproduction was the unclear description of experiments in the paper and the limited clarity of the codebase. The code in the repository was uncommented and used many global variables and many layers of indirection. Many chunks of code were unused, making it harder to follow. Some experimental settings and metrics were not implemented, and some experiment configurations led to fatal errors.

It was very unclear which steps were originally followed to obtain Figure 4. Despite the authors' helpful comment on which model was used, we were not able to reproduce the correlation, potentially due to randomness in a vast search space (127,550 possible policies) and the limited sample size (100). Furthermore, the paper does not state how many images were used to produce the PSNR values in the tables. Finally, undocumented augmentations were added in some but not all settings, which caused some delay until this was found to be the cause of a 10% accuracy gap with the authors' results.
§ 6.3 COMMUNICATION WITH ORIGINAL AUTHORS

We contacted the authors with multiple requests for clarification regarding implementation details and notation in the paper. The authors responded promptly and answered almost all of our questions in the first round of contact. We are still in contact on two points. Firstly, regarding our reproduction of Figure 4: since we obtained such different results for this critical part of the authors' work, we are looking to investigate this further and possibly resolve the discrepancy with them. Secondly, we offered our refactoring of the codebase to the authors as a contribution to their work.
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SYUxyazQh0Y/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,304 @@
| 1 |
+
# [Re] Explaining in Style: Training a GAN to explain a classifier in StyleSpace
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email
|
| 10 |
+
|
| 11 |
+
|
| 12 |
+
|
| 13 |
+
## Reproducibility Summary
|
| 14 |
+
|
| 15 |
+
## Scope of Reproducibility
|
| 16 |
+
|
| 17 |
+
StylEx is an approach for classifier-conditioned training of a StyleGAN2 [6], intending to capture classifier-specific attributes in its disentangled StyleSpace [15]. Attributes can be adjusted to generate counterfactual explanations of the classifier decisions. StylEx is domain- and classifier-agnostic, while its explanations are claimed to be human-interpretable, distinct, coherent and sufficient to produce flipped classifier decisions. We verify these claims by reproducing a selection of the experiments in the paper.
|
| 18 |
+
|
| 19 |
+
## Methodology
|
| 20 |
+
|
| 21 |
+
We verified a selection of the experimental results using the code made available by the authors. However, a significant part of the training procedure, network architecture and hyperparameter configurations was missing. As such, we reimplemented the model, porting the available TensorFlow code to PyTorch, to enable easier reproducibility of the proposed case studies. All experiments ran in approximately 20-50 GPU hours per dataset, depending on the batch size, gradient accumulation and GPU.
|
| 22 |
+
|
| 23 |
+
## Results
|
| 24 |
+
|
| 25 |
+
We verified that the publicly available pretrained model has a 'sufficiency' measure within $1\%$ of the value reported in the paper. Additionally, we evaluated the Fréchet inception distance (FID) scores of images generated by the released model, and show that the FID score increases with the number of attributes used to generate a counterfactual explanation. Custom models were trained on three datasets, with a reduced image dimensionality $\left( {64}^{2}\right)$. Additionally, a user study was conducted to evaluate the distinctness and coherence of the images. We report significantly lower accuracy on the identification of the extracted attributes and lower 'sufficiency' scores for our models.
|
| 26 |
+
|
| 27 |
+
## What was easy
|
| 28 |
+
|
| 29 |
+
It was easy to run the provided Jupyter Notebook, and verify the results of the pretrained models on the FFHQ dataset. Extending an existing StyleGAN2 implementation to fit this study was relatively easy.
|
| 30 |
+
|
| 31 |
+
## What was difficult
|
| 32 |
+
|
| 33 |
+
Reproducing the experiments at the same scale as the authors was difficult, as was developing the full training procedure, model architecture and hyperparameters, particularly due to underspecification in the original paper. The conversion of code from TensorFlow to PyTorch was also challenging.
|
| 34 |
+
|
| 35 |
+
## Communication with original authors
|
| 36 |
+
|
| 37 |
+
We corresponded with the first author of the paper through several emails. Through this correspondence, additional details were released on the network architecture, the training procedure and the hyperparameter configurations.
|
| 38 |
+
|
| 39 |
+

|
| 40 |
+
|
| 41 |
+
Figure 1: Top-1 automatically detected attributes for the perceived-gender classifiers (left: version 1, right: version 2) and the perceived-health-of-leaves classifier (middle). As in the original paper, the counterfactual images are marked by a frame. Displayed probabilities correspond to the person being male for perceived gender and the leaf being healthy for perceived health. More attributes can be found in the appendix.
|
| 42 |
+
|
| 43 |
+
## 1 Introduction
|
| 44 |
+
|
| 45 |
+
Existing post-hoc visual explainability measures, such as heatmaps [13], can highlight regions that influence the decision. However, they do not visualize non-spatial localized attributes, nor do they indicate how these areas may be changed to influence the classification. Counterfactual explanations, which are statements of the form "Had the input $\mathbf{x}$ been $\widetilde{\mathbf{x}}$, the classifier output would have been $\widetilde{\mathbf{y}}$ instead of $\mathbf{y}$", have been proposed as an alternative which both specifies the important features and naturally explains how they can be altered to achieve an alternative outcome.
|
| 46 |
+
|
| 47 |
+
As such, these explanations are promising, as they can provide a suggestive recourse to non-domain experts in a machine learning-based decision system. The effectiveness of these methods strongly depends on the intuitive difference that humans observe; therefore, one of the primary objectives is to find these attributes. Secondary objectives involve the visualization and control of the impact of these features on the classifier output.
|
| 48 |
+
|
| 49 |
+
In this work, we reproduce the paper 'Explaining in Style: Training a GAN to Explain a Classifier in StyleSpace' [8]. The paper proposes a novel method for explaining the classification of a given image, by altering human-interpretable features discovered to affect the classification output. We reimplemented the model in PyTorch together with the training procedure, as the original TensorFlow implementation lacked the training procedure code. We performed training on the FFHQ and PlantVillage datasets using a lower resolution. Using our own implementation, we check whether the results are consistent with the descriptions provided in the paper. We strengthen this with the addition of a human-grounded evaluation of the generated images. Additionally, we used the FID measure to evaluate the image quality of the counterfactual generated images.
|
| 50 |
+
|
| 51 |
+
## 2 Scope of Reproducibility
|
| 52 |
+
|
| 53 |
+
The StylEx model, in addition to the AttFind algorithm defined in the paper, is presented as a viable option for generating counterfactual explanations of black-box classifiers. The StylEx model aims to make individual StyleSpace coordinates classifier-relevant, through a novel training procedure which is outlined in Section 3.
|
| 54 |
+
|
| 55 |
+
As no benchmark metrics exist to evaluate and assess attribute-based counterfactual explanations, the authors propose three evaluation criteria themselves: 1) visual coherence, 2) distinctness and 3) ’effect of attributes on classification’ (sufficiency). We reformulate these criteria as the main claims of the paper in the following manner:
|
| 56 |
+
|
| 57 |
+
1. Visual Coherence: Attributes detected by StylEx should be clearly identifiable by humans.
|
| 58 |
+
|
| 59 |
+
2. Distinctness: The attributes extracted by StylEx should be distinct.
|
| 60 |
+
|
| 61 |
+
3. Sufficiency: Changing attributes should result in a change of classifier output, where changing multiple attributes has a cumulative effect.
|
| 62 |
+
|
| 63 |
+
## 3 Methodology
|
| 64 |
+
|
| 65 |
+
To evaluate claims 1 and 2, the authors conduct a user study in two parts. To evaluate claim 3, they study the percentage of flipped classifications when modifying the top- $k$ (in their case $k = {10}$ ) attributes. To reproduce these claims, we conduct the same experiments, albeit at a lower dimensionality of ${64}^{2}$. The complex network architecture of StyleGAN, as well as the encoder, requires a significant number of training epochs until convergence, and thus training these at the full resolution of ${256}^{2}$ is extremely computationally expensive.
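To make claim 3 concrete, below is a minimal sketch of the flipped-classification ('sufficiency') metric; `encoder`, `generator`, `classifier` and the `shift_top_k` helper are hypothetical stand-ins for our modules, not the authors' API.

```python
import torch

@torch.no_grad()
def flipped_fraction(images, classifier, encoder, generator, shift_top_k, k=10):
    """Fraction of classifier decisions that flip after perturbing the
    top-k StyleSpace attributes (hypothetical helper API)."""
    orig = classifier(images).argmax(dim=1)         # original decisions
    w = encoder(images)                             # latents w = E(x)
    counterfactuals = generator(shift_top_k(w, k))  # move top-k style coords
    flipped = classifier(counterfactuals).argmax(dim=1)
    return (flipped != orig).float().mean().item()
```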
|
| 66 |
+
|
| 67 |
+
We verify the sufficiency scores of the released model by making use of the supplied Jupyter Notebook. However, several crucial elements were missing, including details of the training procedure, the hyperparameter configurations and the optimization procedure. As such, we ported the available TensorFlow code to PyTorch, to enable easier reproducibility of the proposed case studies.
|
| 68 |
+
|
| 69 |
+
We reimplemented the StylEx model in PyTorch, using an open-source StyleGAN2 implementation as a starting point. ${}^{1}$
|
| 70 |
+
|
| 71 |
+
For running our code, we made use of an NVIDIA GTX 1080 Ti, an RTX 2070 Super and a laptop RTX 3060 graphics card, running on different machines. To conduct the user study, we made use of the online survey tool Qualtrics [1].
|
| 72 |
+
|
| 73 |
+

|
| 74 |
+
|
| 75 |
+
Figure 2: StylEx network architecture, with the respective classifier $C$, generator $G$, discriminator $D$ and encoder $E$. For clarification, we have slightly adapted the visualization to include the StyleVectorizer, which obtains the latent vector $w$ from $z$ [5], after learning that the authors have used alternating training [1].
|
| 76 |
+
|
| 77 |
+
### 3.1 Model descriptions
|
| 78 |
+
|
| 79 |
+
In addition to a pretrained classifier $C$, StylEx comprises three trainable elements: 1) a generator $G$, 2) a discriminator $D$ and 3) an encoder $E$. $D$ and $G$ follow the StyleGAN2 architecture, with minor alterations to $D$ which are explained below. Figure 2 provides an overview of the network architecture.
|
| 80 |
+
|
| 81 |
+
Some design details were unspecified or omitted in the original paper. We contacted the authors, who provided clarification on these issues, summarized as follows:
|
| 82 |
+
|
| 83 |
+
1. StylEx is trained using both encoder input and noise input transformed through StyleGAN2's mapping network, using alternating steps;
|
| 84 |
+
|
| 85 |
+
2. The output of $D$ is a weighted sum of the 2-dimensional output of its last layer with the input probabilities of 1) the original image when using the encoder, or 2) the randomly sampled image when using noise input;
|
| 86 |
+
|
| 87 |
+
3. ${\mathcal{L}}_{rec}$ and ${\mathcal{L}}_{cls}$ are only calculated during the generator training steps.
|
| 88 |
+
|
| 89 |
+
---
|
| 90 |
+
|
| 91 |
+
${}^{1}$ https://github.com/lucidrains/stylegan2-pytorch
|
| 92 |
+
|
| 93 |
+
---
|
| 94 |
+
|
| 95 |
+
The GAN is trained jointly with the encoder, which embeds an image into the $W$ latent space of StyleGAN2, forming a latent vector $w$. A recent observation by [14] highlighted the disentanglement of this space (appropriately called the StyleSpace), which is used to extract classifier-specific attributes. Logits of the original image, $C\left( x\right)$, are then appended to $w$ to condition the training on classifier outputs. The architecture includes a StyleVectorizer that obtains the latent vector $w$ from $z$, which is sampled from a normal distribution. In alternating steps, the generator was fed input from the encoder and input from the StyleVectorizer mapping network [5]. The original authors noticed a slight improvement in image quality using alternating training, compared to only using the encoder input.
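As a minimal illustration of this conditioning (the 512-dimensional latent and binary classifier are assumptions for the example, not the authors' exact configuration), the classifier logits are simply concatenated to the latent vector:

```python
import torch

w = torch.randn(16, 512)      # batch of latents w = E(x), assumed 512-d
logits = torch.randn(16, 2)   # classifier logits C(x) for a binary task
w_cond = torch.cat([w, logits], dim=1)  # (16, 514), fed to the generator
```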
|
| 96 |
+
|
| 97 |
+
We note that we used two slightly different implementation choices when training our models. The first implementation does not include the discriminator change mentioned above, while the second does, and uses probabilities instead of logits for concatenation to $w$. We call these two choices 'Model 1' and 'Model 2' in results on datasets where we have trained both. We additionally noted that the MobileNet classifier that 'Model 1' was trained with did not perform well on faces. For this reason, a ResNet classifier was used to perform the AttFind algorithm for both face models.
|
| 98 |
+
|
| 99 |
+
This expanded latent vector $w$, obtained from either the encoder or the StyleVectorizer, is passed on to the StyleGAN2, where it is transformed into the StyleSpace by a set of concurrent affine transformations to style vectors ${s}_{0},\ldots ,{s}_{n}$. These style vectors are used to generate novel images that aim to reconstruct the original image as closely as possible. Several losses are used to quantitatively assess the convergence of the training procedure. The cumulative training loss for the algorithm is a sum of losses, denoted as follows:
|
| 100 |
+
|
| 101 |
+
$$
|
| 102 |
+
\mathcal{L}_{\text{StylEx}} = \mathcal{L}_{\text{adv}} + \mathcal{L}_{\text{reg}} + \mathcal{L}_{\text{rec}} + \mathcal{L}_{\text{cls}}. \tag{1}
|
| 103 |
+
$$
|
| 104 |
+
|
| 105 |
+
A logistic adversarial loss [2], ${\mathcal{L}}_{adv}$, is used as in standard GAN training, together with the regularization loss ${\mathcal{L}}_{reg}$ described in the original StyleGAN paper [6]. The reconstruction loss ${\mathcal{L}}_{\text{rec}}$ is given by the sum ${\mathcal{L}}_{\text{rec}}^{x} + {\mathcal{L}}_{\text{rec}}^{w} + {\mathcal{L}}_{\text{LPIPS}}$, where the first two terms are the L1 distance between the original and reconstructed input, and between the original and reconstructed $w$ latent vector, respectively. The ${\mathcal{L}}_{LPIPS}$ term is the LPIPS distance between the original and reconstructed input, as described in [17]. This loss ensures that reconstructed images resemble the original input as closely as possible, so that they can serve as an input for generating counterfactual examples. The classifier loss is the Kullback-Leibler divergence between the classifier outputs on the original input image $X$ and on the generated image $G\left( {E\left( X\right), C\left( X\right) }\right)$: ${\mathcal{L}}_{cls} = {D}_{KL}\left\lbrack {C\left( {x}^{\prime }\right) \parallel C\left( x\right) }\right\rbrack$. This loss ensures that the generator does not disregard image attributes that are important for the classification.
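A minimal sketch of the reconstruction and classifier losses, assuming the `lpips` package and a `classifier` returning logits (function and variable names are ours, not the authors'):

```python
import torch
import torch.nn.functional as F
import lpips  # pip install lpips

lpips_fn = lpips.LPIPS(net="vgg")  # perceptual distance [17]

def reconstruction_loss(x, x_rec, w, w_rec):
    # Images are assumed scaled to [-1, 1], as LPIPS expects.
    # Weights follow the authors' suggestion (see Section 3.3):
    # 0.1 for the image L1 and LPIPS terms, 1 for the latent L1 term.
    return (0.1 * F.l1_loss(x_rec, x)
            + 1.0 * F.l1_loss(w_rec, w)
            + 0.1 * lpips_fn(x_rec, x).mean())

def classifier_loss(classifier, x, x_rec):
    # D_KL[C(x') || C(x)]: F.kl_div takes log-probs of the second
    # argument of the divergence and probs of the first.
    log_p_orig = F.log_softmax(classifier(x), dim=1)
    p_rec = F.softmax(classifier(x_rec), dim=1)
    return F.kl_div(log_p_orig, p_rec, reduction="batchmean")
```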
|
| 106 |
+
|
| 107 |
+
To extract classifier-specific attributes, the AttFind algorithm is proposed in the paper. As input, it takes the trained model and a set of $N$ images whose predicted label does not match the target label $y$. For each class label, AttFind encodes the images and iteratively tries to find a set ${S}_{y}$ of $M$ style coordinates that represent the largest possible shift towards the opposing class. Next to this, it finds the set of directions ${D}_{y}$ that indicate in which direction each coordinate needs to be adjusted to flip the classifier decision. In each iteration, it considers all $K$ style coordinates and determines the coordinate with the largest effect. All images where changing this coordinate has a large effect on their probability are removed from the iteration. The process is repeated until no images are left, or until $M$ attributes are found.
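Our reading of the greedy loop can be sketched as follows; `effect_of(latents, coord)` is a hypothetical helper returning, per image, the increase in target-class probability when style coordinate `coord` is shifted in its chosen direction:

```python
import torch

@torch.no_grad()
def attfind(latents, num_coords, effect_of, num_attrs=10, thresh=0.3):
    """Greedy AttFind sketch: pick the style coordinate with the largest
    mean effect, drop images it already flips, and repeat."""
    found = []
    remaining = latents
    while len(found) < num_attrs and len(remaining) > 0:
        effects = torch.stack([effect_of(remaining, c).mean()
                               for c in range(num_coords)])
        best = int(effects.argmax())
        found.append(best)
        delta = effect_of(remaining, best)     # per-image effect
        remaining = remaining[delta < thresh]  # keep unaffected images
    return found
```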
|
| 108 |
+
|
| 109 |
+
### 3.2 Datasets
|
| 110 |
+
|
| 111 |
+
We reproduce a selection of the authors' findings in our PyTorch re-implementation, making use of the following datasets:
|
| 112 |
+
|
| 113 |
+
1. CelebA [9]: The original Large-scale CelebFaces Attributes (CelebA) dataset ${}^{2}$ contains 200000 image entries, each with 40 attribute annotations. We have trained classifiers on both the gender and the age attribute.
|
| 114 |
+
|
| 115 |
+
2. FFHQ [11]: The original Flickr-Faces-HQ dataset contains 70000 images of human faces. This dataset was used for StylEx training, while the pretrained classifier was trained on the CelebA dataset, following the procedure of the original paper. ${}^{3}$
|
| 116 |
+
|
| 117 |
+
3. Plant-Village: This dataset contains 54303 entries in 38 categories. It was used to train the classifier to distinguish between sick and healthy leaves.
|
| 118 |
+
|
| 119 |
+
For the classification tasks, the FFHQ dataset was split into train/validation/test sets with a 70/15/15 ratio, while Plant-Village used a ${70}/{20}/{10}$ split.
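For illustration, such a split can be produced with `torch.utils.data.random_split`; the dataset object and seed below are placeholders:

```python
import torch
from torch.utils.data import TensorDataset, random_split

dataset = TensorDataset(torch.randn(1000, 3, 64, 64))  # stand-in for FFHQ
n = len(dataset)
n_train, n_val = int(0.70 * n), int(0.15 * n)
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n - n_train - n_val],
    generator=torch.Generator().manual_seed(0))  # fixed seed for reproducibility
```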
|
| 120 |
+
|
| 121 |
+
---
|
| 122 |
+
|
| 123 |
+
${}^{2}$ https://www.kaggle.com/jessicali9530/celeba-dataset
|
| 124 |
+
|
| 125 |
+
${}^{3}$ This is a detail that was revealed through contact with the authors.
|
| 126 |
+
|
| 127 |
+
---
|
| 128 |
+
|
| 129 |
+
### 3.3 Hyperparameters
|
| 130 |
+
|
| 131 |
+
Original research: For the partial reproduction of Table 3 of the original paper, we limited ourselves to a sample of $n = {250}$ images, rather than the $n = {1000}$ randomly sampled images, as denoted in the Jupyter Notebook.
|
| 132 |
+
|
| 133 |
+
Reimplementation: The computational cost of training StylEx precluded an in-depth hyperparameter search. For all modules except the encoder, we found that a learning rate of $2\mathrm{e}{-4}$ for the Adam optimizer performs well, with ${\beta }_{1} = {0.5}$ and ${\beta }_{2} = {0.9}$. We found the training to diverge unless the encoder learning rate was lowered significantly, to $1\mathrm{e}{-5}$. We ascribe this difference to the significantly smaller input size in our models, or to subtle implementation differences in the original paper which we do not have access to.
|
| 134 |
+
|
| 135 |
+
The classifier used in the paper was MobileNetV1 [4], but we opted for MobileNetV2 or ResNet-18. The authors asserted that more advanced networks identified more subtle cues in the datasets on the classification problems at hand, and for this reason we opted for ResNet-18. Additionally, we observed that the MobileNet model did not perform well on the CelebA dataset for gender classification at this image size. The components of the ${\mathcal{L}}_{\text{rec}}$ loss were scaled according to the authors' suggestion in our correspondence: 0.1 for ${\mathcal{L}}_{\text{rec}}^{x}$ and ${\mathcal{L}}_{\text{LPIPS}}$, and 1 for ${\mathcal{L}}_{\text{rec}}^{w}$. Other loss components were not scaled.
|
| 136 |
+
|
| 137 |
+
On the local GPUs, we used a batch size of 4 with 8 gradient accumulation steps, while we used a batch size of 16 with 4 gradient accumulation steps on the computing cluster. For the training of the MobileNetV2 and ResNet-18 classifiers, we set the learning rate to $1\mathrm{e}{-4}$, used a batch size of 128 and used Adam [7] with default PyTorch parameters.
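Putting these settings together, the optimizer configuration might look as follows; the module definitions are placeholders for the actual StylEx networks:

```python
import torch
import torch.nn as nn

# Placeholder modules; the real encoder/generator/discriminator go here.
encoder, generator, discriminator = nn.Linear(8, 8), nn.Linear(8, 8), nn.Linear(8, 8)

opt_g = torch.optim.Adam([
    {"params": generator.parameters(), "lr": 2e-4},
    {"params": encoder.parameters(), "lr": 1e-5},  # lower LR avoids divergence
], betas=(0.5, 0.9))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.9))
```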
|
| 138 |
+
|
| 139 |
+
### 3.4 Experimental setup and code
|
| 140 |
+
|
| 141 |
+
We aimed to follow the experimental setup as closely as possible for our experiments. Our PyTorch implementation is available on GitHub ${}^{4}$ to further support and advance reproducibility in machine learning research. The repository provides explanations to run the described experiments.
|
| 142 |
+
|
| 143 |
+
### 3.5 Computational requirements
|
| 144 |
+
|
| 145 |
+
Our models were trained on three different machines: 1) a laptop NVIDIA RTX 3060, 2) an NVIDIA RTX 2070 Super and 3) a computing cluster containing GTX 1080 Ti GPUs. It must be noted that the first machine ran the Windows operating system, while the latter two are Linux-based. For both the FFHQ and the Plant-Village dataset, training was done until convergence, which was reached in 150K training steps for FFHQ and 260K training steps for Plant-Village.
|
| 146 |
+
|
| 147 |
+
On the local GPUs, a batch size of 4 (RTX 3060) and 8 (RTX 2070 Super) was used, alongside gradient accumulation for 8 (RTX 3060) and 2 (RTX 2070 Super) steps. On the computing cluster, a batch size of 16 was used, with a gradient accumulation parameter of 4. Depending on the batch size and gradient accumulation, the computational time to run the experiments ranged between 20 and 50 GPU hours. Training for 150000 steps took 20 hours on an RTX 2070 Super.
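Gradient accumulation keeps the effective batch size constant across GPUs with different memory budgets. A minimal, self-contained sketch with a toy model (not the StylEx training loop itself):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.5, 0.9))
loader = [(torch.randn(4, 10), torch.randint(0, 2, (4,))) for _ in range(16)]

accum_steps = 8  # batch 4 x 8 steps = effective batch 32
opt.zero_grad()
for step, (x, y) in enumerate(loader):
    loss = loss_fn(model(x), y) / accum_steps  # scale so gradients average
    loss.backward()                            # gradients accumulate in .grad
    if (step + 1) % accum_steps == 0:
        opt.step()
        opt.zero_grad()
```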
|
| 148 |
+
|
| 149 |
+
## 4 Results
|
| 150 |
+
|
| 151 |
+
### 4.1 Results reproducing original paper
|
| 152 |
+
|
| 153 |
+
#### 4.1.1 Sufficiency
|
| 154 |
+
|
| 155 |
+
We calculate the percentage of flipped classifications after changing the top-10 attributes found by the AttFind procedure. The results can be seen in Table 1. Our result on the authors' model is within 1% of the value reported in the paper. Our own models show significantly worse performance on both perceived gender (51% vs. 93.9%) and plant healthiness (30% vs. 91.2%), suggesting that the attributes they discover are not very relevant for classification.
|
| 156 |
+
|
| 157 |
+
---
|
| 158 |
+
|
| 159 |
+
${}^{4}$ https://anonymous.4open.science/r/Explaining-In-Style-Reproducibility-Study-5665
|
| 160 |
+
|
| 161 |
+
---
|
| 162 |
+
|
| 163 |
+
|  | Ours |
| --- | --- |
| *Perceived Gender* | *94.8%* |
| Perceived Gender (Model 1, $s = 2$) | 51% |
| Perceived Gender (Model 2, $s = 1$) | 21% |
| Plants ($s = 2$) | 30% |
|
| 164 |
+
|
| 165 |
+
Table 1: Percentage of flipped classifications on different datasets. The row in italics shows our experiment on the authors' supplied model. $s$ denotes the shift size used to generate the results; shift sizes were chosen by qualitative inspection of the produced images.
|
| 166 |
+
|
| 167 |
+
#### 4.1.2 Coherency and Distinctness
|
| 168 |
+
|
| 169 |
+
Similar to the original paper, we conducted a user study $\left( {n = {54}}\right)$ to evaluate the distinctness of the found attributes and the coherence of the generated images. The user study was divided into two parts: 1) a classification study and 2) a verbal description study, following a setup similar to that presented in [16]. For the classification study, users are shown four animations in a grid format, each corresponding to a modification of a given attribute. In the verbal description study, users were asked to look at four animations and then describe the changing attribute in 1-4 words.
|
| 170 |
+
|
| 171 |
+
We did this for both the plant and the FFHQ dataset. The order of the datasets was randomized to avoid biases and learning effects. All participants are undergraduate and graduate students with some affinity for and knowledge of machine learning. None of them self-reported colourblindness. Appendix A shows examples of the posed questions and the type of answers provided. Although our results seem to slightly outperform those of Wu et al. (2021) on the perceived-gender classifier, they do not seem to outperform the method posed by Lang et al. (2021).
|
| 172 |
+
|
| 173 |
+
|  | Wu et al. | Lang et al. | Ours |
| --- | --- | --- | --- |
| Perceived Gender | 0.783 (±0.186) | 0.96 (±0.047) | Model 1: 0.52 (±0.2081), Model 2: 0.79 (±0.1599) |
| Plants | 0.91 (±0.081) | 0.916 (±0.081) | 0.66 (±0.323) |
|
| 174 |
+
|
| 175 |
+
Table 2: User study results. Partial reproduction of Table 2 of the original paper, on a subset of the datasets.
|
| 176 |
+
|
| 177 |
+
### 4.2 Results beyond original paper
|
| 178 |
+
|
| 179 |
+
To investigate the impact of attribute perturbation on the quality of the generated images, we compute the FID [3] between the original images and the generated images using [12]. We perturbed the images with increasingly more attributes in a cumulative fashion, starting from the 0th attribute, which corresponds to only encoding and decoding the image. For the pretrained model from the original authors, we used the provided subset of 250 latents and their corresponding original images from FFHQ. For our own models, we used subsets of 100 images (500 images for Model 2) due to computational constraints in running the AttFind algorithm. Our results, shown in Figure 3, show that the FID increases with the number of perturbed attributes. This result is not surprising, as changing an attribute can produce combinations of features not commonly seen in the original data distribution (e.g. a young boy with lipstick). Moreover, in our reproducibility study we noticed that perturbing more attributes at once resulted in more artefacts, which could also have caused the FID to increase.
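For reference, with the pytorch-fid package [12] (version 0.2.1) the computation reduces to a single call; the directory paths below are illustrative placeholders:

```python
from pytorch_fid.fid_score import calculate_fid_given_paths

# Compare originals against counterfactuals generated after perturbing
# the top-k attributes; paths are placeholders.
fid = calculate_fid_given_paths(
    ["data/originals", "data/perturbed_top_k"],
    batch_size=50, device="cuda", dims=2048)
print(f"FID: {fid:.2f}")
```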
|
| 180 |
+
|
| 181 |
+

|
| 182 |
+
|
| 183 |
+
Figure 3: FID scores after perturbing top- $k$ attributes.
|
| 184 |
+
|
| 185 |
+
## 5 Discussion

Our experimental results support the claims posed in the original paper:
|
| 186 |
+
|
| 187 |
+
the attributes detected by StylEx are identifiable by humans to a certain degree, distinct and sufficient. However, due to the significantly lower resolution and poorer image quality of the models, these results are not comparable to the ones posed in the original paper.
|
| 188 |
+
|
| 189 |
+
Reflection on our reproduction study: An important insight obtained during this study is that the provided code did not cover the entire scope of the paper. Through a thorough study of both the code and the paper, we quickly noted discrepancies and missing elements that were fundamental to the original research, such as the network architecture, the scaling of the losses and the hyperparameter configurations.
|
| 190 |
+
|
| 191 |
+
We believe that researchers could enhance transparency and reproducibility in machine learning research by adding a reproducibility statement to their work, including the hardware and software used and details relevant to the proposed study (e.g. clarifications on the exact network architecture). Moreover, it is important to detail hyperparameter search spaces and the final parameter settings for all architectures and baselines used. We believe that transparency is fundamental to stimulating the large-scale deployment of machine learning algorithms.
|
| 192 |
+
|
| 193 |
+
### 5.1 What was easy
|
| 194 |
+
|
| 195 |
+
It was relatively easy to run the code in the Jupyter Notebook provided by the authors. The notebook was thoroughly documented and written in a consistent coding style, making it easy to interpret. However, it lacked the elements needed to fully reproduce the research: the training procedure was missing, only one pretrained model was provided, and four datasets had to be added by us. As such, we had to partially re-implement the framework in PyTorch, whereas the original implementation was provided in TensorFlow. Adding new datasets to our framework to accommodate the experiments was a relatively easy task.
|
| 196 |
+
|
| 197 |
+
### 5.2 What was difficult
|
| 198 |
+
|
| 199 |
+
Reproducing the experiments at the same computational scale as the authors proved the largest challenge, given our limited computational resources. For the training of the model, the original authors made use of 8 NVIDIA V100s, which took them a week at the full resolution of ${256}^{2}$, whereas we were restricted to the computing cluster, Colab/Kaggle and our local GPUs. Due to this limitation, we had to scale down the resolution of the generated images across the different datasets to ${64}^{2}$, which limited the fidelity of the results. Additionally, we experienced the following issues with the original paper:
|
| 200 |
+
|
| 201 |
+
1. Few to no hyperparameters were given in the paper, e.g. for the scaling of the losses, the learning rates, etc.
|
| 202 |
+
|
| 203 |
+
2. Ambiguities about the training procedure: the classifier in the notebook was trained on CelebA instead of the FFHQ dataset, which we did not expect. This appeared to be a design choice by the authors, as the CelebA dataset contains labels from which the network can leverage information. Additionally, softmax logits appeared to be added to the discriminator, which was not mentioned explicitly in the paper but appears to follow the cGAN [10] training procedure.
|
| 204 |
+
|
| 205 |
+
3. Ambiguities about the network architecture: it was not entirely clear what the dimensionality and function of the $Z$ vector were, as the paper did not explicitly mention this.
|
| 206 |
+
|
| 207 |
+
4. Ambiguities about the preprocessing pipeline of the images before they enter the encoder/classifier: in contact with the authors, they appeared to scale the RGB values to $\left\lbrack {-1,1}\right\rbrack$.
|
| 208 |
+
|
| 209 |
+
The original authors did provide the hyperparameter configurations early on, which slightly reduced the time needed to explore the different possibilities, but the provided learning rate, for example, was too high for us. Additionally, the conversion of the AttFind algorithm from TensorFlow to PyTorch proved to be a somewhat difficult exercise. The challenge lay predominantly in the integration of this algorithm within the new PyTorch codebase, which required a thorough understanding of its internal workings.
|
| 210 |
+
|
| 211 |
+
### 5.3 Communication with original authors
|
| 212 |
+
|
| 213 |
+
Three emails were sent to the first author of the paper, asking for additional details on the proposed network architecture, the hyperparameter configurations and the training procedure of the networks. These details were noted neither in the paper nor in the provided code. Answers to these questions were provided promptly. Unfortunately, the authors were not able to share their code for the training procedure, as, from their perspective, it contained too many internal dependencies.
|
| 214 |
+
|
| 215 |
+
## References
|
| 216 |
+
|
| 217 |
+
[1] Qualtrics. https://www.qualtrics.com. Accessed: 2022-02-03.
|
| 218 |
+
|
| 219 |
+
[2] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks, 2014.
|
| 220 |
+
|
| 221 |
+
[3] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017.
|
| 222 |
+
|
| 223 |
+
[4] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications, 2017.
|
| 224 |
+
|
| 225 |
+
[5] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4401-4410, 2019.
|
| 226 |
+
|
| 227 |
+
[6] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan, 2020.
|
| 228 |
+
|
| 229 |
+
[7] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.
|
| 230 |
+
|
| 231 |
+
[8] Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, Gal Elidan, Avinatan Hassidim, William T. Freeman, Phillip Isola, Amir Globerson, Michal Irani, and Inbar Mosseri. Explaining in style: Training a gan to explain a classifier in stylespace. arXiv preprint arXiv:2104.13369, 2021.
|
| 232 |
+
|
| 233 |
+
[9] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV), December 2015.
|
| 234 |
+
|
| 235 |
+
[10] Mehdi Mirza and Simon Osindero. Conditional generative adversarial nets, 2014.
|
| 236 |
+
|
| 237 |
+
[11] Roy Or-El, Soumyadip Sengupta, Ohad Fried, Eli Shechtman, and Ira Kemelmacher-Shlizerman. Lifespan age transformation synthesis, 2020.
|
| 238 |
+
|
| 239 |
+
[12] Maximilian Seitzer. pytorch-fid: FID Score for PyTorch. https://github.com/mseitzer/pytorch-fid, August 2020. Version 0.2.1.
|
| 240 |
+
|
| 241 |
+
[13] Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-cam: Visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV), pages 618-626, 2017. doi: 10.1109/ICCV.2017.74.
|
| 242 |
+
|
| 243 |
+
[14] Zongze Wu, Dani Lischinski, and Eli Shechtman. Stylespace analysis: Disentangled controls for stylegan image generation, 2020.
|
| 244 |
+
|
| 245 |
+
[15] Zongze Wu, Dani Lischinski, and Eli Shechtman. Stylespace analysis: Disentangled controls for stylegan image generation. In 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12858-12867, 2021. doi: 10.1109/CVPR46437.2021.01267.
|
| 246 |
+
|
| 247 |
+
[16] Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, and Pradeep Ravikumar. On completeness-aware concept-based explanations in deep neural networks, 2020.
|
| 248 |
+
|
| 249 |
+
[17] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang. The unreasonable effectiveness of deep features as a perceptual metric. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 586-595, Los Alamitos, CA, USA, jun 2018. IEEE Computer Society. doi: 10.1109/CVPR.2018.00068. URL https://doi.ieeecomputersociety.org/10.1109/CVPR.2018.00068.
|
| 250 |
+
|
| 251 |
+
## A User Study
|
| 252 |
+
|
| 253 |
+
### A.1 Classification Study
|
| 254 |
+
|
| 255 |
+
The participants were provided with the following instructions for the classification study:
|
| 256 |
+
|
| 257 |
+
- Look at the animations on the left. Both are examples of the same transformation (change in the image).
|
| 258 |
+
|
| 259 |
+
- Then look at the two candidates on the right, A (top-right) and B (bottom-right).
|
| 260 |
+
|
| 261 |
+
- Choose which one does a similar transformation to those on the left.
|
| 262 |
+
|
| 263 |
+

|
| 264 |
+
|
| 265 |
+
Figure 4: Sample question in the classification study, on the plants dataset.
|
| 266 |
+
|
| 267 |
+
Correct answer: B
|
| 268 |
+
|
| 269 |
+
Accuracy: 20/54 participants were correct.
|
| 270 |
+
|
| 271 |
+
### A.2 Verbal Description Study
|
| 272 |
+
|
| 273 |
+
The participants were provided with the following instructions for the verbal description study:
|
| 274 |
+
|
| 275 |
+
- Look at the animation.
|
| 276 |
+
|
| 277 |
+
- Describe in 1-4 words the single most prominent attribute that changes for all images.
|
| 278 |
+
|
| 279 |
+

|
| 280 |
+
|
| 281 |
+
Figure 5: Sample question in the verbal description study, on the plants dataset.
|
| 282 |
+
|
| 283 |
+
Users' descriptions: lighting, colour/color, brightness, changes
|
| 284 |
+
|
| 285 |
+
Most common word: lighting
|
| 286 |
+
|
| 287 |
+
## B Top attributes
|
| 288 |
+
|
| 289 |
+
### B.1 FFHQ - Model 1
|
| 290 |
+
|
| 291 |
+

|
| 292 |
+
|
| 293 |
+
Figure 6: Perceived Age - Model 1. Classifier-specific interpretable attributes
|
| 294 |
+
|
| 295 |
+

|
| 296 |
+
|
| 297 |
+
Figure 7: Perceived Health. Classifier-specific interpretable attributes
|
| 298 |
+
|
| 299 |
+
|
| 300 |
+
|
| 301 |
+

|
| 302 |
+
|
| 303 |
+
Figure 8: Perceived Age - Model 2. Classifier-specific interpretable attributes
|
| 304 |
+
|
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SYUxyazQh0Y/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,227 @@
| 1 |
+
§ [RE] EXPLAINING IN STYLE: TRAINING A GAN TO EXPLAIN A CLASSIFIER IN STYLESPACE
|
| 2 |
+
|
| 3 |
+
Anonymous Author(s)
|
| 4 |
+
|
| 5 |
+
Affiliation
|
| 6 |
+
|
| 7 |
+
Address
|
| 8 |
+
|
| 9 |
+
email
|
| 10 |
+
|
| 11 |
+
|
| 12 |
+
|
| 13 |
+
§ REPRODUCIBILITY SUMMARY
|
| 14 |
+
|
| 15 |
+
§ SCOPE OF REPRODUCIBILITY
|
| 16 |
+
|
| 17 |
+
StylEx is an approach for classifier-conditioned training of a StyleGAN2 [6], intending to capture classifier-specific attributes in its disentangled StyleSpace [15]. Attributes can be adjusted to generate counterfactual explanations of the classifier decisions. StylEx is domain- and classifier-agnostic, while its explanations are claimed to be human-interpretable, distinct, coherent and sufficient to produce flipped classifier decisions. We verify these claims by reproducing a selection of the experiments in the paper.
|
| 18 |
+
|
| 19 |
+
§ METHODOLOGY
|
| 20 |
+
|
| 21 |
+
We verified a selection of the experimental results using the code made available by the authors. However, a significant part of the training procedure, network architecture and hyperparameter configurations was missing. As such, we reimplemented the model, porting the available TensorFlow code to PyTorch, to enable easier reproducibility of the proposed case studies. All experiments ran in approximately 20-50 GPU hours per dataset, depending on the batch size, gradient accumulation and GPU.
|
| 22 |
+
|
| 23 |
+
§ RESULTS
|
| 24 |
+
|
| 25 |
+
We verified that the publicly available pretrained model has a 'sufficiency' measure within $1\%$ of the value reported in the paper. Additionally, we evaluated the Fréchet inception distance (FID) scores of images generated by the released model, and show that the FID score increases with the number of attributes used to generate a counterfactual explanation. Custom models were trained on three datasets, with a reduced image dimensionality $\left( {64}^{2}\right)$. Additionally, a user study was conducted to evaluate the distinctness and coherence of the images. We report significantly lower accuracy on the identification of the extracted attributes and lower 'sufficiency' scores for our models.
|
| 26 |
+
|
| 27 |
+
§ WHAT WAS EASY
|
| 28 |
+
|
| 29 |
+
It was easy to run the provided Jupyter Notebook, and verify the results of the pretrained models on the FFHQ dataset. Extending an existing StyleGAN2 implementation to fit this study was relatively easy.
|
| 30 |
+
|
| 31 |
+
§ WHAT WAS DIFFICULT
|
| 32 |
+
|
| 33 |
+
Reproducing the experiments at the same scale as the authors was difficult, as was developing the full training procedure, model architecture and hyperparameters, particularly due to underspecification in the original paper. The conversion of code from TensorFlow to PyTorch was also challenging.
|
| 34 |
+
|
| 35 |
+
§ COMMUNICATION WITH ORIGINAL AUTHORS
|
| 36 |
+
|
| 37 |
+
We corresponded with the first author of the paper through several emails. Through this correspondence, additional details were released on the network architecture, the training procedure and the hyperparameter configurations.
|
| 38 |
+
|
| 39 |
+
[Figure 1: image panels with classifier probabilities (0.00-1.00) and attribute labels such as "Blight middle of leaf", "Unknown" and "Eyebrow Thickness"; see caption below.]
|
| 40 |
+
|
| 41 |
+
Figure 1: Top-1 automatically detected attributes for the perceived-gender classifiers (left: version 1, right: version 2) and the perceived-health-of-leaves classifier (middle). As in the original paper, the counterfactual images are marked by a frame. Displayed probabilities correspond to the person being male for perceived gender and the leaf being healthy for perceived health. More attributes can be found in the appendix.
|
| 42 |
+
|
| 43 |
+
§ 1 INTRODUCTION
|
| 44 |
+
|
| 45 |
+
Existing post-hoc visual explainability measures, such as heatmaps [13], can highlight regions that influence the decision. However, they do not visualize non-spatial localized attributes, nor do they indicate how these areas may be changed to influence the classification. Counterfactual explanations, which are statements of the form "Had the input $\mathbf{x}$ been $\widetilde{\mathbf{x}}$, the classifier output would have been $\widetilde{\mathbf{y}}$ instead of $\mathbf{y}$", have been proposed as an alternative which both specifies the important features and naturally explains how they can be altered to achieve an alternative outcome.
|
| 46 |
+
|
| 47 |
+
As such, these explanations are promising, as they can provide a suggestive recourse to non-domain experts in a machine learning-based decision system. The effectiveness of these methods strongly depends on the intuitive difference that humans observe; therefore, one of the primary objectives is to find these attributes. Secondary objectives involve the visualization and control of the impact of these features on the classifier output.
|
| 48 |
+
|
| 49 |
+
In this work, we reproduce the paper 'Explaining in Style: Training a GAN to Explain a Classifier in StyleSpace' [8]. The paper proposes a novel method for explaining the classification of a given image, by altering human-interpretable features discovered to affect the classification output. We reimplemented the model in PyTorch together with the training procedure, as the original TensorFlow implementation lacked the training procedure code. We performed training on the FFHQ and PlantVillage datasets using a lower resolution. Using our own implementation, we check whether the results are consistent with the descriptions provided in the paper. We strengthen this with the addition of a human-grounded evaluation of the generated images. Additionally, we used the FID measure to evaluate the image quality of the counterfactual generated images.
|
| 50 |
+
|
| 51 |
+
§ 2 SCOPE OF REPRODUCIBILITY
|
| 52 |
+
|
| 53 |
+
The StylEx model, in addition to the AttFind algorithm defined in the paper, is presented as a viable option for generating counterfactual explanations of black-box classifiers. The StylEx model aims to make individual StyleSpace coordinates classifier-relevant, through a novel training procedure which is outlined in Section 3.
|
| 54 |
+
|
| 55 |
+
As no benchmark metrics exist to evaluate and assess attribute-based counterfactual explanations, the authors propose three evaluation criteria themselves: 1) visual coherence, 2) distinctness and 3) ’effect of attributes on classification’ (sufficiency). We reformulate these criteria as the main claims of the paper in the following manner:
|
| 56 |
+
|
| 57 |
+
1. Visual Coherence: Attributes detected by StylEx should be clearly identifiable by humans.
|
| 58 |
+
|
| 59 |
+
2. Distinctness: The attributes extracted by StylEx should be distinct.
|
| 60 |
+
|
| 61 |
+
3. Sufficiency: Changing attributes should result in a change of classifier output, where changing multiple attributes has a cumulative effect.
|
| 62 |
+
|
| 63 |
+
§ 3 METHODOLOGY
|
| 64 |
+
|
| 65 |
+
To evaluate claims 1 and 2, the authors conduct a user study in two parts. To evaluate claim 3, they study the percentage of flipped classifications when modifying the top- $k$ (in their case $k = {10}$ ) attributes. To reproduce these claims, we conduct the same experiments, albeit at a lower dimensionality of ${64}^{2}$. The complex network architecture of StyleGAN, as well as the encoder, requires a significant number of training epochs until convergence, and thus training these at the full resolution of ${256}^{2}$ is extremely computationally expensive.
|
| 66 |
+
|
| 67 |
+
We verify the sufficiency scores of the released model by making use of the supplied Jupyter Notebook. However, several crucial elements were missing, including details of the training procedure, the hyperparameter configurations and the optimization procedure. As such, we ported the available TensorFlow code to PyTorch, to enable easier reproducibility of the proposed case studies.
|
| 68 |
+
|
| 69 |
+
We reimplemented the StylEx model in PyTorch, using an open-source StyleGAN2 implementation as a starting point. ${}^{1}$
|
| 70 |
+
|
| 71 |
+
For running our code, we made use of an NVIDIA GTX 1080 Ti, an RTX 2070 Super and a laptop RTX 3060 graphics card, running on different machines. To conduct the user study, we made use of the online survey tool Qualtrics [1].
|
| 72 |
+
|
| 73 |
+
[Figure 2: architecture diagram showing $Z \sim N\left( {\mu ,{\sigma }^{2}}\right)$, the StyleVectorizer, encoder $E$, classifier $C$, generator $G$ and the losses ${L}_{rec}^{x}$, ${L}_{rec}^{w}$, ${L}_{cls}$; see caption below.]
|
| 74 |
+
|
| 75 |
+
Figure 2: StylEx network architecture, with the respective classifier $C$, generator $G$, discriminator $D$ and encoder $E$. For clarification, we have slightly adapted the visualization to include the StyleVectorizer, which obtains the latent vector $w$ from $z$ [5], after learning that the authors have used alternating training [1].
|
| 76 |
+
|
| 77 |
+
§ 3.1 MODEL DESCRIPTIONS
|
| 78 |
+
|
| 79 |
+
In addition to a pretrained classifier $C$, StylEx comprises three trainable elements: 1) a generator $G$, 2) a discriminator $D$ and 3) an encoder $E$. $D$ and $G$ follow the StyleGAN2 architecture, with minor alterations to $D$ which are explained below. Figure 2 provides an overview of the network architecture.
|
| 80 |
+
|
| 81 |
+
Some design details were unspecified or omitted in the original paper. We contacted the authors, who provided clarification on these issues, summarized as follows:
|
| 82 |
+
|
| 83 |
+
1. StylEx is trained using both encoder input and noise input transformed through StyleGAN2's mapping network, using alternating steps;
|
| 84 |
+
|
| 85 |
+
2. The output of $D$ is a weighted sum of the 2-dimensional output of its last layer with the input probabilities of 1) the original image when using the encoder, or 2) the randomly sampled image when using noise input;
|
| 86 |
+
|
| 87 |
+
3. ${\mathcal{L}}_{rec}$ and ${\mathcal{L}}_{cls}$ are only calculated during the generator training steps.
|
| 88 |
+
|
| 89 |
+
${}^{1}$ https://github.com/lucidrains/stylegan2-pytorch
|
| 90 |
+
|
| 91 |
+
The GAN is trained jointly with the encoder, which embeds an image into the $W$ latent space of StyleGAN2, forming a latent vector $w$. A recent observation by [14] highlighted the disentanglement of this space (appropriately called the StyleSpace), which is used to extract classifier-specific attributes. Logits of the original image, $C\left( x\right)$, are then appended to $w$ to condition the training on classifier outputs. The architecture includes a StyleVectorizer that obtains the latent vector $w$ from $z$, which is sampled from a normal distribution. In alternating steps, the generator was fed input from the encoder and input from the StyleVectorizer mapping network [5]. The original authors noticed a slight improvement in image quality using alternating training, compared to only using the encoder input.
|
| 92 |
+
|
| 93 |
+
We note that we used two slightly different implementation choices when training our models. The first implementation does not include the discriminator change mentioned above, while the second does, and uses probabilities instead of logits for concatenation to $w$. We call these two choices 'Model 1' and 'Model 2' in results on datasets where we have trained both. We additionally noted that the MobileNet classifier that 'Model 1' was trained with did not perform well on faces. For this reason, a ResNet classifier was used to perform the AttFind algorithm for both face models.
|
| 94 |
+
|
| 95 |
+
This expanded latent vector $w$, obtained from either the encoder or the StyleVectorizer, is passed on to the StyleGAN2, where it is transformed into the StyleSpace by a set of concurrent affine transformations to style vectors ${s}_{0},\ldots ,{s}_{n}$. These style vectors are used to generate novel images that aim to reconstruct the original image as closely as possible. Several losses are used to quantitatively assess the convergence of the training procedure. The cumulative training loss for the algorithm is a sum of losses, denoted as follows:
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
\mathcal{L}_{\text{StylEx}} = \mathcal{L}_{\text{adv}} + \mathcal{L}_{\text{reg}} + \mathcal{L}_{\text{rec}} + \mathcal{L}_{\text{cls}}. \tag{1}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
A logistic adversarial loss [2], ${\mathcal{L}}_{adv}$, is used as in standard GAN training, together with the regularization loss ${\mathcal{L}}_{reg}$ described in the original StyleGAN paper [6]. The reconstruction loss ${\mathcal{L}}_{\text{rec}}$ is given by the sum ${\mathcal{L}}_{\text{rec}}^{x} + {\mathcal{L}}_{\text{rec}}^{w} + {\mathcal{L}}_{\text{LPIPS}}$, where the first two terms are the L1 distance between the original and reconstructed input, and between the original and reconstructed $w$ latent vector, respectively. The ${\mathcal{L}}_{LPIPS}$ term is the LPIPS distance between the original and reconstructed input, as described in [17]. This loss ensures that reconstructed images resemble the original input as closely as possible, so that they can serve as an input for generating counterfactual examples. The classifier loss is the Kullback-Leibler divergence between the classifier outputs on the original input image $X$ and on the generated image $G\left( {E\left( X\right), C\left( X\right) }\right)$: ${\mathcal{L}}_{cls} = {D}_{KL}\left\lbrack {C\left( {x}^{\prime }\right) \parallel C\left( x\right) }\right\rbrack$. This loss ensures that the generator does not disregard image attributes that are important for the classification.
|
| 102 |
+
|
| 103 |
+
To extract classifier-specific attributes, the AttFind algorithm is proposed in the paper. As input, it takes the trained model and a set of $N$ images whose predicted label does not match the target label $y$. For each class label, AttFind encodes the images and iteratively tries to find a set ${S}_{y}$ of $M$ style coordinates that represent the largest possible shift towards the opposing class. Next to this, it finds the set of directions ${D}_{y}$ that indicate in which direction each coordinate needs to be adjusted to flip the classifier decision. In each iteration, it considers all $K$ style coordinates and determines the coordinate with the largest effect. All images where changing this coordinate has a large effect on their probability are removed from the iteration. The process is repeated until no images are left, or until $M$ attributes are found.
|
| 104 |
+
|
| 105 |
+
§ 3.2 DATASETS
|
| 106 |
+
|
| 107 |
+
We reproduce a selection of the authors' findings in our PyTorch re-implementation, making use of the following datasets:
|
| 108 |
+
|
| 109 |
+
1. CelebA [9]: The original Large-scale CelebFaces Attributes (CelebA) dataset ${}^{2}$ contains 200000 image entries, each with 40 attribute annotations. We have trained classifiers on both the gender and the age attribute.
|
| 110 |
+
|
| 111 |
+
2. FFHQ [11]: The original Flickr-Faces-HQ dataset contains 70000 images of human faces. This dataset was used for StylEx training, while the pretrained classifier was trained on the CelebA dataset, following the procedure of the original paper. ${}^{3}$
|
| 112 |
+
|
| 113 |
+
3. Plant-Village: This dataset contains 54303 entries in 38 categories. It was used to train the classifier to distinguish between sick and healthy leaves.
|
| 114 |
+
|
| 115 |
+
For the classification tasks, the FFHQ dataset was split into train/validation/test sets with a 70/15/15 ratio, while Plant-Village used a ${70}/{20}/{10}$ split.
|
| 116 |
+
|
| 117 |
+
${}^{2}$ https://www.kaggle.com/jessicali9530/celeba-dataset
|
| 118 |
+
|
| 119 |
+
${}^{3}$ This is a detail that was revealed through contact with the authors.
|
| 120 |
+
|
| 121 |
+
§ 3.3 HYPERPARAMETERS
|
| 122 |
+
|
| 123 |
+
Original research: For the partial reproduction of Table 3 of the original paper, we limited ourselves to a sample of $n = {250}$ images, rather than the $n = {1000}$ randomly sampled images, as denoted in the Jupyter Notebook.
|
| 124 |
+
|
| 125 |
+
Reimplementation: The computational cost of training StylEx precluded an in-depth hyperparameter search. For all modules except the encoder, we found that a learning rate of $2\mathrm{e}{-4}$ for the Adam optimizer performs well, with ${\beta }_{1} = {0.5}$ and ${\beta }_{2} = {0.9}$. We found the training to diverge unless the encoder learning rate was lowered significantly, to $1\mathrm{e}{-5}$. We ascribe this difference to the significantly smaller input size in our models, or to subtle implementation differences in the original paper which we do not have access to.
|
| 126 |
+
|
| 127 |
+
The classifier used in the paper was MobileNetV1 [4], but we opted for MobileNetV2 or ResNet-18. The authors asserted that more advanced networks identified more subtle cues in the datasets on the classification problems at hand, and for this reason we opted for ResNet-18. Additionally, we observed that the MobileNet model did not perform well on the CelebA dataset for gender classification at this image size. The components of the ${\mathcal{L}}_{\text{rec}}$ loss were scaled according to the authors' suggestion in our correspondence: 0.1 for ${\mathcal{L}}_{\text{rec}}^{x}$ and ${\mathcal{L}}_{\text{LPIPS}}$, and 1 for ${\mathcal{L}}_{\text{rec}}^{w}$. Other loss components were not scaled.
|
| 128 |
+
|
| 129 |
+
On the local GPUs, we used a batch size of 4 with 8 gradient accumulation steps, while we used a batch size of 16 with 4 gradient accumulation steps on the computing cluster. For the training of the MobileNetV2 and ResNet-18 classifiers, we set the learning rate to $1\mathrm{e}{-4}$, used a batch size of 128 and used Adam [7] with default PyTorch parameters.
|
| 130 |
+
|
| 131 |
+
§ 3.4 EXPERIMENTAL SETUP AND CODE
|
| 132 |
+
|
| 133 |
+
We aimed to follow the experimental setup as closely as possible for our experiments. Our PyTorch implementation is available on GitHub ${}^{4}$ to further support and advance reproducibility in machine learning research. The repository provides explanations to run the described experiments.
|
| 134 |
+
|
| 135 |
+
§ 3.5 COMPUTATIONAL REQUIREMENTS
|
| 136 |
+
|
| 137 |
+
Our models were trained on three different machines: 1) a laptop NVIDIA RTX 3060, 2) an NVIDIA RTX 2070 Super and 3) a computing cluster containing GTX 1080 Ti GPUs. It must be noted that the first machine ran the Windows operating system, while the latter two are Linux-based. For both the FFHQ and the Plant-Village dataset, training was done until convergence, which was reached in 150K training steps for FFHQ and 260K training steps for Plant-Village.
|
| 138 |
+
|
| 139 |
+
On the local GPUs, a batch size of 4 (RTX 3060) and 8 (RTX 2070 Super) was used, alongside gradient accumulation for 8 (RTX 3060) and 2 (RTX 2070 Super) steps. On the computing cluster, a batch size of 16 was used, with a gradient accumulation parameter of 4. Depending on the batch size and gradient accumulation, the computational time to run the experiments ranged between 20 and 50 GPU hours. Training for 150000 steps took 20 hours on an RTX 2070 Super.
|
| 140 |
+
|
| 141 |
+
§ 4 RESULTS
|
| 142 |
+
|
| 143 |
+
§ 4.1 RESULTS REPRODUCING ORIGINAL PAPER
|
| 144 |
+
|
| 145 |
+
§ 4.1.1 SUFFICIENCY
|
| 146 |
+
|
| 147 |
+
We calculate the percentage of flipped classifications after changing the top-10 attributes found by the AttFind procedure. The results can be seen in Table 1. Our result on the authors' model is within 1% of the value reported in the paper. Our own models show significantly worse performance on both perceived gender (51% vs. 93.9%) and plant healthiness (30% vs. 91.2%), suggesting that the attributes they discover are not very relevant for classification.
|
| 148 |
+
|
| 149 |
+
${}^{4}$ https://anonymous.4open.science/r/Explaining-In-Style-Reproducibility-Study-5665
| Dataset | Ours |
|---|---|
| *Perceived Gender* | *94.8%* |
| Perceived Gender (Model 1, $s = 2$) | 51% |
| Perceived Gender (Model 2, $s = 1$) | 21% |
| Plants ($s = 2$) | 30% |

Table 1: Percentage of flipped classifications on different datasets. The row in italics shows our experiment on the author's supplied model. $s$ is the shift size used to generate the results; shift sizes were chosen by qualitatively inspecting the produced images.

§ 4.1.2 COHERENCY AND DISTINCTNESS

Similar to the original paper, we conducted a user study ($n = 54$) to evaluate the distinctness of the found attributes and the coherence of the generated images. The study was divided into two parts, following a setup similar to [16]: 1) a classification study and 2) a verbal description study. In the classification study, users are shown four animations in a grid, each corresponding to a modification of a given attribute. In the verbal description study, users were asked to look at four animations and describe the changing attribute in 1-4 words.

We did this for both the plant and FFHQ datasets. The order of the datasets was randomized to avoid biases and learning effects. All participants are undergraduate and graduate students with some affinity for and knowledge of machine learning; none self-reported colourblindness. In [4], a few examples can be found of the posed questions and the type of answers provided. Although our results seem to slightly outperform those of Wu et al. (2021) on the perceived gender classifier (Model 2), they do not outperform the method posed by Lang et al. (2021).

| Dataset | Wu et al. | Lang et al. | Ours |
|---|---|---|---|
| Perceived Gender | 0.783 (±0.186) | 0.96 (±0.047) | Model 1: 0.52 (±0.2081); Model 2: 0.79 (±0.1599) |
| Plants | 0.91 (±0.081) | 0.916 (±0.081) | 0.66 (±0.323) |

Table 2: User study results. Partial reproduction of Table 2 of the original paper, on a subset of the datasets.

§ 4.2 RESULTS BEYOND ORIGINAL PAPER

To investigate the impact of attribute perturbation on the quality of the generated images, we compute the FID [3] between the original and generated images using [12]. We perturbed the images with increasingly many attributes in a cumulative fashion, starting from the 0th attribute, which corresponds to only encoding and decoding the image. For the pretrained model from the original authors, we used the provided subset of 250 latents and their corresponding original images from FFHQ. For our own models, we used subsets of 100 images (500 images for Model 2) due to the computational cost of running the AttFind algorithm. Our results, shown in Figure 3, indicate that FID increases with the number of perturbed attributes. This is not surprising, as changing an attribute can produce combinations of features rarely seen in the original data distribution (e.g. a young boy with lipstick). Moreover, we noticed that perturbing more attributes at once resulted in more artefacts, which could also have caused the FID to increase.

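A sketch of the cumulative perturbation loop; the (index, direction) representation of AttFind's output is a hypothetical simplification, and decoding plus the actual FID computation (e.g. with the pytorch-fid package on dumped image folders) are left out:

```python
import torch

def perturb_top_k(style, attributes, k, shift=2.0):
    """Apply the first k discovered attributes cumulatively to style-space codes."""
    out = style.clone()
    for idx, direction in attributes[:k]:
        out[:, idx] += shift * direction   # attributes 1..k are all applied at once
    return out

style = torch.randn(100, 512)              # toy codes standing in for real latents
attrs = [(3, 1.0), (17, -1.0), (42, 1.0)]  # toy AttFind output
for k in range(len(attrs) + 1):            # k = 0 is a plain encode/decode pass
    shifted = perturb_top_k(style, attrs, k)
    # decode `shifted`, save the images, and compute FID against the originals here
```
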
[Line plot: FID (y-axis, approx. 70-140) against the number of perturbed attributes (x-axis); series: FFHQ - Age - Lang et al., FFHQ - Old Model, FFHQ - New Model, Plant-Village.]

Figure 3: FID scores after perturbing top-$k$ attributes.

§ 5 DISCUSSION

Our experimental results support the claims posed in the original paper:

the attributes detected by StylEx are identifiable by humans to a certain degree, distinct, and sufficient. However, due to the significantly lower resolution and poorer image quality of our models, these results are not directly comparable to the ones posed in the original paper.

Reflection on our reproduction study. An important insight obtained while conducting the study is that the provided code did not cover the entire scope of the paper. Through a thorough study of both the code and the paper, we quickly noted discrepancies and missing elements that were fundamental to the original research, such as the network architecture, the scaling of the losses and the hyperparameter configurations.

We believe that researchers could enhance transparency and reproducibility in machine learning research by adding a reproducibility statement to their work, covering the hardware, software and other details relevant to the proposed study (e.g. clarifications on the exact network architecture). Moreover, it is important to detail the hyperparameter search spaces and final parameter settings for all architectures and baselines. We believe that such transparency is fundamental to stimulating the large-scale deployment of machine learning algorithms.

§ 5.1 WHAT WAS EASY
It was relatively easy to run the Jupyter Notebook provided by the authors. It was thoroughly documented and written in a consistent coding style, which made it easier to interpret. However, the notebook lacked the elements needed to fully reproduce the research: the training procedure was missing, only one pretrained model was provided, and four datasets were missing that we had to add ourselves. As such, we had to partially re-implement the framework in PyTorch, while the original implementation was provided in TensorFlow. Adding new datasets to our framework to accommodate the experiments was a relatively easy task.

§ 5.2 WHAT WAS DIFFICULT
Reproducing the experiments at the same computational scale as the authors proved to be the largest challenge, given our limited computational resources. The original authors trained on 8 NVIDIA V100s for a week at the full resolution of $256^2$, whereas we were restricted to the computing cluster, Colab/Kaggle and our local GPUs. Due to this limitation, we had to scale down the resolution of the generated images across the different datasets significantly, to $64^2$, which limited the fidelity of the results. Additionally, we experienced the following issues with the original paper:

1. Little to no hyperparameter information was given in the paper, e.g. the scaling of the losses and the learning rates.

2. Ambiguities about the training procedure: the classifier in the notebook was trained on CelebA instead of the FFHQ dataset, which we did not expect. This appeared to be a design choice by the authors, as the CelebA dataset contains labels from which the network can leverage information. Additionally, softmax logits appeared to be added to the discriminator, which was not mentioned explicitly in the paper but appears to follow the cGAN [10] training procedure.

3. Ambiguities about the network architecture: it was not entirely clear what the dimensionality and function of the $Z$ vector were, as the paper did not explicitly mention this.

4. Ambiguities about the preprocessing pipeline applied to the images before they enter the encoder/classifier: in contact with the authors, they appeared to scale the RGB values to $[-1, 1]$.

The original authors did provide the hyperparameter configurations early on, which somewhat reduced the time spent exploring different possibilities, but the provided learning rate, for example, was too high for us. Additionally, converting the AttFind algorithm from TensorFlow to PyTorch proved to be a somewhat difficult exercise. The challenge lay predominantly in integrating this algorithm into the new PyTorch codebase, which required a thorough understanding of its internal workings.

§ 5.3 COMMUNICATION WITH ORIGINAL AUTHORS

Three emails were sent to the first author of the paper. In these emails, we asked for additional details on the proposed network architecture, the hyperparameter configurations and the training procedure of the networks; these details were noted neither in the paper nor in the provided code. Answers to these questions were provided promptly. Unfortunately, the authors were not able to share their training code, as it contained too many internal dependencies from their perspective.

papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SZNMKnzQhAY/Initial_manuscript_md/Initial_manuscript.md
ADDED

## Introduction
In this paper, Wu et al. explore an adaptive computation (AC) method that can be efficiently applied to an existing generative open-domain question answering (ODQA) model. They find that, by replacing the encoder of a generative ODQA model with their proposed adaptive passage encoder, they can train an effective adaptive computation policy without tuning the base model. This allows applying adaptive computation to large state-of-the-art generative models, which was previously computationally challenging. Their experimental results show that the method produces more accurate results than a state-of-the-art generative model on both NaturalQuestions and TriviaQA, and that it outperforms the previous AC method by a large margin. We aim to reproduce some of their experimental results.

## Scope of Reproducibility
In this work, we reproduce the results of the original paper, which proposes an APE-FiD-base model and compares it against the FiD-base model from an earlier paper. Our goal is to examine these experiments. Our implementation can be found in our GitHub repository; all details are described in the README file.

## Methodology
In this work, we used the authors' code from their GitHub repository. We used the Google Colab environment with the following configuration:

- Run Type: GPU
- Disk: 80 GB

- RAM: 12 GB
Due to a lack of memory resources, we had to run the experiments on $1/8$ of the original data, which required making changes to the original code; these are presented in detail in our GitHub repository. Each of our experiments lasted an hour and a half (three hours in total).

## Results
We reproduced almost 50% of the results of the original paper. The numbers we obtained were lower than those of the original paper; since memory constraints forced us to experiment on a small portion of the data, we expected our results to differ.

Nevertheless, our results confirmed the superiority of the method proposed in the original paper, which we also observed on the test set data (see our GitHub repository).

## What was easy
Model training was easy. The input parameters required for training were well documented, and good error messages made error handling feasible. (Training the APE-FiD-base model was easier than training the FiD-base model.)

## What was difficult
One of our main problems was that we did not have access to the original paper's data: we encountered a 403 error while downloading it and had to use the FiD-base data instead.

Data preprocessing was also a hard task for us. Due to the large volume of data, we had to split it, and this splitting had to be done with the indices in mind, because the different parts of the data are connected to each other and we had to respect the original data format. We took this into account and preprocessed our data accordingly.

## Communication with original authors
We did not have any communication with the original authors and only used their GitHub repositories.

papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/SZNMKnzQhAY/Initial_manuscript_tex/Initial_manuscript.tex
ADDED

§ INTRODUCTION
In this paper, Wu et al. explore an adaptive computation (AC) method that can be efficiently applied to an existing generative open-domain question answering (ODQA) model. They find that, by replacing the encoder of a generative ODQA model with their proposed adaptive passage encoder, they can train an effective adaptive computation policy without tuning the base model. This allows applying adaptive computation to large state-of-the-art generative models, which was previously computationally challenging. Their experimental results show that the method produces more accurate results than a state-of-the-art generative model on both NaturalQuestions and TriviaQA, and that it outperforms the previous AC method by a large margin. We aim to reproduce some of their experimental results.

§ SCOPE OF REPRODUCIBILITY
In this work, we reproduce the results of the original paper, which proposes an APE-FiD-base model and compares it against the FiD-base model from an earlier paper. Our goal is to examine these experiments. Our implementation can be found in our GitHub repository; all details are described in the README file.

§ METHODOLOGY
In this work, we used the authors' code from their GitHub repository. We used the Google Colab environment with the following configuration:

* Run Type: GPU
* Disk: 80 GB

* RAM: 12 GB
Due to a lack of memory resources, we had to run the experiments on $1/8$ of the original data, which required making changes to the original code; these are presented in detail in our GitHub repository. Each of our experiments lasted an hour and a half (three hours in total).

§ RESULTS
We reproduced almost 50% of the results of the original paper. The numbers we obtained were lower than those of the original paper; since memory constraints forced us to experiment on a small portion of the data, we expected our results to differ.

Nevertheless, our results confirmed the superiority of the method proposed in the original paper, which we also observed on the test set data (see our GitHub repository).

§ WHAT WAS EASY
Model training was easy. The input parameters required for training were well documented, and good error messages made error handling feasible. (Training the APE-FiD-base model was easier than training the FiD-base model.)

§ WHAT WAS DIFFICULT
One of our main problems was that we did not have access to the original paper's data: we encountered a 403 error while downloading it and had to use the FiD-base data instead.

Data preprocessing was also a hard task for us. Due to the large volume of data, we had to split it, and this splitting had to be done with the indices in mind, because the different parts of the data are connected to each other and we had to respect the original data format. We took this into account and preprocessed our data accordingly.

§ COMMUNICATION WITH ORIGINAL AUTHORS
We did not have any communication with the original authors and only used their GitHub repositories.

papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/ScfP3G73CY/Initial_manuscript_md/Initial_manuscript.md
ADDED

# When Does Self-supervision Improve Few-shot Learning? - A Reproducibility Report

Anonymous Author(s)

Affiliation

Address

email

## Reproducibility Summary
## Scope of Reproducibility

The paper investigates applying self-supervised learning (SSL) as a regularizer for meta-learning based few-shot learners. The authors claim that SSL tasks reduce the relative error of few-shot learners by $4\%-27\%$ even when the datasets are small, and that the improvements are greater when the amount of supervision is smaller or the task is more challenging. Further, they observe that incorporating unlabelled images from other domains for SSL can hurt performance, and propose a simple algorithm for selecting images from other domains that provides further improvements.

## Methodology

We reimplement the algorithms in PyTorch, starting from the authors' codebase as a reference. We had to correct several bugs in the codebase and reimplement the domain selection algorithm from scratch, since the codebase did not contain it. We conduct experiments involving combinations of supervised and self-supervised learning on multiple datasets and 2 different architectures, and perform extensive hyperparameter sweeps to test the claims. We used 4 GTX 1080Ti GPUs throughout, and all our experiments including the sweeps took a total compute time of 980 GPU hours.

## Results
On the ResNet-18 architecture with an image size of 224, which the paper uses throughout, our results on 6 datasets overall verify the claim that SSL regularizes few-shot learners, with higher gains on more difficult tasks. Our results also verify that out-of-distribution images for SSL hurt accuracy, and our from-scratch implementation of the domain selection algorithm verifies the paper's claim that it can choose images from a large pool of unlabelled images from other domains and improve performance.

Going beyond the original paper, we also conduct SSL experiments on 5 datasets with the Conv-4-64 architecture at an image size of 84, and find that self-supervision does not boost the accuracy of few-shot learners in this setup. Further, we show results on a practical real-world benchmark for cross-domain few-shot learning, and find that using self-supervision when training the base models degrades performance on these tasks.

## What was easy
The paper was well written and easy to follow, and provided a clear description of the experiments. The authors' code was relatively easy to understand and mostly reflected the experiments described in the paper.

## What was difficult

Since the codebase was not fully complete, it took us a lot of time to identify and fix bugs and to reimplement the algorithms not present in the code. Further, multiple datasets needed a lot of preprocessing before they could be used. The hyperparameters were numerous yet each proved important, and evaluating all the claims of the paper on 5 datasets and 2 architectures was difficult due to the sheer number of experiment configurations, resulting in a very high computational cost of 980 GPU hours.

## Communication with original authors
We maintained contact with the authors throughout the challenge to clarify several implementation details and questions regarding the domain selection algorithm. The authors were responsive and replied promptly with detailed explanations.
## 1 Introduction
Deep learning has made major advances, but these have been possible only due to the availability of large annotated datasets for each task. Methods such as data augmentation and regularization alleviate overfitting in low-data regimes, but not completely. This motivated research in few-shot learning, where the aim is to build a classifier that can be adapted to new classes not seen during training, given very few samples per class. In this work, we reproduce the paper "When Does Self-supervision Improve Few-shot Learning?" by Su et al. (13) (henceforth referred to as "the original paper" or "the paper"), which investigates using self-supervised learning (SSL) in such low-data regimes to improve the performance of meta-learning based few-shot learners.

## 2 Scope of reproducibility
The paper claims that:

- With no additional training data, adding self-supervised tasks such as jigsaw and rotation as auxiliary tasks improves the performance of existing few-shot techniques on benchmarks across several different domains

- The benefits of self-supervision increase with the difficulty of the task, for example when training with a base dataset with less labelled data, or with images of lower quality/resolution

- Additional unlabelled data from dissimilar domains, when used for self-supervision, negatively impacts the performance of few-shot learners
- The proposed domain selection algorithm can alleviate this issue by learning to pick images from a large and generic pool of images
We thoroughly reproduce all the experiments and investigate whether the claims hold true, using the model and the six benchmark datasets used by the authors. Beyond the paper, we find that the results are biased towards the architecture used, and demonstrate that the gains do not hold when the input image size and architecture differ from those reported in the paper. We also report results on the more practical cross-domain few-shot learning setup, where we find that self-supervision does not help ImageNet-trained few-shot learners generalize better to new domains.

## 3 Methodology
The goal of a few-shot learner is to learn representations of base classes that generalize well to novel classes. To this end, the proposed framework combines meta-learning approaches for few-shot learning with self-supervised learning. In general, learning consists of estimating the feature extractor $f$ and the classifier $g$ that minimize the empirical loss $\ell$ over the training data from the base classes $D_s = \{(x_i, y_i)\}_{i=1}^{n}$, consisting of images $x_i \in \mathcal{X}$ and labels $y_i \in \mathcal{Y}$, along with suitable regularization $\mathcal{R}$. This can be written as:

$$
L_s = \sum_{(x_i, y_i) \in D_s} \ell\left(g \circ f(x_i),\, y_i\right) + \mathcal{R}(f, g)
$$

In the original paper, the meta-learning based prototypical network (ProtoNet) provides the supervised loss. During meta-training, the ProtoNet computes the mean of the embeddings of all support samples in a class. A distance metric, such as Euclidean or cosine distance, is then used to classify every query sample into one of the classes based on its distance to the class prototypes. The loss over the query samples is backpropagated through the network, and this procedure is repeated for multiple episodes with $n$ randomly sampled classes per episode and $k$ examples per class, hence referred to as the n-way k-shot setup. The network thus meta-learns to produce useful class prototypes from very few examples. At meta-test time, class prototypes are recomputed and query examples are classified based on their distances to the class prototypes.

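For concreteness, a minimal sketch of the prototype computation and distance-based classification on a toy episode; using the negative squared Euclidean distance as logits is one common convention:

```python
import torch

def prototypical_logits(support, support_y, query, n_way):
    """support: (n_way*k_shot, D), support_y: class ids, query: (Q, D) -> (Q, n_way)."""
    protos = torch.stack([support[support_y == c].mean(0) for c in range(n_way)])
    return -torch.cdist(query, protos) ** 2  # closer prototype -> larger logit

# Toy 5-way 5-shot episode with 16 queries per class on 64-d embeddings
support = torch.randn(25, 64)
support_y = torch.arange(5).repeat_interleave(5)
query = torch.randn(80, 64)
logits = prototypical_logits(support, support_y, query, n_way=5)  # (80, 5)
```
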
Apart from the supervised loss, the paper uses self-supervised losses $\ell_{ss}$ based on data $(\widehat{x}, \widehat{y})$ whose labels can be derived automatically, without any human labelling:

$$
L_{ss} = \sum_{x_i \in D_{ss}} \ell\left(h \circ f(\widehat{x}_i),\, \widehat{y}_i\right)
$$

The jigsaw task splits an image into 9 regions ($3 \times 3$) and permutes the parts to obtain the input $\widehat{x}$; the target label $\widehat{y}$ is the index of the permutation. The total number of permutations is $9!$, which is reduced to 35 indices [cite - 41] by grouping the possible permutations, to control the difficulty of the task. The rotation task rotates the image by an angle $\theta \in \{0^{\circ}, 90^{\circ}, 180^{\circ}, 270^{\circ}\}$ to obtain $\widehat{x}$, with $\widehat{y}$ being the index of the angle.

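A minimal sketch of how the rotation pretext inputs and labels can be constructed for a batch (the exact tensor layout is our assumption):

```python
import torch

def rotation_batch(images):
    """images: (N, C, H, W) -> (4N, C, H, W) rotated copies and (4N,) angle labels."""
    rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    x_hat = torch.cat(rotated, dim=0)                        # 0/90/180/270 degrees
    y_hat = torch.arange(4).repeat_interleave(images.size(0))
    return x_hat, y_hat

x_hat, y_hat = rotation_batch(torch.randn(8, 3, 224, 224))   # (32, 3, 224, 224), (32,)
```
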
The paper uses a weighted combination of the two losses, $L = (1 - \alpha)\, L_s + \alpha\, L_{ss}$. The paper studies self-supervised learning as a regularizer for representation learning in the context of few-shot learning tasks.

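The weighted combination itself is a one-liner; a sketch with toy logits:

```python
import torch
import torch.nn.functional as F

def total_loss(sup_logits, y, ssl_logits, y_hat, alpha=0.5):
    l_s = F.cross_entropy(sup_logits, y)        # supervised episode loss
    l_ss = F.cross_entropy(ssl_logits, y_hat)   # jigsaw / rotation loss
    return (1 - alpha) * l_s + alpha * l_ss

loss = total_loss(torch.randn(80, 5), torch.randint(0, 5, (80,)),
                  torch.randn(32, 4), torch.randint(0, 4, (32,)))
```
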
The authors also propose an algorithm to select images from a large dataset for self-supervision when $D_s$ and $D_{ss}$ are different. Here, a classifier is trained to distinguish the ResNet-101 features of images from $D_s$ from those of images in $D_{ss}$, and the top-$k$ images according to the ratio $p(x \in D_s)/p(x \in D_p)$ are selected for self-supervision.

## 4 Experimental settings

### 4.1 Details regarding the code

The authors provide a public implementation of the code ${}^{1}$, built upon a popular codebase ${}^{2}$ from Chen et al. (1). We found many errors and bugs in the code, and debugging them took up a considerable part of our time. Further, the code for the domain selection algorithm was absent, so we had to reimplement it from scratch. Our code ${}^{3}$ reuses multiple files from the original codebase, corrects several errors, provides easier interfaces to train and test models, and also provides an implementation of the domain selection algorithm. We additionally provide interfaces to train models with a different architecture and to evaluate models in a cross-domain setup.

### 4.2 Model descriptions
The authors use the well-known ResNet-18 architecture for their experiments. ResNet-18 produces a 512-dimensional feature vector for each input. For the jigsaw task, a single fully-connected (fc) layer with 512 units is added on top. The nine patches of an image give nine 512-dimensional feature vectors, which are concatenated, projected to 4096 dimensions with an fc layer, and then mapped to a 35-dimensional output with another fc layer, corresponding to the 35 permutations of the jigsaw task.

For the rotation prediction task, the 512-dimensional output of ResNet-18 is passed through three consecutive fc layers with 128, 128 and 4 units. The 4 outputs of the last layer correspond to the four rotation angles. Between the fc layers, a ReLU activation and a dropout layer with a dropout probability of 0.5 are added. We leave this dropout probability as is, as tuning it would have added too many hyperparameters to optimize for every experimental setup.

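This head maps directly onto a small PyTorch module; a sketch following the description above:

```python
from torch import nn

# 512 -> 128 -> 128 -> 4 fc layers with ReLU and Dropout(0.5) between them;
# the 4 outputs index the rotation angles {0, 90, 180, 270}.
rotation_head = nn.Sequential(
    nn.Linear(512, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 128), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(128, 4),
)
```
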
Apart from the ResNet-18 architecture used in the paper, we use another architecture that is equally common in few-shot learning papers (1) (12) (14) (3): Conv-4-64, a simpler architecture with a $3 \times 3$ kernel size and 64 filters at each layer. A similar extension is made for the jigsaw and rotation tasks. In the literature, this architecture has typically been used to process $84 \times 84$ images, while the ResNet variants process $224 \times 224$ images; we follow these works and report results with the respective image size for each architecture. Both architectures are represented diagrammatically in tables 16 and 15 respectively in the appendix.

### 4.3 Datasets
Following the few-shot setup, each dataset is split into three disjoint sets, each with a different set of classes. A model is trained on the base set, validated on the validation set, and tested on the test set. Following the paper, we experiment with multiple datasets across diverse domains, denoting the number of classes in the base/val/test splits in brackets: CUB-200-2011 (2) (64, 12, 20), Stanford Cars (6) (98, 49, 49), FGVC-Aircraft (9) (50, 25, 25), Stanford Dogs (5) (60, 30, 30) and Oxford Flowers (10) (51, 26, 26). These 5 datasets are henceforth referred to as "the smaller datasets". Apart from these, we also experiment with a benchmark dataset for few-shot learning, the miniImageNet dataset (16) (64, 16, 20). The original paper also reports results on Tiered-ImageNet, but we could only work with miniImageNet due to compute and time constraints.

We use the same base-validation-novel class splits as the paper, provided in their official repository. Each dataset has 3 files, one each for base, val and novel, listing the classes to be used along with all the image paths for each class. These files follow from the repository of Chen et al. (1), whose codebase the authors build upon.

---

${}^{1}$ https://github.com/cvl-umass/fsl_ssl

${}^{2}$ https://github.com/wyharveychen/CloserLookFewShot

${}^{3}$ https://github.com/ashok-arjun/fsl_ssl_working/

---

Among the small datasets, we found no versions of the flowers and cars datasets that could be used directly; we therefore preprocessed the two datasets and contributed them to Kaggle for public use ${}^{4}$ ${}^{5}$. For miniImageNet, we found that all directly downloadable versions (11) (8) contain images resized to $84 \times 84$; however, we needed a dataset whose images could be resized to either $84 \times 84$ or $224 \times 224$ adaptively. Hence, we had to download the ImageNet dataset (155 GB) and process the dataset from scratch, which caused storage issues and also took up a significant part of our time. To this end, we also open-source a miniImageNet dataset with the same image sizes as ImageNet, to save other researchers the time of preprocessing the dataset from scratch ${}^{6}$. To the best of our knowledge, we are the first to release such a version.

For the domain selection algorithm, the authors use the training sets of two large datasets, Open Images v5 (7) and iNaturalist (15), which are 500 GB and 200 GB in size respectively. These sizes far exceeded our storage capacity, so we could only use the validation sets of the two datasets as unlabelled images for self-supervision.

### 4.4 Hyperparameters
We perform hyperparameter sweeps of 10 runs each, amounting to 130 runs in total, using Weights and Biases. Each sweep uses random search over the following hyperparameters:

- Learning rate: uniform(0.0001, 0.03)

- Batch normalization mode:
1. Use batch normalization, accumulate statistics throughout training, and use the statistics during testing
2. Use batch normalization, but do not track the running mean and variance during training; estimate them from the batches at both training and test time

3. No batch normalization
- $\alpha$, the weight of the SSL term in the loss (only where self-supervision is applied)

We include these batch-norm modes because the paper (page 21, Appendix A.5) states that, especially for the jigsaw task, the authors found mode 2 to be optimal: in jigsaw, the inputs contain both full-sized images and small patches, which might have different statistics. To verify this, and for completeness, we also searched over the batch normalization modes. All models are trained with the Adam optimizer with $\beta_1 = 0.9$ and $\beta_2 = 0.999$.

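A sketch of such a sweep as a Weights and Biases configuration; the project name, the metric name and the search range for $\alpha$ are illustrative assumptions:

```python
import wandb

def train():
    run = wandb.init()  # one sampled configuration per run
    cfg = run.config    # cfg.lr, cfg.bn_mode, cfg.alpha
    # build the model from cfg, train, and report wandb.log({"val_accuracy": ...})

sweep_config = {
    "method": "random",
    "metric": {"name": "val_accuracy", "goal": "maximize"},
    "parameters": {
        "lr": {"distribution": "uniform", "min": 0.0001, "max": 0.03},
        "bn_mode": {"values": [1, 2, 3]},
        "alpha": {"distribution": "uniform", "min": 0.0, "max": 1.0},
    },
}
sweep_id = wandb.sweep(sweep_config, project="fsl-ssl-repro")
wandb.agent(sweep_id, function=train, count=10)  # 10 runs per sweep
```
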
We then use the configuration that gives the best validation accuracy over 100 epochs, computed over 600 randomly sampled episodes. Due to computational constraints, we search hyperparameters for certain datasets only and reuse the hyperparameters found for similar datasets. The selected experiment configurations are given in the appendix due to space constraints.

Across all of our sweeps, we notice that the best $\alpha$ stays below 0.6 and never goes below 0.3. Hence, we infer that an adequate amount of supervision is also needed for good performance, and that too much self-supervision hurts accuracy. For the miniImageNet dataset, we find that values close to 0.3 work best, which the paper reiterates. The paper reports using 0.5 for all SSL experiments on the small datasets, which our findings are consistent with, as our $\alpha$ converges to values between 0.4 and 0.6 on those datasets. All of our reported results use the best hyperparameters found. We report more details on the hyperparameter searches in the appendix.

### 4.5 Computational requirements
We used 4 Nvidia 1080Ti GPUs for all experiments. The run-times differ for each experiment configuration when incorporating self-supervision. We report the average epoch time for each experimental setup (1 epoch = 100 episodes) in table 6 in the appendix.

In general, among the experiments involving self-supervised learning, rotation took the most time per sample: 4 rotations of the same image are needed at every instance, which is more expensive than loading a single image. The jigsaw task took less time than rotation, and the combination of jigsaw and rotation took the most time per epoch. Since the paper reports results for the combination only in the first set of experiments (claim 1), we do the same; the computational time also restricted us from performing more experiments combining the two.

---

${}^{4}$ https://www.kaggle.com/arjun2000ashok/vggflowers/

${}^{5}$ https://www.kaggle.com/hassiahk/stanford-cars-dataset-full

${}^{6}$ https://www.kaggle.com/arjunashok33/miniimagenet

---

In total, apart from the hyperparameter sweeps, we performed 250 experiments across different experimental setups and multiple datasets, taking approximately 700 GPU hours. Together with the shorter hyperparameter sweeps, the experiments took approximately 980 hours of compute time.

### 4.6 Experimental setup and code
Following the authors, we train, evaluate and report results in the 5-way 5-shot setting. We also explored the 20-way 5-shot setting but could not continue beyond a few runs, restricted by the long training and testing times of 20-way 5-shot models. Following the paper, we use 16 query examples to evaluate the models.

After verifying the core claim of the paper (claim 1) on all 5 small datasets, we choose 2 to 3 representative datasets for the other experiments: CUB and dogs (representing natural images) and cars (representing the other group). We could not perform all experiments on all 5 datasets due to computational constraints. For domain selection, we evaluate on all 5 datasets to verify our implementation of the algorithm.

The batch size cannot be set freely in episodic few-shot learners; it is by default $n\_way \times (n\_support + n\_query)$. We use 16 query images following the paper, so our batch sizes are 105 in 5-way 5-shot experiments and 420 in 20-way 5-shot experiments. Following all previous work in few-shot learning, we sample 100 episodes (batches) per epoch and train for about 600-800 epochs. Following the paper, we use only 5 query images when training models for the experiments that use less labelled data, since the $\{20, 40, 60, 80\}\%$ splits of the datasets do not contain 16 query images in all classes.

In every iteration, an equal number of unlabelled images are sampled at random from the respective dataset(s) for self-supervised learning. Following the original paper and the baseline from previous work in few-shot learning (1), we use the following data augmentation: for label and rotation prediction, images are first resized so the shorter edge is 224 pixels while maintaining the aspect ratio, from which a central crop of 224 is taken. For jigsaw puzzles, a random crop of 255 is taken from the original image with random scaling between $[0.5, 1.0]$, then split into $3 \times 3$ regions, from each of which a random crop of size $64 \times 64$ is picked.

We implement the domain selection algorithm following the paper: for each of the small datasets, we sample negative images uniformly at random, 10 times the number of positive images. The loss for the positive class is scaled by the inverse of its frequency to account for the significantly larger number of negative examples. We then train a binary logistic regression classifier using LBFGS for 10000 iterations and use the logits to compute the ratio $p(x \in D_s)/p(x \in D_p)$. We choose $k$ as 80% of the total dataset size and sample $k$ negative images to use as unlabelled samples.

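A sketch of this selection procedure with scikit-learn; `class_weight="balanced"` approximates the inverse-frequency scaling of the positive loss, and the feature shapes and toy data are placeholders:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_unlabelled(pos_feats, pool_feats, frac=0.8, seed=42):
    """Rank pool images by p(x in D_s) (monotone in the ratio) and take the top k."""
    rng = np.random.default_rng(seed)
    neg = pool_feats[rng.choice(len(pool_feats), 10 * len(pos_feats), replace=False)]
    X = np.vstack([pos_feats, neg])
    y = np.r_[np.ones(len(pos_feats)), np.zeros(len(neg))]
    clf = LogisticRegression(solver="lbfgs", max_iter=10000, class_weight="balanced")
    clf.fit(X, y)
    score = clf.predict_proba(pool_feats)[:, 1]  # same ranking as p(D_s)/p(D_p)
    k = int(frac * len(pos_feats))
    return np.argsort(-score)[:k]

pos = np.random.randn(200, 2048).astype(np.float32)   # toy ResNet-101 features
pool = np.random.randn(5000, 2048).astype(np.float32)
chosen = select_unlabelled(pos, pool)                  # indices of 160 pool images
```
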
For evaluation at meta-test time, we use 600 randomly sampled episodes, and report the mean accuracy and 95% confidence intervals. Due to the large number of experiments and the datasets across which the claims had to be verified, we could only perform one set of experiments in all sections, with the seed set to 42.

## 5 Results

### 5.1 Results reproducing original paper

Here, we consider the same architecture the paper uses, ResNet-18, with an input image size of 224.

#### 5.1.1 Self-supervision improves few-shot learning
Here, we successfully verify claim 1 of the paper: with no additional unlabelled data, SSL improves few-shot learning when applied as an auxiliary task. We conduct experiments across all 5 small datasets as well as the large-scale miniImageNet dataset. We also reproduce results on the baseline from (1); we could not reproduce the MAML and MAML+Jigsaw results due to computational constraints. We present results in figure 1, table 1 and table 2. All results are on 5-way 5-shot classification. We find that the jigsaw task leads to the best results on 3 out of 6 datasets.

#### 5.1.2 The benefits of self-supervision increase with the difficulty of the task
We successfully verify claim 2, that the relative gains of using SSL are larger when the task is more difficult. The authors experiment with two types of harder tasks: one with low-resolution/greyscale images as input, and another with less labelled data in the base training set. We experiment with 3 selected datasets and successfully reproduce the results, reported in figures 2 and 4; the exact numbers are given in tables 12, 3 and 10 in the appendix. We find that the claims of the paper hold true, and that self-supervision yields higher gains on harder tasks.


|
| 184 |
+
|
| 185 |
+
Figure 1: Results on applying SSL tasks to Prototypical networks, across 6 datasets
|
| 186 |
+
|
| 187 |
+
<table><tr><td>Method</td><td>Accuracy</td></tr><tr><td>ProtoNet</td><td>${74.07} \pm {0.71}$</td></tr><tr><td>ProtoNet + Jigsaw</td><td>$\mathbf{{77.29} \pm {0.73}}$</td></tr><tr><td>ProtoNet + Rotation</td><td>74.93 ± 0.9</td></tr><tr><td>ProtoNet + Jigsaw + Rotation</td><td>${76.23} \pm {0.9}$</td></tr></table>

Table 1: miniImageNet Results with ResNet-18

<table><tr><td>Method</td><td>CUB</td><td>Cars</td><td>Aircrafts</td><td>Dogs</td><td>Flowers</td></tr><tr><td>Softmax</td><td>${81.92} \pm {0.54}$</td><td>${88.16} \pm {0.47}$</td><td>${89.57} \pm {0.38}$</td><td>${78.18} \pm {0.56}$</td><td>90.44 ± 0.47</td></tr><tr><td>Softmax + Jigsaw</td><td>83.96 ± 0.52</td><td>91.2 ± 0.49</td><td>${89.93} \pm {0.39}$</td><td>${78.3} \pm {0.57}$</td><td>${90.85} \pm {0.49}$</td></tr><tr><td>ProtoNet</td><td>${87.09} \pm {0.48}$</td><td>${91.0} \pm {0.41}$</td><td>$\mathbf{{91.90} \pm {0.35}}$</td><td>${83.52} \pm {0.54}$</td><td>${89.92} \pm {0.51}$</td></tr><tr><td>ProtoNet + Jigsaw</td><td>$\mathbf{{89.57} \pm {0.43}}$</td><td>${92.67} \pm {0.39}$</td><td>${91.72} \pm {0.39}$</td><td>${86.1} \pm {0.51}$</td><td>90.98 ± 0.47</td></tr><tr><td>ProtoNet + Rotation</td><td>${88.9} \pm {0.55}$</td><td>91.61 ± 0.40</td><td>${91.69} \pm {0.40}$</td><td>${83.94} \pm {0.58}$</td><td>90.12 ± 0.5</td></tr><tr><td>ProtoNet + Jigsaw + Rotation</td><td>${88.98} \pm {0.45}$</td><td>$\mathbf{{93.27} \pm {0.38}}$</td><td>${91.26} \pm {0.4}$</td><td>${85.29} \pm {0.54}$</td><td>${90.01} \pm {0.51}$</td></tr></table>

Table 2: ResNet-18's performance on the 5 small datasets.

#### 5.1.3 Unlabelled data for SSL from dissimilar domains negatively impacts the few-shot learner
Verifying claim 3 of the paper, we replace a portion of the labelled data, ranging from 20% to 80% of the data, with data from other domains. Here, we pool the data from all other datasets together and sample images at random. We present results on 3 chosen datasets, again to save computation and time. Results are given in figure 3 and table 11 (appendix). The claim that using data from dissimilar domains for self-supervision is detrimental to few-shot classification holds true.

#### 5.1.4 The proposed domain selection algorithm can alleviate this issue by learning to pick images from a large and generic pool of images
To verify claim 4, we implement the domain selection algorithm from scratch and verify it across all 5 small datasets, as in the paper, to make sure our implementation is correct. Results are presented in figure 5 and table 13 in the appendix, covering three settings: using only 20% of the labelled data for learning, selecting images from other domains at random, and using the proposed domain selection algorithm. We successfully verify that the algorithm proposed by the authors can select useful images from multiple dissimilar domains and improve performance.

### 5.2 Results beyond original paper
#### 5.2.1 Results on a different architecture - Conv4

Here, we aim to investigate whether the claims of the paper hold when a smaller architecture that takes a smaller image size (84x84) is used. In particular, we investigate claim 1 of the paper extensively. Note that the authors do not report results with this architecture. Results are given in figure 6 and tables 7 and 9 in the appendix.


|
| 210 |
+
|
| 211 |
+
Figure 2: Results of applying SSL when the amount of labelled data for supervision is lesser. The gains obtained by SSL grow with the amount of labelled data
|
| 212 |
+
|
| 213 |
+

|
| 214 |
+
|
| 215 |
+
Figure 3: Performance on tasks where a portion of the labelled data is replaced with data from other domains
|
| 216 |
+
|
| 217 |
+

|
| 218 |
+
|
| 219 |
+
Figure 4: Results of applying self-supervised learning on artificially constructed harder tasks.
|
| 220 |
+
|
| 221 |
+

|
| 222 |
+
|
| 223 |
+
Figure 5: Results of the domain selection algorithm
|
| 224 |
+
|
| 225 |
+

|
| 226 |
+
|
| 227 |
+
Figure 6: Results of using SSL with the Conv4 architecture
|
| 228 |
+
|
| 229 |
+
We find that the results do not hold true when a smaller architecture and image size are used, and that the claim depends heavily on the architecture and image size. We present results across all 5 small datasets for completeness, across both SSL tasks. To confirm our findings, we also rerun the experiments with another seed and obtain similar results (table 8 in the appendix). Apart from the reported results with the optimal $\alpha$ found by hyperparameter search, we study the effect of $\alpha$ on the results with the CUB and cars datasets in tables 3 and 4. Here we find that the value of $\alpha$ plays an important role in performance, and that high values introduce too much self-supervision when the model is small. Even across training and testing with multiple $\alpha$ values, we find that self-supervision provides only a marginal boost in 1 out of 4 cases, invalidating claim 1 of the paper that self-supervision provides a stable boost to few-shot learners.

<table><tr><td>Rotation</td><td>CUB</td><td>Cars</td></tr><tr><td>$\alpha = 0$ (no SSL)</td><td>$\mathbf{{77.72} \pm {0.71}}$</td><td>$\mathbf{{67.6} \pm {0.84}}$</td></tr><tr><td>$\alpha = {0.1}$</td><td>77.6 ± 0.73</td><td>${66.83} \pm {0.75}$</td></tr><tr><td>$\alpha = {0.3}$</td><td>77.22 ± 0.9</td><td>${65.53} \pm {0.73}$</td></tr><tr><td>$\alpha = {0.5}$</td><td>${75.04} \pm {0.81}$</td><td>60.74 ± 0.73</td></tr></table>

Table 3: Conv-4's performance on Rotation

<table><tr><td>Jigsaw</td><td>CUB</td><td>Cars</td></tr><tr><td>$\alpha = 0$ (no SSL)</td><td>$\mathbf{{77.72} \pm {0.71}}$</td><td>$\mathbf{{67.6} \pm {0.84}}$</td></tr><tr><td>$\alpha = {0.1}$</td><td>75.57 ± 0.73</td><td>${62.548} \pm {0.75}$</td></tr><tr><td>$\alpha = {0.3}$</td><td>${64.91} \pm {0.9}$</td><td>${51.83} \pm {0.73}$</td></tr><tr><td>$\alpha = {0.5}$</td><td>${75.04} \pm {0.81}$</td><td>60.74 ± 0.73</td></tr></table>

Table 4: Conv-4's performance on Jigsaw

#### 5.2.2 Results on cross-domain few-shot learning
In another effort to extend the paper's results, we test our trained models on the BSCD-FSL benchmark for cross-domain few-shot learning, introduced by (4), using their code ${}^{7}$. The benchmark requires ImageNet-trained few-shot models to be evaluated on four cross-domain datasets: CropDiseases, EuroSAT, ISIC2018 and ChestX, which cover plant disease images, satellite images, dermoscopic images of skin lesions and X-ray images, respectively. These datasets reflect real-world use cases for few-shot learning, since collecting enough examples from such domains is often difficult, expensive, or in some cases impossible. We test our miniImageNet-trained models on this benchmark to find out whether self-supervision during training provides gains over purely supervised models on cross-domain datasets. Results for the ResNet-18 models are reported in table 5; results for the Conv-4 models are deferred to table 14 in the appendix. We find that self-supervision results in learning heavily domain-specific representations, and that the results of the fully-supervised learner are much better than those with self-supervised auxiliary tasks.

<table><tr><td>Method</td><td>ChestX</td><td>Crop Disease</td><td>EuroSAT</td><td>ISIC</td></tr><tr><td>ProtoNet</td><td>$\mathbf{{24.32} \pm {0.41}}$</td><td>$\mathbf{{83.36} \pm {0.63}}$</td><td>$\mathbf{{76.09} \pm {0.74}}$</td><td>${41.60} \pm {0.58}$</td></tr><tr><td>ProtoNet + Jigsaw</td><td>${23.97} \pm {0.39}$</td><td>77.86 ± 0.69</td><td>${72.72} \pm {0.68}$</td><td>41.22 ± 0.56</td></tr><tr><td>ProtoNet + Rotation</td><td>${23.84} \pm {0.39}$</td><td>79.11 ± 0.68</td><td>${72.47} \pm {0.69}$</td><td>$\mathbf{{43.79} \pm {0.61}}$</td></tr><tr><td>ProtoNet + Jigsaw + Rotation</td><td>${23.73} \pm {0.38}$</td><td>77.39 ± 0.68</td><td>${71.91} \pm {0.7}$</td><td>40.05 ± 0.55</td></tr></table>

Table 5: CDFSL Benchmark for ResNet-18

## 6 Discussion
We find that the central claims of the authors, as given in Section 2, hold true when the same architecture is used. Considering the ResNet-18 model used in the paper with an input image size of 224, we find that self-supervision, in particular the jigsaw task, provides a boost in the case of small datasets. Experimentally, we verify claim 1 of the paper on all small datasets and miniImageNet. However, going beyond the paper's architecture, we find that the results depend heavily on the image size and architecture, and do not give the same gains with Conv-4-64, another architecture common in the few-shot learning literature, with an input image size of 84. Further ablation reveals that the jigsaw task in particular has a strong influence in this setup, and that the rotation task requires tuning the $\alpha$ parameter to even reach the accuracy of the fully-supervised model. Future work may investigate ways to boost the performance of few-shot classifiers when input sizes are small, find better-suited architectures for small inputs, and test whether self-supervision increases performance across other configurations.

Regarding claims 2 and 3, on harder tasks and scenarios with less labelled data in the base dataset, our experiments on selected datasets verify that the claims hold true with the ResNet-18 backbone. Further, we verify claim 4 of the paper by implementing the domain selection algorithm from scratch, and our experiments on all 5 datasets show that relative gains are achieved. Future work may also investigate whether the same claims hold true when different architectures are used.

Finally, we evaluate the miniImageNet-trained models in the more practical setting of cross-domain few-shot learning and find that SSL during training does not help few-shot learners generalize better across domains. Future work may investigate why applying SSL results in domain-specific features, and propose methods to apply SSL in a more domain-agnostic manner. We recommend that future work in few-shot learning train and evaluate on multiple architectures with different image sizes to verify claims more thoroughly.

### 6.1 Communication with original authors
We maintained communication with the authors throughout our implementation and training phase, spanning two months. We were able to clarify many implementation details in the original codebase, and the authors also re-ran an experiment on their side to test whether the numbers match. Further, we received a lot of help regarding the implementation of the domain selection algorithm, and could also confirm our implementation with them. We acknowledge and thank the authors for their help with the reproducibility of their paper.

---

${}^{7}$ https://github.com/IBM/cdfsl-benchmark

---

## References

[1] W.-Y. Chen, Y.-C. Liu, Z. Kira, Y.-C. F. Wang, and J.-B. Huang. A closer look at few-shot classification. In International Conference on Learning Representations, 2019.

[2] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 dataset. Technical report, California Institute of Technology, 2011.

[3] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126-1135. PMLR, 2017.

[4] Y. Guo, N. C. Codella, L. Karlinsky, J. V. Codella, J. R. Smith, K. Saenko, T. Rosing, and R. Feris. A broader study of cross-domain few-shot learning. ECCV, 2020.

[5] A. Khosla, N. Jayadevaprakash, B. Yao, and L. Fei-Fei. Novel dataset for fine-grained image categorization. In First Workshop on Fine-Grained Visual Categorization, IEEE Conference on Computer Vision and Pattern Recognition, Colorado Springs, CO, June 2011.

[6] J. Krause, M. Stark, J. Deng, and L. Fei-Fei. 3d object representations for fine-grained categorization. In 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.

[7] A. Kuznetsova, H. Rom, N. Alldrin, J. Uijlings, I. Krasin, J. Pont-Tuset, S. Kamali, S. Popov, M. Malloci, A. Kolesnikov, et al. The open images dataset v4: Unified image classification, object detection, and visual relationship detection at scale. arXiv preprint arXiv:1811.00982, 2018.

[8] Y. Liu. Tools for mini-imagenet dataset. https://github.com/yaoyao-liu/mini-imagenet-tools.

[9] S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi. Fine-grained visual classification of aircraft. Technical report, 2013.

[10] M.-E. Nilsback and A. Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics Image Processing, pages 722-729, 2008.

[11] M. Ren. few-shot-ssl-public. https://github.com/renmengye/few-shot-ssl-public.

[12] J. Snell, K. Swersky, and R. Zemel. Prototypical networks for few-shot learning. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.

[13] J.-C. Su, S. Maji, and B. Hariharan. When does self-supervision improve few-shot learning? In European Conference on Computer Vision, pages 645-666. Springer, 2020.

[14] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, and T. M. Hospedales. Learning to compare: Relation network for few-shot learning. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1199-1208, 2018.

[15] G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie. The inaturalist species classification and detection dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 8769-8778, 2018.

[16] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra. Matching networks for one shot learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS'16, page 3637-3645, Red Hook, NY, USA, 2016. Curran Associates Inc.

## 7 Appendix

### 7.1 Seconds per epoch

Continuing Section 4.5, we report the exact per-epoch times across experiment configurations. We do so since different architectures and datasets may require training for different numbers of epochs; the epoch time, however, remains the same across experiments.

<table><tr><td>Experiment</td><td>Setup (way, shot)</td><td>Seconds per epoch (Conv-4 / ResNet-18)</td></tr><tr><td>ProtoNet</td><td>(5,5)</td><td>20/25</td></tr><tr><td>ProtoNet</td><td>(20,5)</td><td>45/50</td></tr><tr><td>ProtoNet+Jigsaw</td><td>(5,5)</td><td>25/35</td></tr><tr><td>ProtoNet+Jigsaw</td><td>(20,5)</td><td>60/66</td></tr><tr><td>ProtoNet+Rotation</td><td>(5,5)</td><td>18/60</td></tr><tr><td>ProtoNet+Rotation</td><td>(20,5)</td><td>65/81</td></tr><tr><td>ProtoNet+Jigsaw+Rotation</td><td>(5,5)</td><td>42/70</td></tr><tr><td>ProtoNet+Jigsaw+Rotation</td><td>(20,5)</td><td>83/95</td></tr></table>

Table 6: Average seconds per epoch across experimental setups and ways

### 7.2 Hyperparameter sweeps

The selected experiment configurations are as follows:

Each of the experimental configurations below is run for ProtoNet, ProtoNet+Jigsaw, ProtoNet+Rotation, and ProtoNet+Jigsaw+Rotation (4 configurations) in the 5-way 5-shot setup. The sweeps optimize the learning rate, the mode of batch normalization, and $\alpha$.
The last two parameters are optimized only when self-supervision is applied. This is because $\alpha = 0$ for fully supervised learners, and we find that using batch norm modes 2 and 3 is highly detrimental to fully supervised learners.

- miniImageNet Conv4: 4 sweeps

- miniImageNet ResNet-18: 4 sweeps

- CUB Conv4: 4 sweeps, reused for the Flowers and Dogs datasets

- Cars Conv4: 4 sweeps, reused for the Aircrafts dataset

- CUB ResNet-18: 4 sweeps, reused for the Flowers and Dogs datasets

- Cars ResNet-18: 4 sweeps, reused for the Aircrafts dataset

Hence we do a total of 24 sweeps.
The sweeps and the exact hyperparameters obtained can be visualized at https://wandb.ai/meta-learners/FSL-SSL/sweeps. All the runs in the paper can be seen at https://wandb.ai/meta-learners.
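
As a minimal sketch, one such sweep could be expressed with wandb's sweep API roughly as follows; the search method, metric name, and candidate values here are illustrative assumptions, not the exact grids we used (those are visible on the linked project).

```python
# Illustrative wandb sweep over learning rate, batch norm mode, and alpha.
# All parameter grids below are placeholders; see the wandb project for the
# actual values.
import wandb

sweep_config = {
    "method": "random",  # random search over the grid
    "metric": {"name": "val_acc", "goal": "maximize"},
    "parameters": {
        "lr": {"values": [1e-2, 1e-3, 1e-4]},       # learning rate
        "bn_mode": {"values": [1, 2, 3]},           # batch normalization mode
        "alpha": {"values": [0.3, 0.5, 0.7, 0.9]},  # SSL loss weight
    },
}

sweep_id = wandb.sweep(sweep_config, project="FSL-SSL")
# wandb.agent(sweep_id, function=train, count=20)  # train() runs one config
```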
### 7.3 Tables
#### 7.3.1 Results on applying self-supervision to few-shot learners

<table><tr><td>Method</td><td>CUB</td><td>Cars</td><td>Aircrafts</td><td>Dogs</td><td>Flowers</td></tr><tr><td>ProtoNet</td><td>$\mathbf{{77.72} \pm {0.48}}$</td><td>$\mathbf{{67.99} \pm {0.41}}$</td><td>$\mathbf{{76.16} \pm {0.69}}$</td><td>$\mathbf{{63.88} \pm {0.54}}$</td><td>$\mathbf{{85.29} \pm {0.51}}$</td></tr><tr><td>ProtoNet + Jigsaw</td><td>75.57 ± 0.7</td><td>${62.54} \pm {0.39}$</td><td>74.53 ± 0.68</td><td>${54.27} \pm {0.51}$</td><td>${84.4} \pm {0.47}$</td></tr><tr><td>ProtoNet + Rotation</td><td>77.5 ± 0.55</td><td>66.8 ± 0.40</td><td>74.16 ± 0.40</td><td>${60.74} \pm {0.58}$</td><td>${84.55} \pm {0.5}$</td></tr><tr><td>ProtoNet + Jigsaw + Rotation</td><td>${69.66} \pm {0.45}$</td><td>59.76 ± 0.77</td><td>${74.79} \pm {0.4}$</td><td>49.48 ± 0.54</td><td>${81.43} \pm {0.51}$</td></tr></table>

Table 7: Conv-4’s performance on few-shot learning tasks $(\alpha = 0.5)$

<table><tr><td>Method</td><td>CUB</td><td>Cars</td></tr><tr><td>ProtoNet</td><td>${76.43} \pm {0.3}$</td><td>67.45 ± 0.85</td></tr><tr><td>ProtoNet + Jigsaw</td><td>${65.09} \pm {0.42}$</td><td>${60.39} \pm {0.76}$</td></tr><tr><td>ProtoNet + Rotation</td><td>${75.05} \pm {0.35}$</td><td>${66.61} \pm {0.6}$</td></tr></table>

Table 8: Conv-4 results on CUB and Cars with a different seed

<table><tr><td>Method</td><td>Conv-4</td></tr><tr><td>ProtoNet</td><td>$\mathbf{{66.78} \pm {0.84}}$</td></tr><tr><td>ProtoNet + Jigsaw</td><td>${64.94} \pm {0.75}$</td></tr><tr><td>ProtoNet + Rotation</td><td>${66.41} \pm {0.73}$</td></tr><tr><td>ProtoNet + Jigsaw + Rotation</td><td>${65.21} \pm {0.73}$</td></tr></table>

Table 9: miniImageNet results on Conv-4
#### 7.3.2 Results on harder tasks

<table><tr><td>Method</td><td>20% CUB</td><td>20% Cars</td><td>20% Dogs</td></tr><tr><td>No SSL</td><td>$\mathbf{{73.61} \pm {0.71}}$</td><td>75.16 ± 0.84</td><td>${68.4} \pm {0.64}$</td></tr><tr><td>20% SSL</td><td>${70.84} \pm {0.73}$</td><td>${83.72} \pm {0.75}$</td><td>${68.13} \pm {0.9}$</td></tr><tr><td>40% SSL</td><td>${71.48} \pm {0.9}$</td><td>${83.87} \pm {0.73}$</td><td>${68.26} \pm {0.87}$</td></tr><tr><td>60% SSL</td><td>${70.71} \pm {0.81}$</td><td>$\mathbf{{84.12} \pm {0.73}}$</td><td>$\mathbf{{74.21} \pm {0.89}}$</td></tr><tr><td>80% SSL</td><td>${71.99} \pm {0.65}$</td><td>${84.04} \pm {0.78}$</td><td>${71.86} \pm {0.81}$</td></tr></table>

Table 10: Performance on tasks with less labelled data

<table><tr><td>Method</td><td>20% CUB</td><td>20% Cars</td><td>20% Dogs</td></tr><tr><td>No SSL</td><td>$\mathbf{{73.61} \pm {0.82}}$</td><td>75.16 ± 0.84</td><td>${68.4} \pm {0.64}$</td></tr><tr><td>20% SSL</td><td>${69.83} \pm {0.79}$</td><td>$\mathbf{{81.53} \pm {0.79}}$</td><td>${71.99} \pm {0.88}$</td></tr><tr><td>40% SSL</td><td>${71.08} \pm {0.83}$</td><td>${75.27} \pm {0.89}$</td><td>${72.24} \pm {0.85}$</td></tr><tr><td>60% SSL</td><td>${71.12} \pm {0.91}$</td><td>${76.39} \pm {0.89}$</td><td>$\mathbf{{73.11} \pm {0.83}}$</td></tr><tr><td>80% SSL</td><td>${68.48} \pm {0.87}$</td><td>${73.85} \pm {0.89}$</td><td>${72.03} \pm {0.91}$</td></tr></table>

Table 11: Performance when a portion of the data is replaced with data from other domains

<table><tr><td>Method</td><td>CUB Greyscale</td><td>Cars Low-resolution</td><td>Dogs Greyscale</td></tr><tr><td>ProtoNet</td><td>${82.88} \pm {0.56}$</td><td>${86.00} \pm {0.51}$</td><td>79.97 ± 0.54</td></tr><tr><td>ProtoNet + Jigsaw</td><td>$\mathbf{{85.44} \pm {0.52}}$</td><td>$\mathbf{{86.34} \pm {0.56}}$</td><td>$\mathbf{{82.82} \pm {0.50}}$</td></tr><tr><td>ProtoNet + Rotation</td><td>${83.51} \pm {0.55}$</td><td>${85.53} \pm {0.53}$</td><td>${81.74} \pm {0.59}$</td></tr></table>

Table 12: Performance on artificially constructed harder tasks
#### 7.3.3 Results on domain selection

<table><tr><td>Method</td><td>CUB</td><td>Cars</td><td>Aircrafts</td><td>Dogs</td><td>Flowers</td></tr><tr><td>No SSL</td><td>${69.05} \pm {0.48}$</td><td>75.15 ± 0.41</td><td>${74.8} \pm {0.35}$</td><td>${68.4} \pm {0.54}$</td><td>76.34 ± 0.51</td></tr><tr><td>SSL Pool (Random)</td><td>71.11 ± 0.43</td><td>${75.27} \pm {0.39}$</td><td>${75.81} \pm {0.39}$</td><td>${68.38} \pm {0.51}$</td><td>79.71 ± 0.47</td></tr><tr><td>SSL Pool (Weight)</td><td>71.25 ± 0.55</td><td>$\mathbf{{75.65} \pm {0.40}}$</td><td>$\mathbf{{80.13} \pm {0.40}}$</td><td>$\mathbf{{70.66} \pm {0.58}}$</td><td>$\mathbf{{82.16} \pm {0.5}}$</td></tr></table>

Table 13: Domain selection results
#### 7.3.4 Results on cross-domain few-shot learning

<table><tr><td>Method</td><td>ChestX</td><td>Crop Disease</td><td>EuroSAT</td><td>ISIC</td></tr><tr><td>ProtoNet</td><td>$\mathbf{{24.46} \pm {0.39}}$</td><td>$\mathbf{{80.45} \pm {0.66}}$</td><td>67.03 ± 0.7</td><td>${41.0} \pm {0.6}$</td></tr><tr><td>ProtoNet + Jigsaw</td><td>${24.07} \pm {0.4}$</td><td>${78.51} \pm {0.66}$</td><td>${64.69} \pm {0.7}$</td><td>39.81 ± 0.54</td></tr><tr><td>ProtoNet + Rotation</td><td>$\mathbf{{24.46} \pm {0.39}}$</td><td>79.30 ± 0.7</td><td>${66.50} \pm {0.71}$</td><td>39.54 ± 0.54</td></tr><tr><td>ProtoNet + Jigsaw + Rotation</td><td>${24.16} \pm {0.37}$</td><td>${78.67} \pm {0.66}$</td><td>$\mathbf{{67.60} \pm {0.66}}$</td><td>40.22 ± 0.54</td></tr></table>

Table 14: CDFSL Benchmark for Conv-4.
### 7.4 Architectures

<table><tr><td>Layer Name</td><td>Output Size</td><td>Conv-4-64</td></tr><tr><td>conv1</td><td>${82} \times {82} \times {64}$</td><td>3 x 3, 64</td></tr><tr><td rowspan="2">conv2</td><td rowspan="2">${41} \times {41} \times {64}$</td><td>$2 \times 2$ max pool, stride 2</td></tr><tr><td>3 x 3, 64</td></tr><tr><td rowspan="2">conv3</td><td rowspan="2">${18} \times {18} \times {64}$</td><td>$2 \times 2$ max pool, stride 2</td></tr><tr><td>3 x 3, 64</td></tr><tr><td rowspan="2">conv4</td><td rowspan="2">$7 \times 7 \times {64}$</td><td>$2 \times 2$ max pool, stride 2</td></tr><tr><td>3 x 3, 64</td></tr><tr><td>average pool</td><td>$1 \times 1 \times {64}$</td><td>$7 \times 7$ average pool</td></tr><tr><td>fully connected</td><td>1024</td><td>64 x 1024 linear</td></tr><tr><td>fully connected</td><td>X</td><td>${1024} \times X$ linear</td></tr><tr><td>softmax</td><td>X</td><td/></tr></table>

Table 15: Conv-4 Architecture (X denotes the way)
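
As a reference for the layer sizes above, the following is a minimal PyTorch sketch of the Conv-4-64 backbone; the padding choices are inferred from the output sizes in Table 15, and the batch norm and ReLU placements are assumptions not listed in the table.

```python
# Conv-4-64 sketch following Table 15 (input 3 x 84 x 84, X = way).
import torch
import torch.nn as nn

class Conv4(nn.Module):
    def __init__(self, way: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3), nn.BatchNorm2d(64), nn.ReLU(),             # conv1: 84 -> 82
            nn.MaxPool2d(2),                                                # 82 -> 41
            nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), # conv2: 41 -> 41
            nn.MaxPool2d(2),                                                # 41 -> 20
            nn.Conv2d(64, 64, 3), nn.BatchNorm2d(64), nn.ReLU(),            # conv3: 20 -> 18
            nn.MaxPool2d(2),                                                # 18 -> 9
            nn.Conv2d(64, 64, 3), nn.BatchNorm2d(64), nn.ReLU(),            # conv4: 9 -> 7
            nn.AvgPool2d(7),                                                # 7x7 average pool -> 1x1
            nn.Flatten(),                                                   # -> 64 features
        )
        self.classifier = nn.Sequential(
            nn.Linear(64, 1024),
            nn.Linear(1024, way),  # the softmax is folded into the loss
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

logits = Conv4(way=5)(torch.randn(2, 3, 84, 84))  # shape (2, 5)
```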
<table><tr><td>Layer Name</td><td>Output Size</td><td>ResNet-18</td></tr><tr><td>conv1</td><td>112 x 112 x 64</td><td>$7 \times 7$, 64, stride 2</td></tr><tr><td rowspan="2">conv2_x</td><td rowspan="2">56 x 56 x 64</td><td>$3 \times 3$ max pool, stride 2</td></tr><tr><td>[3 x 3, 64; 3 x 3, 64] x 2</td></tr><tr><td>conv3_x</td><td>28 x 28 x 128</td><td>[3 x 3, 128; 3 x 3, 128] x 2</td></tr><tr><td>conv4_x</td><td>14 x 14 x 256</td><td>[3 x 3, 256; 3 x 3, 256] x 2</td></tr><tr><td>conv5_x</td><td>7 x 7 x 512</td><td>[3 x 3, 512; 3 x 3, 512] x 2</td></tr><tr><td>average pool</td><td>$1 \times 1 \times {512}$</td><td>$7 \times 7$ average pool</td></tr><tr><td>fully connected</td><td>X</td><td>${512} \times X$ fully connected</td></tr><tr><td>softmax</td><td>X</td><td/></tr></table>

Table 16: ResNet-18 Architecture
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/Sczshz7h0K/Initial_manuscript_md/Initial_manuscript.md

ADDED
# [RE] An Implementation of Fair Robust Learning

Anonymous Author(s)

Affiliation

Address

email
## Reproducibility Summary

## Scope of Reproducibility

This work attempts to reproduce the results of the 2021 ICML paper To be Robust or to be Fair: Towards Fairness in Adversarial Training. I first reproduce classwise accuracy and robustness discrepancies resulting from adversarial training, and then implement the authors' proposed Fair Robust Learning (FRL) algorithms for correcting this bias.

## Methodology

In the spirit of education and public accessibility, this work attempts to replicate the results of the paper from first principles using Google Colab resources. To account for the limitations imposed by Colab, a much smaller model and dataset are used. All results can be replicated in approximately 10 GPU hours, within the usual timeout window of an active Colab session. Serialization is also built into the example notebooks so that crashes do not lose too much progress, and serialized models are included in the repository to allow others to explore the results without having to run hours of code.

## Results

This work finds that (1) adversarial training does in fact lead to classwise performance discrepancies not only in standard error (accuracy) but also in attack robustness, (2) these discrepancies exacerbate existing biases in the model, (3) upweighting the standard and robust errors of poorly performing classes during training decreased this discrepancy for both the standard error and robustness, and (4) increasing the attack margin for poorly performing classes during training also decreased these discrepancies, at the cost of some performance. (1), (2), and (3) match the conclusions of the original paper, while (4) deviated in that it was unsuccessful in increasing the robustness of the most poorly performing classes. Because the model and datasets used were totally different from the original paper's, it is hard to quantify the exact similarity of our results. Conceptually, however, I find very similar conclusions.
## What was easy

It was easy to identify the unfairness resulting from existing adversarial training methods and implement the authors' FRL (reweight) and FRL (remargin) approaches for combating this bias. The algorithm and training approaches are well outlined in the original paper, and are relatively accessible even for those with little experience in adversarial training.

## What was difficult

Because of the resource limitations imposed, I was unable to successfully implement the suggested training process using the authors' specific model and dataset. Also, even with a smaller model and dataset it was difficult to thoroughly tune the hyperparameters of the model and algorithm.

## Communication with original authors

I did not have contact with the authors during the process of this reproduction. I reached out for feedback once I had a draft of the report, but did not hear back.
## 1 Introduction

The advent of adversarial examples (1) (2) has motivated the need for procedures which decrease the sensitivity to noise of learned models (which I will call adversarial robustness or simply robustness). One such method is adversarial training (3) (4), in which adversarial examples are generated during the training process and are mixed in with "clean" examples to create mixed training batches of both manipulated and unmanipulated images. Learning on these batches has been shown to improve the robustness of models to adversarial attacks, often at a slight cost to standard performance (accuracy).

To be Robust or to be Fair: Towards Fairness in Adversarial Training identifies that adversarial training creates unfairness in the resulting robust model. While the overall robustness of the model improves, some classes in the resulting model are more robust to adversarial attacks than others. Not only are the robustness benefits unfairly distributed, so too are the standard performance losses; the classes which are less robust at the end of the procedure tend to be the ones which suffer more in terms of standard performance. Moreover, these classes tend to be the ones which were harder to learn before adversarial training. As Xu et al. describe it: "adversarial training tends to make the hard classes even harder to be classified or robustly classified."
Motivated by this unfairness, Xu et al. conduct a theoretical analysis of the problem to explain this empirically observed phenomenon. They then draw on (5) to describe robust error as the sum of standard error (i.e. the probability that a class will be incorrectly classified without manipulation) and boundary error (i.e. the probability that there exists some $\epsilon$-ball attack which can change a classifier's decision on a given class). Using this description, they reformulate the learning problem into a series of cost-sensitive classification problems that can be penalized for violating fairness constraints. With this reformulation, they present two FRL algorithms for making adversarial training more fair: one which upweights the error of classes which violate the fairness constraints during training, and one which increases the attack radius for classes which violate fairness constraints during training.
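
In symbols, and with notation lightly adapted from (5), the decomposition they use is:

$$\mathcal{R}_{\text{rob}}(f) \;=\; \underbrace{\Pr\big[f(x) \neq y\big]}_{\text{standard error}} \;+\; \underbrace{\Pr\big[f(x) = y,\ \exists\, \delta,\ \|\delta\| \le \epsilon :\ f(x+\delta) \neq y\big]}_{\text{boundary error}}$$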
## 2 Scope of reproducibility

The focus of this reproduction will be attempting to demonstrate the following:

- Claim 1, which is supported by Experiment 1 in Figure 1, is that adversarial training creates unfair outcomes in terms of both robustness and standard error.

- Claim 2, which is also supported by Experiment 1 in Figure 1, is that this unfairness exacerbates existing biases in model performance.

- Claim 3, which is supported by Experiment 2 in Figure 2, is that upweighting the error of classes which violate fairness constraints (using the authors' FRL: reweight algorithm) can improve the standard errors for the most poorly performing classes, and to a lesser degree their robustness.

- Claim 4, which is explored by Experiment 3 in Figure 3, is that increasing the margin of attack for classes which violate fairness constraints (using the authors' FRL: remargin algorithm) can also improve the fairness of the model, perhaps more effectively than reweighting.
## 3 Methodology

As an educational exercise, I aimed to re-implement the authors' training approaches from their descriptions in the paper. Because of the resource limitations, however, I opted to use a simpler model and dataset in my experiments.

### 3.1 Model descriptions

The paper used the PreAct-ResNet18 and WRN28 architectures for their experimentation; I opted for the LeNet-5 architecture in the interest of efficiency. Though it is a much simpler model than the paper's originals, it provided enough complexity to conduct my experiments.

### 3.2 Datasets
The paper used the CIFAR10 and SVHN datasets for their experimentation; I used the Fashion-MNIST dataset. The train set comprises 60,000 examples and the test set 10,000. Both have a uniform label distribution across all 10 classes. The original train and test sets are used in Experiment 1, while Experiments 2 and 3 split the train set into an 80/20 train/validation split for the FRL process. The only preprocessing done was to resize the images from $28 \times 28$ to $32 \times 32$. The data is freely available here.
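
A minimal sketch of this preprocessing, under the assumption that torchvision is used to load Fashion-MNIST:

```python
# Resize Fashion-MNIST from 28x28 to 32x32 (torchvision usage is illustrative).
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((32, 32)),  # 28x28 -> 32x32
    transforms.ToTensor(),
])
train_set = datasets.FashionMNIST("data", train=True, download=True, transform=transform)
test_set = datasets.FashionMNIST("data", train=False, download=True, transform=transform)
```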
### 3.3 Hyperparameters
The fairness tolerance hyperparameter was selected based on the recommendations in the paper (5%), as was the baseline $\epsilon$ (8/255 for the PGD attack). For Experiment 1 I used a learning rate of 1e-3 for regular training and adversarial training, as the paper recommended. Due to resource constraints I had to limit the number of epochs I trained for to 15, and based on convergence behavior I decayed the learning rate more often than the original paper did (every 4 rounds by a factor of 3, as opposed to every 40 rounds by a factor of 10). For the simpler model and dataset, this worked well.
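
That decay schedule maps onto a standard step scheduler; a sketch, assuming "rounds" refers to epochs:

```python
# Decay the learning rate by a factor of 3 every 4 epochs (assumption:
# "rounds" means epochs; the model below is a stand-in).
import torch

model = torch.nn.Linear(10, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=4, gamma=1 / 3)

for epoch in range(15):
    # ... one epoch of (adversarial) training ...
    scheduler.step()
```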
For Experiments 2 and 3 I used a baseline learning rate of 1e-4, which I selected based on unstable behavior at a rate of 1e-3. I suspect this is due to differences in the model and dataset used, as well as the way I implemented the reweighting and remargining systems.
I utilized the results of the fairness evaluation ($\phi$ values) in the training process by applying a Softmax function to create cross-entropy loss weightings, and as such the $\alpha$ values were different from the original paper's. I tried a variety of $\alpha$ values in the space of (1, 2, 5, 10) and a variety of ratios of natural-$\alpha$s to boundary-$\alpha$s. The best results came from a 5:1 natural:boundary error weighting, which decreased the worst-case standard error by 25% and the worst-case robust error by 11%.
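
A minimal sketch of one plausible reading of this weighting scheme; the $\phi$ values below are illustrative, not measurements from the experiments:

```python
# Turn per-class fairness violations (phi) into cross-entropy class weights
# via softmax. All numbers here are placeholders.
import torch
import torch.nn.functional as F

phi_nat = torch.tensor([0.02, -0.01, 0.10, -0.03])  # standard-error violations
phi_bdy = torch.tensor([0.05, 0.00, 0.20, -0.02])   # boundary-error violations

alpha_nat, alpha_bdy = 5.0, 1.0                     # the 5:1 ratio reported above
phi = alpha_nat * phi_nat + alpha_bdy * phi_bdy

weights = F.softmax(phi, dim=0) * phi.numel()       # rescale so the mean weight is 1
criterion = torch.nn.CrossEntropyLoss(weight=weights)
```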
### 3.4 Experimental setup and code
For Experiment 1, I defined the LeNet-5 architecture and trained a classifier on the Fashion-MNIST dataset for 15 epochs at a learning rate of 1e-3. I then adversarially trained a new LeNet-5 model using a PGD attack for the same number of epochs at the same learning rate, with a 50/50 mixture of clean and manipulated images. I then compared the classwise standard accuracy (i.e. ability to predict a "clean" image correctly) and robust accuracy (i.e. ability to predict an image correctly despite manipulation) of the natural model and adversarially trained model. The results are recorded in Figure 1.
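
For reference, a textbook $L_\infty$ PGD attack of the kind used to build the manipulated half of each batch might look as follows (a generic sketch, not the exact code used):

```python
# Generic L-infinity PGD attack with random start and projection to the
# eps-ball; hyperparameter defaults mirror the 8/255 baseline above.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, step=2 / 255, iters=10):
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start in the ball
    for _ in range(iters):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + step * grad.sign()               # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)         # project back into the ball
        x_adv = x_adv.clamp(0, 1)                        # keep pixels in valid range
    return x_adv.detach()
```

Each training batch is then the 50/50 concatenation of the clean inputs and their `pgd_attack` counterparts.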
For Experiment 2, I retrained the unfair adversarially-trained model under the FRL (reweight) paradigm. During this procedure, I recorded the overall and classwise standard and boundary errors of the model during each batch, and based on these errors I re-calculated loss weights for each class. The loss function used was the sum of the standard loss and the loss for adversarially manipulated images with respect to the predictions on their unmanipulated counterparts (corresponding to standard error and boundary error, respectively). Classes were penalized based on violations of fairness constraints, i.e. how greatly they differed from the average standard and boundary errors for all classes. I ran 10 rounds of retraining, and then compared the original unfair adversarially trained model with its retrained counterpart, comparing classwise standard and robust accuracy. These results can be found in Figure 2.
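
Under my reading, the per-batch loss can be sketched like this (function and variable names are illustrative, not the authors' formulation):

```python
# Combined loss: weighted standard cross-entropy plus a boundary term that
# compares adversarial predictions to the clean predictions (a minimal
# reading of the procedure above).
import torch
import torch.nn.functional as F

def frl_loss(model, x_clean, x_adv, y, w_nat, w_bdy):
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)
    nat = F.cross_entropy(logits_clean, y, weight=w_nat)         # standard-error term
    clean_pred = logits_clean.argmax(dim=1).detach()             # predictions on clean inputs
    bdy = F.cross_entropy(logits_adv, clean_pred, weight=w_bdy)  # boundary-error term
    return nat + bdy
```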
Experiment 3 was much the same as Experiment 2, the only difference being that instead of simply upweighting the loss of classes which violated fairness constraints, the radius of a class's attack during training was increased or decreased based on the size of its violation. Again, I ran 10 rounds of retraining, and then compared the original unfair adversarially trained model with its retrained counterpart, comparing classwise standard and robust accuracy. These results can be found in Figure 3.
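
The corresponding remargin update can be sketched as a per-class radius that grows or shrinks with the violation; the update rate and clipping bounds here are assumptions:

```python
# Adjust each class's PGD radius in proportion to its fairness violation
# (rate and bounds are illustrative assumptions, not tuned values).
import torch

def update_margins(eps_per_class, phi, rate=0.05):
    # phi[c] > 0 means class c violates the fairness constraint
    return (eps_per_class + rate * phi).clamp(min=0.0, max=16 / 255)

eps_per_class = torch.full((10,), 8 / 255)  # start every class at the baseline radius
phi = torch.tensor([0.05, -0.02, 0.12, 0.00, -0.01, 0.03, 0.20, -0.04, 0.01, 0.00])
eps_per_class = update_margins(eps_per_class, phi)  # radii for the next retraining round
```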
All the code for these experiments, as well as example notebooks that walk through the procedure, can be found here.

### 3.5 Computational requirements

As mentioned, I used Google Colab for all of the experimentation. As such, it is difficult to describe the exact hardware that was used, or to even be confident of the consistency of the hardware throughout this process. I did use GPU resources, though I cannot speak to any specific type.

(Figure omitted: per-class accuracy bar charts with STD Err and PGD Err series; panel (a) naturally trained, panel (b) adversarially trained.)

Figure 1: Adversarial training produces unfair outcomes across classes, and worsens existing performance discrepancies

Experiment 1 can be run in approximately 15 minutes of GPU time. Experiment 2 can be run in approximately 5 hours of GPU time (for all alpha-combinations) and Experiment 3 can be run in approximately 3 hours.
All three notebooks can sometimes be run in parallel, but not always. Colab can be a bit unpredictable.

## 4 Results
In my experiments, I found that:

- Adversarial training does in fact lead to classwise discrepancies in standard error and adversarial robustness; the least robust classes in the resulting model are the ones the model originally had a hard time learning; and the penalties to standard performance brought on by adversarial training exacerbate existing biases in model performance.

- Reweighting the natural and boundary errors to penalize classes violating fairness constraints during adversarial retraining can improve the fairness of the model with respect to standard error, and to a lesser degree robust error.

- Remargining the attack radius for classes violating fairness constraints during adversarial retraining can also improve the fairness (i.e. lower the variance across classes) of the model's robustness (at a cost to robust performance) as well as improve the standard error.

Most of these results agree with the paper's conclusions, although the results in Experiment 3 differ in that I was not able to improve the robustness of the model with remargining as well as I could with reweighting. The original paper showed the opposite: that reweighting was unable to improve robustness for the most poorly performing classes. One experiment I did not conduct was to try both reweighting and remargining together, which the authors suggest might be fruitful. I leave that as a further exercise.
### 4.1 Results reproducing original paper

#### 4.1.1 Result 1

The result of Experiment 1 (shown in Figure 1) relates to claims 1 and 2 in Section 2. I found that in the naturally trained model, the standard error is quite low and the adversarial error (PGD error) is quite high. The adversarial error is not quite as uniform as in the original paper, I suspect because of the simplicity of the dataset and model I used.

Still, it is observable that after adversarial training, the model's adversarial error is much lower across the board, but not in a fair way. Certain classes are much more robust to attack than others, and in particular the classes which had poorer initial standard performance are the ones with worse adversarial robustness. Moreover, we can see that there are penalties to standard performance incurred as a result of adversarial training, and the classes which suffer the most are the ones the natural model already had a hard time learning.
(Figure omitted: per-class accuracy bar charts with STD Err series; panel (a) "vanilla" adversarially-trained model, panel (b) after the FRL (Reweight) procedure.)

Figure 2: FRL Reweight is able to mitigate standard performance losses, while also increasing the robustness of the most difficult class
Indeed, as Xu et al. put it, "adversarial training tends to make the hard classes even harder to be classified or robustly classified." This is exactly what I found, even with a totally different model and dataset.
#### 4.1.2 Result 2

The result of Experiment 2 (shown in Figure 2) relates to claim 3 in Section 2. Here we can see the result of my best attempt at reweighting the loss of classes during adversarial retraining based on their violation of fairness constraints. As per the paper's FRL retraining algorithm, I began with an adversarially trained model and iteratively tried to retrain it, adjusting the loss of each class as I went depending on whether it violated fairness, and to what degree. As such, I compared the "vanilla" adversarially trained model with the resulting model after retraining.

I observed that for the hardest class to classify, there is a 25% reduction in standard error (bringing it nearly in line with the naturally trained model) and an 11% reduction in robust error. This is not totally free; we can observe, for example, that the standard and robust error for some of the easier classes suffers as a result. Still, the resulting model is fairer than it originally was.

These results seem relatively in-line with the original paper's, though again because of the different model and dataset selected it is hard to quantify the exact similarity. The overall conclusion is much the same though: reweighting is hugely successful in decreasing the classwise standard error discrepancies brought on by adversarial training, and to a lesser degree in decreasing classwise robustness discrepancies.
#### 4.1.3 Result 3

The result of Experiment 3 (shown in Figure 3) relates to claim 4 in Section 2. This is the result of my best attempt at remargining during the retraining procedure. I observed a slight improvement in the worst-case standard error, but little to no improvement in the worst-case robustness, and indeed a general degradation in robustness across most classes.

These results were not in line with the paper's, which found FRL (Remargin) to be more effective than FRL (Reweight). This may be due to differences in our datasets, or artifacts of my implementation. It should be noted that because of the greater expense of this procedure, it was harder to thoroughly explore its hyperparameters, and this is still an interesting area of exploration for me.
## 5 Discussion

I believe that overall my results are quite in line with the original paper's. I found that adversarial training does produce unfair results, both in the improvements to robustness the model receives as well as the degradation of standard error it experiences. I also found that these unequal costs penalize classes that are harder for the model to learn, making it worse at classifying what it already had trouble with. Finally, I found that the FRL (reweight) approach was able to mitigate most of the degradation in standard performance for the hardest to learn classes, and to a lesser degree improve the robustness for that class as well as the overall robustness.

(Figure omitted: per-class accuracy bar charts with STD Err series; panel (a) "vanilla" adversarially-trained model, panel (b) after the FRL (Remargin) procedure.)

Figure 3: FRL Remargin was able to slightly improve the standard performance of the hardest class, but decreased the overall robustness of the model
One weak point of my implementation was in the FRL (Remargin) procedure. I was unable to successfully improve the model's robustness via remargining, though I am not confident that I thoroughly explored the space. It was the most costly procedure I ran, and it ran into its fair share of Colab timeouts, making hyperparameter tuning tricky.

One last experiment I did not have time for was a combination of reweighting and remargining, which Xu et al. suggest is the most effective means of increasing adversarial fairness. This is because I wanted positive results in remargining before attempting to combine the two approaches, which I was unfortunately unable to achieve. This is still an open question to pursue.
### 5.1 What was easy

One of the paper's easiest claims to verify was that adversarial training creates the unfair outcomes described above. Even with little experience in adversarial training, I found that with only a bit of effort I could observe this phenomenon myself.

It was also fairly easy to implement Xu et al.'s FRL algorithms; the remargining and reweighting procedures are very clearly explained in the paper and were straightforward to put into code. One aspect of the paper not discussed in this report is their theoretical analysis, which was also very clear and helped motivate and explain the FRL problem formulation.
### 5.2 What was difficult

As mentioned above, the part I had the most difficulty with was the remargining procedure. It took much longer than anticipated, and its expense made automated hyperparameter searches difficult. Because I was unsuccessful in improving the model's robustness with remargining, I was also hesitant to implement a combined FRL (Reweight) and FRL (Remargin) approach, which the authors suggest might be the most effective. As mentioned, this is an area I am still actively exploring. Hopefully in the future I can replicate their success there too.

### 5.3 Communication with original authors

As mentioned in my summary, I did not have contact with the authors throughout this process. It was only upon drafting my report that I learned that contacting the original authors was encouraged; in the future, I think it would be a great idea to communicate with them sooner. I reached out with a preprint of the report for any feedback or suggestions, but did not hear back.
## References

[1] Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

[2] Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.

[3] Kurakin, A., Goodfellow, I., and Bengio, S. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.

[4] Madry, A., Makelov, A., Schmidt, L., Tsipras, D., and Vladu, A. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083, 2017.

[5] Zhang, H., Yu, Y., Jiao, J., Xing, E. P., Ghaoui, L. E., and Jordan, M. I. Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573, 2019.

papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/Sczshz7h0K/Initial_manuscript_tex/Initial_manuscript.tex

ADDED
§ [RE] AN IMPLEMENTATION OF FAIR ROBUST LEARNING

Anonymous Author(s)

Affiliation

Address

email
§ REPRODUCIBILITY SUMMARY

§ SCOPE OF REPRODUCIBILITY

This work attempts to reproduce the results of the 2021 ICML paper To be Robust or to be Fair: Towards Fairness in Adversarial Training. I first reproduce classwise accuracy and robustness discrepancies resulting from adversarial training, and then implement the authors' proposed Fair Robust Learning (FRL) algorithms for correcting this bias.

§ METHODOLOGY

In the spirit of education and public accessibility, this work attempts to replicate the results of the paper from first principles using Google Colab resources. To account for the limitations imposed by Colab, a much smaller model and dataset are used. All results can be replicated in approximately 10 GPU hours, within the usual timeout window of an active Colab session. Serialization is also built into the example notebooks so that crashes do not lose too much progress, and serialized models are included in the repository to allow others to explore the results without having to run hours of code.

§ RESULTS

This work finds that (1) adversarial training does in fact lead to classwise performance discrepancies not only in standard error (accuracy) but also in attack robustness, (2) these discrepancies exacerbate existing biases in the model, (3) upweighting the standard and robust errors of poorly performing classes during training decreased this discrepancy for both the standard error and robustness, and (4) increasing the attack margin for poorly performing classes during training also decreased these discrepancies, at the cost of some performance. (1), (2), and (3) match the conclusions of the original paper, while (4) deviated in that it was unsuccessful in increasing the robustness of the most poorly performing classes. Because the model and datasets used were totally different from the original paper's, it is hard to quantify the exact similarity of our results. Conceptually, however, I find very similar conclusions.
§ WHAT WAS EASY

It was easy to identify the unfairness resulting from existing adversarial training methods and implement the authors' FRL (reweight) and FRL (remargin) approaches for combating this bias. The algorithm and training approaches are well outlined in the original paper, and are relatively accessible even for those with little experience in adversarial training.

§ WHAT WAS DIFFICULT

Because of the resource limitations imposed, I was unable to successfully implement the suggested training process using the authors' specific model and dataset. Also, even with a smaller model and dataset it was difficult to thoroughly tune the hyperparameters of the model and algorithm.

§ COMMUNICATION WITH ORIGINAL AUTHORS

I did not have contact with the authors during the process of this reproduction. I reached out for feedback once I had a draft of the report, but did not hear back.
§ 1 INTRODUCTION

The advent of adversarial examples (1) (2) has motivated the need for procedures which decrease the sensitivity to noise of learned models (which I will call adversarial robustness or simply robustness). One such method is adversarial training (3) (4), in which adversarial examples are generated during the training process and are mixed in with "clean" examples to create mixed training batches of both manipulated and unmanipulated images. Learning on these batches has been shown to improve the robustness of models to adversarial attacks, often at a slight cost to standard performance (accuracy).

To be Robust or to be Fair: Towards Fairness in Adversarial Training identifies that adversarial training creates unfairness in the resulting robust model. While the overall robustness of the model improves, some classes in the resulting model are more robust to adversarial attacks than others. Not only are the robustness benefits unfairly distributed, so too are the standard performance losses; the classes which are less robust at the end of the procedure tend to be the ones which suffer more in terms of standard performance. Moreover, these classes tend to be the ones which were harder to learn before adversarial training. As Xu et al. describe it: "adversarial training tends to make the hard classes even harder to be classified or robustly classified."
Motivated by this unfairness, Xu et al. conduct a theoretical analysis of the problem to explain this empirically observed phenomenon. They then draw on (5) to describe robust error as the sum of standard error (i.e. the probability that a class will be incorrectly classified without manipulation) and boundary error (i.e. the probability that there exists some $\epsilon$-ball attack which can change a classifier's decision on a given class). Using this description, they reformulate the learning problem into a series of cost-sensitive classification problems that can be penalized for violating fairness constraints. With this reformulation, they present two FRL algorithms for making adversarial training more fair: one which upweights the error of classes which violate the fairness constraints during training, and one which increases the attack radius for classes which violate fairness constraints during training.
§ 2 SCOPE OF REPRODUCIBILITY

The focus of this reproduction will be attempting to demonstrate the following:

* Claim 1, which is supported by Experiment 1 in Figure 1, is that adversarial training creates unfair outcomes in terms of both robustness and standard error.

* Claim 2, which is also supported by Experiment 1 in Figure 1, is that this unfairness exacerbates existing biases in model performance.

* Claim 3, which is supported by Experiment 2 in Figure 2, is that upweighting the error of classes which violate fairness constraints (using the authors' FRL: reweight algorithm) can improve the standard errors for the most poorly performing classes, and to a lesser degree their robustness.

* Claim 4, which is explored by Experiment 3 in Figure 3, is that increasing the margin of attack for classes which violate fairness constraints (using the authors' FRL: remargin algorithm) can also improve the fairness of the model, perhaps more effectively than reweighting.
§ 3 METHODOLOGY

As an educational exercise, I aimed to re-implement the authors' training approaches from their descriptions in the paper. Because of the resource limitations, however, I opted to use a simpler model and dataset in my experiments.

§ 3.1 MODEL DESCRIPTIONS

The paper used the PreAct-ResNet18 and WRN28 architectures for their experimentation; I opted for the LeNet-5 architecture in the interest of efficiency. Though it is a much simpler model than the paper's originals, it provided enough complexity to conduct my experiments.

§ 3.2 DATASETS
The paper used the CIFAR10 and SVHN datasets for their experimentation; I used the Fashion-MNIST dataset. The train set comprises 60,000 examples and the test set 10,000. Both have a uniform label distribution across all 10 classes. The original train and test sets are used in Experiment 1, while Experiments 2 and 3 split the train set into an 80/20 train/validation split for the FRL process. The only preprocessing done was to resize the images from $28 \times 28$ to $32 \times 32$. The data is freely available here.
§ 3.3 HYPERPARAMETERS

The fairness tolerance hyperparameter was selected based on the recommendations in the paper (5%), as was the baseline $\epsilon$ (8/255 for the PGD attack). For Experiment 1 I used a learning rate of 1e-3 for regular training and adversarial training, as the paper recommended. Due to resource constraints I had to limit the number of epochs I trained for to 15, and based on convergence behavior I decayed the learning rate more often than the original paper did (every 4 rounds by a factor of 3, as opposed to every 40 rounds by a factor of 10). For the simpler model and dataset, this worked well.

For Experiments 2 and 3 I used a baseline learning rate of 1e-4, which I selected based on unstable behavior at a rate of 1e-3. I suspect this is due to differences in the model and dataset used, as well as the way I implemented the reweighting and remargining systems.
I utilized the results of the fairness evaluation ($\phi$ values) in the training process by applying a Softmax function to create cross-entropy loss weightings, and as such the $\alpha$ values were different from the original paper's. I tried a variety of $\alpha$ values in the space of (1, 2, 5, 10) and a variety of ratios of natural-$\alpha$s to boundary-$\alpha$s. The best results came from a 5:1 natural:boundary error weighting, which decreased the worst-case standard error by 25% and the worst-case robust error by 11%.
§ 3.4 EXPERIMENTAL SETUP AND CODE

For Experiment 1, I defined the LeNet-5 architecture and trained a classifier on the Fashion-MNIST dataset for 15 epochs at a learning rate of 1e-3. I then adversarially trained a new LeNet-5 model using a PGD attack for the same number of epochs at the same learning rate, with a 50/50 mixture of clean and manipulated images. I then compared the classwise standard accuracy (i.e. ability to predict a "clean" image correctly) and robust accuracy (i.e. ability to predict an image correctly despite manipulation) of the natural model and adversarially trained model. The results are recorded in Figure 1.

For Experiment 2, I retrained the unfair adversarially-trained model under the FRL (reweight) paradigm. During this procedure, I recorded the overall and classwise standard and boundary errors of the model during each batch, and based on these errors I re-calculated loss weights for each class. The loss function used was the sum of the standard loss and the loss for adversarially manipulated images with respect to the predictions on their unmanipulated counterparts (corresponding to standard error and boundary error, respectively). Classes were penalized based on violations of fairness constraints, i.e. how greatly they differed from the average standard and boundary errors for all classes. I ran 10 rounds of retraining, and then compared the original unfair adversarially trained model with its retrained counterpart, comparing classwise standard and robust accuracy. These results can be found in Figure 2.
Experiment 3 was much the same as Experiment 2, the only difference being that instead of simply upweighting the loss of classes which violated fairness constraints, the radius of a class's attack during training was increased or decreased based on the size of its violation. Again, I ran 10 rounds of retraining, and then compared the original unfair adversarially trained model with its retrained counterpart, comparing classwise standard and robust accuracy. These results can be found in Figure 3.

All the code for these experiments, as well as example notebooks that walk through the procedure, can be found here.
§ 3.5 COMPUTATIONAL REQUIREMENTS

As mentioned, I used Google Colab for all of the experimentation. As such, it is difficult to describe the exact hardware that was used, or to even be confident of the consistency of the hardware throughout this process. I did use GPU resources, though I cannot speak to any specific type.

(Figure omitted: per-class accuracy bar charts with STD Err and PGD Err series; panel (a) naturally trained, panel (b) adversarially trained.)

Figure 1: Adversarial training produces unfair outcomes across classes, and worsens existing performance discrepancies

Experiment 1 can be run in approximately 15 minutes of GPU time. Experiment 2 can be run in approximately 5 hours of GPU time (for all alpha-combinations) and Experiment 3 can be run in approximately 3 hours.

All three notebooks can sometimes be run in parallel, but not always. Colab can be a bit unpredictable.
§ 4 RESULTS

In my experiments, I found that:

* Adversarial training does in fact lead to classwise discrepancies in standard error and adversarial robustness; the least robust classes in the resulting model are the ones the model originally had a hard time learning; and the penalties to standard performance brought on by adversarial training exacerbate existing biases in model performance.

* Reweighting the natural and boundary errors to penalize classes violating fairness constraints during adversarial retraining can improve the fairness of the model with respect to standard error, and to a lesser degree robust error.

* Remargining the attack radius for classes violating fairness constraints during adversarial retraining can also improve the fairness (i.e. lower the variance across classes) of the model's robustness (at a cost to robust performance) as well as improve the standard error.
Most of these results agree with the paper's conclusions, although the results in Experiment 3 differ in that I was not able to improve the robustness of the model with remargining as well as I could with reweighting. The original paper showed the opposite: that reweighting was unable to improve robustness for the most poorly performing classes. One experiment I did not conduct was to try both reweighting and remargining together, which the authors suggest might be fruitful. I leave that as a further exercise.
§ 4.1 RESULTS REPRODUCING ORIGINAL PAPER

§ 4.1.1 RESULT 1

The result of Experiment 1 (shown in Figure 1) relates to claims 1 and 2 in Section 2. I found that in the naturally trained model, the standard error is quite low and the adversarial error (PGD error) is quite high. The adversarial error is not quite as uniform as in the original paper, I suspect because of the simplicity of the dataset and model I used.

Still, it is observable that after adversarial training, the model's adversarial error is much lower across the board, but not in a fair way. Certain classes are much more robust to attack than others, and in particular the classes which had poorer initial standard performance are the ones with worse adversarial robustness. Moreover, we can see that there are penalties to standard performance incurred as a result of adversarial training, and the classes which suffer the most are the ones the natural model already had a hard time learning.
(Figure omitted: per-class accuracy bar charts with STD Err series; panel (a) "vanilla" adversarially-trained model, panel (b) after the FRL (Reweight) procedure.)

Figure 2: FRL Reweight is able to mitigate standard performance losses, while also increasing the robustness of the most difficult class
Indeed, as Xu et al. put it, "adversarial training tends to make the hard classes even harder to be classified or robustly classified." This is exactly what I found, even with a totally different model and dataset.
§ 4.1.2 RESULT 2

The result of Experiment 2 (shown in Figure 2) relates to claim 3 in Section 2. Here we can see the result of my best attempt at reweighting the loss of classes during adversarial retraining based on their violation of fairness constraints. As per the paper's FRL retraining algorithm, I began with an adversarially trained model and iteratively tried to retrain it, adjusting the loss of each class as I went depending on whether it violated fairness, and to what degree. As such, I compared the "vanilla" adversarially trained model with the resulting model after retraining.

I observed that for the hardest class to classify, there is a 25% reduction in standard error (bringing it nearly in line with the naturally trained model) and an 11% reduction in robust error. This is not totally free; we can observe, for example, that the standard and robust error for some of the easier classes suffers as a result. Still, the resulting model is fairer than it originally was.

These results seem relatively in-line with the original paper's, though again because of the different model and dataset selected it is hard to quantify the exact similarity. The overall conclusion is much the same though: reweighting is hugely successful in decreasing the classwise standard error discrepancies brought on by adversarial training, and to a lesser degree in decreasing classwise robustness discrepancies.
§ 4.1.3 RESULT 3
|
| 136 |
+
|
| 137 |
+
The result of Experiment 3 (shown in Figure 3) relates to claim 4 in Section 2. This is the result of my best attempt at remargining during the retraining procedure. I observed a slight improvement in the worst-case standard error, but little to no improvement in the worst case robustness, and indeed a general degradation in robustness across most classes.
These results were not in line with the paper's, which found FRL (Remargin) to be more effective than FRL (Reweight). This may be due to differences in our datasets, or to artifacts of my implementation. It should be noted that because of the greater expense of this procedure, it was harder to thoroughly explore its hyperparameters, and this is still an interesting area of exploration for me.
§ 5 DISCUSSION

I believe that overall my results are quite in line with the original paper's. I found that adversarial training does produce unfair results, both in the improvements to robustness the model receives and in the degradation of standard error it experiences. I also found that these unequal costs penalize classes that are harder for the model to learn, making it worse at classifying what it already had trouble with. Finally, I found that the FRL (Reweight) approach was able to mitigate most of the degradation in standard performance for the hardest-to-learn classes, and to a lesser degree improve the robustness for that class as well as the overall robustness.

[Two bar-chart panels of per-class accuracy with standard (STD) and PGD error: (a) the "vanilla" adversarially-trained model; (b) after the FRL (Remargin) procedure.]

Figure 3: FRL Remargin was able to slightly improve the standard performance of the hardest class, but decreased the overall robustness of the model
One weak point of my implementation was the FRL (Remargin) procedure. I was unable to successfully improve the model's robustness via remargining, though I am not confident that I thoroughly explored the space. It was the most costly procedure I ran, and it ran into its fair share of Colab timeouts, making hyperparameter tuning tricky.
One last experiment I did not have time for was a combination of reweighting and remargining, which Xu et al. suggest is the most effective means of increasing adversarial fairness. This is because I wanted positive results in remargining before attempting to combine the two approaches, which I was unfortunately unable to achieve. This is still an open question to pursue.
§ 5.1 WHAT WAS EASY

One of the paper's easiest claims to verify was that adversarial training creates the unfair outcomes described above. Even with little experience in adversarial training, I found that with only a bit of effort I could observe this phenomenon myself.
It was also fairly easy to implement Xu et al.'s FRL algorithms; the remargining and reweighting procedures are very clearly explained in the paper and were straightforward to put into code. One aspect of the paper not discussed in this report is their theoretical analysis, which was also very clear and helped motivate and explain the FRL problem formulation.
§ 5.2 WHAT WAS DIFFICULT

As mentioned above, the part I had the most difficulty with was the remargining procedure. It took much longer than anticipated, and its expense made automated hyperparameter searches difficult. Because I was unsuccessful in improving the model's robustness with remargining, I was also hesitant to implement a combined FRL (Reweight) and FRL (Remargin) approach, which the authors suggest might be the most effective. As mentioned, this is an area I am still actively exploring. Hopefully in the future I can replicate their success there too.
§ 5.3 COMMUNICATION WITH ORIGINAL AUTHORS

As mentioned in my summary, I did not have contact with the authors throughout this process. It was only upon drafting my report that I learned it was encouraged to contact the original authors; in the future, I think it would be a great idea to communicate with them sooner. I reached out with a preprint of the report for any feedback or suggestions, but did not hear back.
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/StblE2MQ3AY/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Reproduction and Extension of "Queens are Powerful too: Mitigating Gender Bias in Dialogue Generation"
Anonymous Author(s)

Affiliation

Address

email 1
## Reproducibility Summary

## Scope of Reproducibility

The main claims we are trying to reproduce are that bias controlled training, or combining counterfactual data augmentation, the positively biased data collected by Dinan et al. [5], and bias controlled training, for the LIGHT dataset yields generated dialogue in which the percent of gendered words and male bias closely match the ground truth.
## Methodology

We fine-tuned a transformer model, pre-trained on Reddit data [1], using the ParlAI API [8] with counterfactual data augmentation, positively biased data collection, bias controlled training, and all three bias mitigation techniques combined, as discussed in the original paper [5]. We implemented counterfactual data augmentation and bias controlled training ourselves. All models were trained and evaluated using a single NVIDIA Tesla P100 PCIe GPU, taking approximately 1.3 to 4.6 GPU hours per model.
## Results
Overall, our results support the main claims of the original paper [5]. Although the percent gendered words and male bias in our results are not exactly the same as those in the original paper [5], the main trends are the same. The main difference is lower male bias for the baseline model in our results. However, our findings and the trend similarities between our results and those obtained by Dinan et al. [5] demonstrate that bias controlled training or combining all three bias mitigation techniques can effectively control the amount of gender bias present in the model generated responses, supporting Dinan et al.'s claims [5].
## What was easy
When reproducing the original paper [5], implementing counterfactual data augmentation and bias controlled training was easy since these techniques were well-described in the original paper [5]. Also, combining all three bias mitigation techniques was simple, as we applied the same techniques used to implement each bias mitigation method individually.
## What was difficult

The only difficulty we encountered, albeit minor, was learning how to use ParlAI, which was necessary to use the same model as in the original paper [5]. However, after reading through the ParlAI documentation and experimenting with the ParlAI Google Colaboratory tutorial [10], we understood how to use ParlAI to fine-tune the model, pre-trained on Reddit conversations [1], for the datasets we created.
## Communication with original authors
We communicated with Emily Dinan, an author of the original paper [5], who clarified what model was used in the original paper [5] and provided us with the command to download the model as well as the hyperparameter settings used when fine-tuning.
## 1 Introduction

Ad-hoc methods for mitigating social bias in natural language data remain an active area of modern research. As transfer learning with pre-trained models such as BERT [3] and GPT-2 [9] continues to be pervasive, the inherent issues in their training data have come to light. Large corpora of unstructured text from the Internet reflect the biases and inequalities of society, and are consequently learned by these models and their fine-tuned variants. To this end, Dinan et al. [5] proposed three techniques to specifically mitigate gender bias in fine-tuned language models, using the LIGHT dataset [11] as an example. The LIGHT dataset is a crowdsourced collection of dialogues spoken between "personas," characters played by either humans or models, in a fantasy adventure game, LIGHT [11]. Dinan et al. applied the following techniques to this dataset: 1) counterfactual data augmentation, in which gendered words are replaced with their opposite, i.e., replacing "he" with "she"; 2) positively biased data collection, in which new, less biased female character personas and dialogues are created via crowd-sourcing; and 3) bias controlled training, in which the dialogue is placed in groups based on the number of gendered words it contains and this group number is included with the dialogue as a special token when training the model [5]. The model itself is a transformer pre-trained on a dataset of Reddit conversations [1] and then fine-tuned on LIGHT using the three techniques described above, individually, as well as one variant combining all three techniques.
## 2 Scope of reproducibility
The aim of this paper is to evaluate the following hypotheses made by Dinan et al. [5] by reproducing their experiments.
- Combining counterfactual data augmentation, the positively biased data collected by Dinan et al. [5], and bias controlled training for the LIGHT dataset yields generated dialogue in which the percent of gendered words and male bias closely match the ground truth.
- Bias controlled training for the LIGHT dataset yields generated dialogue in which the percent of gendered words and male bias closely match the ground truth.
## 3 Methodology
We fine-tuned the transformer model, pre-trained on Reddit data [1], using the ParlAI API [8] with counterfactual data augmentation, positively biased data collection, bias controlled training, and all three bias mitigation techniques combined, as discussed in the original paper [5]. We generated training, test, and validation datasets for counterfactual data augmentation and bias controlled training from the original LIGHT dialogue dataset. We also formatted the dataset used for each bias mitigation technique, extracting the dialogue from each dataset and placing it in the proper format, such that everything said in the dialogue so far is used to predict the next response in the dialogue, which is the label. All models were trained and evaluated using a single NVIDIA Tesla P100 PCIe GPU.
### 3.1 Model descriptions

Dinan et al. [5] used a transformer with 8 encoder layers, 8 decoder layers, an embedding dimension of 512, and 16 attention heads. This model was pre-trained on Reddit conversations from the pushshift.io Reddit dataset, which contains 2.2 billion samples for training after removing comments that contain URLs or that are less than 5 characters long [5]. Specifically, the model was trained on all comments in each thread and learned to predict the next comment in the thread [5]. Thus, this pre-training makes the model well-suited for the dialogue generation task [1]. The model contains 87,508,992 trainable parameters, and the training objective is to minimize the cross entropy loss on the original and augmented LIGHT dialogues.
### 3.2 Datasets

We used the ParlAI API command from the paper's ParlAI project page [4] to obtain the following data: the LIGHT dataset [11], a list of counterfactuals, a list of gendered words [12], and the positively biased data collected by Dinan et al. [5]. The LIGHT dataset and the positively biased data collected by Dinan et al. contain information about interactions between characters in the game, LIGHT, such as the character names and personas, dialogue, and environment where the interaction took place, to name a few. The LIGHT dataset contains approximately 11,000 interactions and 111,000 utterances [11]. An utterance is a single occurrence of a character talking during a dialogue. The LIGHT dataset is used to fine-tune the baseline model.
Each bias mitigation method employed by Dinan et al. [5] also requires fine-tuning the pre-trained model on a new dataset. For counterfactual data augmentation, we used the list of counterfactuals to replace every gendered word, according to the list of gendered words from Zhao et al. [12], in the LIGHT dialogue dataset with its counterfactual. The list of gendered words [12] has 1,049 words. The list of counterfactuals contains each gendered word and its opposite gendered counterpart. For example, the counterfactual for "he" is "she". In addition, the list of counterfactuals, containing 421 words, was constructed by Dinan et al. [5] using the list of gendered words from Zhao et al. [12].
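For illustration, the core of counterfactual augmentation is a dictionary-driven word swap over the dialogue text. The snippet below is a simplified sketch with a toy swap list of my own; the actual implementation uses the full 421-word counterfactual list.

```python
import re

def apply_counterfactuals(text, swaps):
    """Replace each gendered word with its counterfactual, preserving
    capitalization of the first letter."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, swaps)) + r")\b", re.IGNORECASE
    )
    def swap(match):
        word = match.group(0)
        out = swaps.get(word.lower(), word)
        return out.capitalize() if word[0].isupper() else out
    return pattern.sub(swap, text)

swaps = {"he": "she", "she": "he", "king": "queen", "queen": "king"}
print(apply_counterfactuals("He bowed to the queen.", swaps))
# -> "She bowed to the king."
```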
For positively biased data collection, Dinan et al. crowdsource new dialogue data, asking workers to create dialogue assuming gender equality [5]. This dataset contains 507 interactions and 6,658 utterances. Given the time and resource constraints, we used Dinan et al.'s positively biased data [5] rather than crowdsourcing the data ourselves.
For bias controlled training, we appended a control token "fX mY" after the last utterance in an episode, which is a portion of a dialogue between two characters, based on the label, which is the next utterance in the dialogue. In "fX mY", X is 1 if there is at least one female gendered word in the label and 0 otherwise, and Y is 1 if there is at least one male gendered word in the label and 0 otherwise. Thus, each label falls into one of four bins: "f0 m0", which has no gendered words; "f0 m1", which has no female gendered words but at least one male gendered word; "f1 m0", which has at least one female gendered word but no male gendered words; and "f1 m1", which has at least one female and one male gendered word. Placing the dialogue labels in these bins causes the model to learn the gender bias present in an utterance, allowing us to specify the desired gender bias in the model's generated dialogue using one of the four bins. We used the list of gendered words from Zhao et al. [12] to determine the number of gendered words and the proper bin for each label and model generated utterance.
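A minimal sketch of how the bin token can be computed and appended is shown below; the two word sets are tiny stand-ins for the full Zhao et al. [12] lists, and the function names are our own.

```python
FEMALE_WORDS = {"she", "her", "queen", "woman"}  # stand-in for the full list
MALE_WORDS = {"he", "his", "king", "man"}        # stand-in for the full list

def bin_token(label):
    """Return the bias control token 'fX mY' for a label utterance."""
    words = set(label.lower().split())
    f = int(bool(words & FEMALE_WORDS))
    m = int(bool(words & MALE_WORDS))
    return f"f{f} m{m}"

def append_control_token(history, label):
    """Append the bin token after the last utterance in the episode."""
    return history + " " + bin_token(label)

print(bin_token("the queen greeted the king"))  # -> "f1 m1"
```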
We split the datasets used for fine-tuning each model into approximately 90% for training and 10% for an unseen test set. The training set was further split into 80% for training and 20% for validation.
### 3.3 Hyperparameters

As previously mentioned, the model, pre-trained on Reddit conversations, has 8 encoder layers, 8 decoder layers, 16 attention heads, and an embedding dimension of 512 [1]. In addition, this model has 2,048 nodes in the hidden layer, uses the GeLU activation function, and truncates each dialogue to at most 512 characters and each label to at most 128 characters. Other hyperparameters for each model are an initial learning rate of 3.1e-7, the memory-efficient Adam optimizer, gradient clipping of 0.1, an inverse square root learning rate scheduler with a decay factor of 0.5 and patience of 3, no activation or attention dropout, a batch size of 20, and dropout of 0.1 or 0.15 depending on hyperparameter tuning results. Emily Dinan, one of the authors of the original paper [5], provided some of the hyperparameter values, but we reduced the batch size due to memory constraints with Google Colaboratory resources. Since most hyperparameters were provided by Emily Dinan, the learning rate is adjusted by the inverse square root learning rate scheduler, and the batch size could not be increased due to GPU limitations, the only remaining hyperparameter that we could effectively tune to improve perplexity, based on our experience with deep NLP models, particularly pre-trained transformers, was dropout. Thus, we tuned dropout, applied to the embeddings and before layer normalization, for the model combining all three bias mitigation techniques, since this model provided the best results according to the original paper [5], to obtain lower perplexity on the validation set. In order to tune dropout, we increased dropout in increments of 0.025, starting from a value of 0.1, which was given by Emily Dinan, up to 0.2. After training a number of models with different dropouts, we found that 0.15 dropout resulted in the lowest perplexity. In addition, for the extension with neutral, generated data, we again tuned dropout and found 0.15 to be the optimal value.
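For concreteness, a fine-tuning run with these settings can be expressed through ParlAI's training script roughly as follows. Treat this as a hedged sketch, not our exact invocation: the keyword-argument names follow ParlAI's documented train_model options and may differ across ParlAI versions, the task name is a stand-in for our custom augmented teachers, and the zoo path refers to the tutorial transformer generator [1] we fine-tuned.

```python
from parlai.scripts.train_model import TrainModel

TrainModel.main(
    task="light_dialog",                       # stand-in for our custom teacher
    model="transformer/generator",
    init_model="zoo:tutorial_transformer_generator/model",
    dict_file="zoo:tutorial_transformer_generator/model.dict",
    model_file="/content/models/bias_ctrl",    # illustrative output path
    batchsize=20,
    learningrate=3.1e-7,
    optimizer="mem_eff_adam",
    lr_scheduler="invsqrt",
    dropout=0.15,
    gradient_clip=0.1,
    validation_metric="ppl",
    validation_every_n_epochs=0.25,
    validation_patience=10,
)
```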
### 3.4 Experimental setup and code

Similar to the Reddit dataset used for pre-training the model, as well as the training done by Dinan et al. [5], we generated the datasets based on the entire history of conversations so far, predicting the next utterance in each conversation. For each bias mitigation technique, and for combining all three techniques, we generated the datasets from the original conversations in the LIGHT dataset [11] for training, evaluation, and response generation. Using ParlAI's API, we fine-tuned 5 versions of the model, pre-trained on Reddit conversations [1]: baseline, counterfactual data augmentation, positively biased data collection, bias controlled training, and all three bias mitigation techniques combined. When fine-tuning each model, the best model is saved according to the perplexity on the validation set. As long as the perplexity on the validation set continues to improve, the model continues training, and at every quarter epoch, the version of the model achieving the lowest perplexity on the validation set is saved. If the model does not improve after 10 quarter epochs, training is automatically stopped to avoid overfitting or unnecessary training. After training is complete, we run further evaluation to obtain F1 scores on the validation and test datasets as well as F1 scores pertaining to the labels for each bin for these two datasets. Finally, we pass every dialogue episode in the test set through the model to generate responses. These generated responses are used to compute statistics defined by Dinan et al. [5] to evaluate gender bias in generated responses from the model. ${}^{1}$
All experiments were run on Google Colaboratory using a single NVIDIA Tesla P100 PCIe GPU. After fine-tuning each model, the labels in the test set are split into the bias controlled training bins and within these bins, each model's generated utterances are also grouped into the same bins. This allowed us to compute the percent gendered words and male bias for the generated utterances within each bin of labels for the test set. In addition, we computed the F1 score for predicted tokens in generated responses separately for each bin of test labels.
### 3.5 Computational requirements

The model used by Dinan et al. in the original paper [5] was pre-trained on Reddit conversations in the same manner as the polyencoder transformer model from Humeau et al. [7], and contains the same number of encoder layers, decoder layers, attention heads, and embedding dimension size. Training the polyencoder transformer on the ConvAI2 dataset, which has about 131,000 elements [6], took 2.7 hours using 8 NVIDIA Volta 100 GPUs [7]. Since the polyencoder transformer has about 20% more parameters than the model used by Dinan et al. and the LIGHT dataset is about 15% smaller than the ConvAI2 dataset, we estimated it took Dinan et al. about 2.3 hours or less, which is 85% of 2.7 hours, using 8 GPUs to fine-tune each model, or about 11.5 hours total for all 5 models.
We initially estimated we could also fine-tune all 5 models in approximately 11.5 hours using Google Cloud Platform. Instead, we used a single NVIDIA Tesla P100 PCIe GPU on Google Colaboratory. During training, each model required about 16 GB of GPU memory, maximizing the GPU memory available with the aforementioned batch size of 20. Table 1 lists runtime information for fine-tuning each model, where the model combining all three bias mitigation techniques uses dropout of 0.15 for the embeddings and before layer normalization, as previously mentioned. The runtime for this model with other values for dropout was approximately the same. The actual training time for our models was substantially lower than our estimate, likely due, at least in part, to the unpredictability of Google Colaboratory providing the full computational GPU resources assigned to a particular session.
| Model | Number of Epochs | Training Time (GPU Hours) | Average Runtime per Epoch (GPU Hours) |
|---|---|---|---|
| Baseline | 7.51 | 1.32 | 0.18 |
| Counterfactual Data Augmentation | 4.75 | 1.63 | 0.34 |
| Positively Biased Data Collection | 7.26 | 1.40 | 0.19 |
| Bias Controlled Training | 7.76 | 1.38 | 0.18 |
| All 3 Bias Mitigation Techniques | 6.58 | 4.63 | 0.70 |

Table 1: Computational Requirements for Training each Model
## 4 Results
Below are the results from reproducing and extending the experiments in the original paper [5]. Overall, our results support the hypotheses previously identified. Further discussion of the results in relation to the hypotheses is provided below. We also implement 3 extensions to the original paper [5], two of which are aimed at addressing the high time and monetary cost of positively biased data collection, which requires crowdsourcing data.
Figure 1 shows the percent gendered words, percent male bias, and F1 score of each model's generated utterances for conversations in the test set, separated according to the test label bins, where "Baseline" is the model trained only on the LIGHT dataset, "CDA" is counterfactual data augmentation, "Pos Data" is positively biased data collection, "Bias" is bias controlled training, and "All" combines all three bias mitigation techniques. In Figure 1, each set of three graphs corresponds to one of the four bias controlled training bins for test labels. The results shown in Figure 1 are quite similar to those in Figure 1 of the original paper [5] in terms of how the percent gendered words, percent male bias, and F1 score for each model in each bin compare. Although our results are not exactly the same as those in the original paper [5] in terms of values, the main trends in our results are the same as those in the original paper [5]. The main differences between our results and those in the original paper [5] are lower male bias in each bin for the baseline and a percent gendered words for "CDA" that is closer in value to the baseline in our results.
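The two bias statistics behind these charts are straightforward to compute from the generated utterances. A sketch, again with the word sets standing in for the full gendered word list [12]:

```python
def bias_statistics(utterances, female_words, male_words):
    """Percent gendered words (share of all tokens that are gendered) and
    percent male bias (share of gendered tokens that are male)."""
    total = female = male = 0
    for utterance in utterances:
        for word in utterance.lower().split():
            total += 1
            female += word in female_words
            male += word in male_words
    gendered = female + male
    pct_gendered = 100.0 * gendered / max(total, 1)
    pct_male_bias = 100.0 * male / max(gendered, 1)
    return pct_gendered, pct_male_bias
```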
---

${}^{1}$ The GitHub repository for our project is located at https://github.com/Pnaghavi/Mitigating-Gender-Bias-in-Generated-Text

---
[Figure 1 consists of twelve bar charts: for each test-label bin (${\mathrm{F}}^{0}{\mathrm{M}}^{0}$, ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$, ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$, ${\mathrm{F}}^{ + }{\mathrm{M}}^{ + }$), the % gendered words, % male bias, and F1 score of the Baseline, CDA, Pos Data, Bias, and All models.]

Figure 1: Results for Reproducing the Experiments in the Original Paper [5]
### 4.1 Results for First Hypothesis
According to the first hypothesis, the number of gendered words in the generated utterances for the "All" model for each bin should be similar to the number of gendered words in the labels of the test set. This is observed in all four bins in Figure 1. Specifically, for the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin, the test labels have no gendered words, which means the generated utterances for both models should have a very low number of gendered words and approximately 50% male bias. The "All" model satisfies these two requirements, as depicted in the first set of charts in Figure 1, because the generated utterances from this model are less than $1\%$ gendered words and the percent male bias is approximately ${44}\%$ . For the ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ bin, the test labels have at least one female gendered word and no male gendered words, which means the generated utterances should have a higher number of gendered words and a smaller percentage of male bias. This is observed for the "All" model in the second set of charts in Figure 1, since the percent gendered words for the "All" model is higher than the baseline and the percent male bias is under $5\%$ , compared to about ${42}\%$ male bias for the baseline. Similarly, in the ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ bin, the test labels have at least one male gendered word and no female gendered words. Thus, the generated utterances for the "All" model should have a higher number of gendered words and a larger percentage of male bias, which is depicted in the third set of charts in Figure 1. In the ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ bin, the percent of gendered words for the "All" model is about $1\%$ higher than the baseline and the male bias is approximately 97%, compared to only ${52}\%$ for the baseline. For the last bin, ${\mathrm{F}}^{ + }{\mathrm{M}}^{ + }$ , the test labels have at least one male and one female gendered word. As a result, the generated utterances for the "All" model should have a higher percentage of gendered words and closer to ${50}\%$ male bias. As shown in the last set of charts in Figure 1, the "All" model does have a higher percentage of gendered words than the baseline, specifically ${13}\%$ , compared to $8\%$ for the baseline. However, the male bias is about ${43}\%$ for the "All" model, which is not as close to an even gender bias split, ${50}\%$ male and ${50}\%$ female, as the baseline, which has about ${46}\%$ male bias. In the discussion section, we give a possible cause for this discrepancy in our results.
### 4.2 Results for Second Hypothesis
Based on the second hypothesis, the number of gendered words in each utterance generated by the "Bias" model should be similar to that of the labels in the test set for each dialogue. This can be clearly seen for all four bins in Figure 1. In the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin, the test labels have no gendered words. If the model has learned from bias controlled training, producing properly gender biased text according to the bin appended to the end of the dialogue, then the generated text for the "Bias" model in the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin should have very few gendered words and about 50% male bias. As depicted in the first set of charts in Figure 1, for the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin, the "Bias" model has less than 1% gendered words and approximately 57% male bias, as desired. For the ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ bin, the generated text should have more female gendered words and few to no male gendered words, matching the gender bias in the test set label. This is observed in the second set of charts in Figure 1, since the "Bias" model yields a higher percent of gendered words than the baseline and less than 5% male bias, compared to 42% male bias for the baseline. Generated text in the ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ test label bin should have more male gendered words and few to no female gendered words, which is depicted in the third set of charts in Figure 1. Specifically, the percent gendered words for the "Bias" model is 1% higher than the baseline and male bias is approximately 94%, compared to only 52% for the baseline. In the last bin, ${\mathrm{F}}^{ + }{\mathrm{M}}^{ + }$, the generated text should ideally have an even distribution of male and female gendered words and a higher percentage of gendered words overall. This is shown in the last set of charts in Figure 1, since the "Bias" model has a higher percentage of gendered words than the baseline, specifically 11% for the "Bias" model and 8% for the baseline, although male bias is 36% for the "Bias" model compared to 46% for the baseline, which is not an even distribution. A possible cause for this discrepancy in our results is described in the discussion section.
### 4.3 Effect of Removing Positively Biased Data Collection

Given the time and monetary cost involved in crowdsourcing data, specifically the positively biased data Dinan et al. collected [5], a natural question is whether adding this positively biased data to counterfactual data augmentation and bias controlled training is worth the cost. In other words, what is the performance loss if positively biased data collection is excluded from the model, relying instead only on counterfactual data augmentation and bias controlled training?
#### 4.3.1 Implementation and Experimental Setup
We fine-tuned the model, pre-trained on Reddit conversations [1], on the data generated from counterfactual data augmentation and using bias controlled training. The implementation and experimental setup is the same as that for the model that combines all three bias mitigation techniques, except we excluded the positively biased data collected by Dinan et al. [5].
#### 4.3.2 Results and Discussion
Figure 2 depicts, for each bin, the percent gendered words and percent male bias in the generated utterances as well as the F1 score for the "All" model, which combines all three bias mitigation techniques, the "CDA + Bias" model, which uses counterfactual data augmentation and bias controlled training, and the baseline. As expected, for all four bins, the percent gendered words, percent male bias, and F1 score for "All" achieves better results than "CDA + Bias," in terms of higher F1 scores and the percent gendered words and male bias being closer to ground truth, except "CDA + Bias" achieves a slightly higher F1 score for the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin. However, results for "CDA + Bias" are always within about $2\%$ of the results for "All" and the overall F1 score for "CDA + Bias" is within 0.25% of the overall F1 score for "All," specifically an F1 score of 15.31 for "CDA + Bias" and 15.56 for "All." Although incorporating positively biased data collection does yield better results, given how small the difference is between including vs. excluding this technique, it may not be worth the necessary time or money. Instead, one could simply use counterfactual data augmentation and bias controlled training or find a less costly way to collect positively biased data, which is the focus of the next extension.
### 4.4 Generating Gender Neutral Data
In the previous section, we created a model incorporating counterfactual data augmentation and bias controlled training, removing positively biased data collection. Instead of completely removing this additional, positively biased data, an alternative, which still avoids the cost of crowdsourcing data, is to generate new, gender neutral data using code. Incorporating gender neutral data can help shift the gender bias of the data, whether male or female, closer to 50%.
[Figure 2 consists of three bar charts, % gendered words, % male bias, and F1 score per bin, comparing the Baseline, All, and CDA + Bias models.]

Figure 2: Results for the Baseline vs. Combining all 3 Bias Mitigation Techniques vs. Counterfactual Data Augmentation and Bias Controlled Training
#### 4.4.1 Implementation and Experimental Setup
We fine-tuned the model, pre-trained on Reddit conversations [1], using counterfactual data augmentation and bias controlled training, then generated responses from this model for all dialogue episodes in the training data. For each generated response, we set the response to be either the model's generated response or the actual label. If the generated response is neutral, meaning it contains approximately the same number of male and female gendered words or no gendered words, we use the generated response 90% of the time, selecting the actual label in all other cases. These neutral generated responses were used to reconstruct the conversations. We then created new training and validation datasets from these conversations that partially included neutral model generated utterances. Finally, a new model was fine-tuned on these datasets. The experimental setup is the same as that for the model that combines all three bias mitigation techniques, except we excluded the positively biased data collected by Dinan et al. [5] and used the gender neutral data we generated instead. An important point to note is that the test dataset for this new model is the original test dataset. Thus, the F1 scores obtained for each bin and the overall F1 score are from the original test dataset, containing ${100}\%$ natural conversations.
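The selection rule used when rebuilding the conversations can be summarized as follows. The neutrality test and the 90% keep-rate mirror the description above, while the function names are our own.

```python
import random

def is_gender_neutral(text, female_words, male_words):
    """Neutral = no gendered words, or a roughly equal female/male count."""
    words = text.lower().split()
    f = sum(w in female_words for w in words)
    m = sum(w in male_words for w in words)
    return abs(f - m) <= 1

def choose_training_response(generated, label, female_words, male_words,
                             keep_prob=0.9):
    """Keep a neutral generated response 90% of the time; otherwise fall
    back to the original human label."""
    if is_gender_neutral(generated, female_words, male_words):
        if random.random() < keep_prob:
            return generated
    return label
```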
#### 4.4.2 Results and Discussion

Figure 3 shows, for each bin, the percent gendered words and percent male bias in the generated utterances as well as the F1 score for the "All" model, which combines all three bias mitigation techniques, the baseline, and the "CDA + Bias + Our Gen Data" and "CDA + Bias" models, which use counterfactual data augmentation and bias controlled training with and without our neutral, generated data, respectively. Results for our new model, "CDA + Bias + Our Gen Data," are within 2% of the results for "All" in all cases except male bias for ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$, ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$, and ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$. For ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$, our model yields male bias closer to 50% than "All" by 6%, specifically male bias of about 43% for "All" and 49% for our model. Also, our model results in about 4% higher male bias than "All" for the ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ bin and about 4% lower male bias for the ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ bin. However, these are actually the desired results because for each bin, the male bias for our model is closer to 50%, at least slightly, than "All." Thus, our model results in more gender neutral responses overall, which was the goal of this method. In addition, all results for our new model are still relatively close to the results of "All," demonstrating the effectiveness of our new method, as it did not require any crowdsourced data, only additional training. One concern with using model generated responses is that they may not be as coherent as natural dialogue, but the F1 scores for our new model are comparable to those for the "All" model. For future work, if we repeatedly use the dialogues with our neutral, generated responses to create new generated responses, coherency will become a greater concern and necessitate the use of a coherency assessment model, such as some of the machine-learned evaluation metrics highlighted by Celikyilmaz et al. [2]. Given that adding our neutral, generated data to counterfactual data augmentation and bias controlled training yields approximately the same or slightly higher F1 scores than the "All" model, by using only neutral, generated responses with high coherency, according to the metrics introduced by Celikyilmaz et al. [2], in the reconstructed conversations, we can continue to shift the model towards gender neutrality while maintaining high F1 scores.
### 4.5 Percent Generated Responses with Respect to Bins
To better evaluate the degree to which our extensions generate gender neutral responses in comparison to the "All" model, we placed the generated responses from these three models into one of the bias controlled training bins based on the presence of gendered words in the generated response, and computed the percent of generated utterances in each bin for each of the three models.
[Figure 3 consists of three bar charts, % gendered words, % male bias, and F1 score per bin, comparing the Baseline, All, CDA + Bias, and CDA + Bias + Our Gen Data models.]

Figure 3: Results for the Baseline vs. Combining all 3 Bias Mitigation Techniques vs. Counterfactual Data Augmentation and Bias Controlled Training both with and without Neutral, Generated Data
#### 4.5.1 Results and Discussion
Figure 4 depicts the percent of generated responses in each bin for the baseline, when combining all bias mitigation techniques, denoted "All," and when using counterfactual data augmentation and bias controlled training with and without our neutral, generated data, denoted "CDA + Bias + Our Gen Data" and "CDA + Bias," respectively. These results demonstrate that the "CDA + Bias + Our Gen Data" model generates more gender neutral responses overall, compared to "All" and "CDA + Bias." Specifically, for the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ and ${\mathrm{F}}^{ + }{\mathrm{M}}^{ + }$ bins, which are the more gender neutral bins, "CDA + Bias + Our Gen Data" has the highest, or near highest, percentage of generated responses. For the ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ and ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ bins, which are not gender neutral, "CDA + Bias + Our Gen Data" has the lowest percent of generated responses. In addition to generating more neutral responses, "CDA + Bias + Our Gen Data" achieves approximately the same F1 score for each bin as "All," as depicted in Figure 3, demonstrating that the control over gender bias provided by bias controlled training is still present despite the responses being more gender neutral overall. This indicates an opportunity for future work: shifting the overall bias of the model's generated responses in any direction, male biased, female biased, or neutral, by selecting model generated responses that belong to the bin with the desired bias, infusing the original dialogues with this bias, and training a model to generate more responses with the desired bias. By repeating this process, we can push the model to generate more responses biased in the desired direction, as long as we can still achieve a high F1 score and maintain coherency, which can be checked by machine-learned coherency metrics [2] as a form of second opinion on the generated responses during the infusion process.
[Figure 4 consists of four bar charts, one per bin, of the % generated responses for the Baseline, All, CDA + Bias, and CDA + Bias + Our Gen Data models.]

Figure 4: Percent of Generated Responses in each Bin for the Baseline vs. Combining all 3 Bias Mitigation Techniques vs. Counterfactual Data Augmentation and Bias Controlled Training with and without Neutral, Generated Data
## 5 Discussion

Given how closely our experimental results for bias controlled training and combining all three original bias mitigation methods matched the ground truth, these two techniques can be used to control the gender bias of these models' generated text. Thus, gender neutral dialogue could be created by constructing ground truth data with either no gendered words or 50% male bias and 50% female bias within the gendered words. Given that we reproduced the results from the original paper [5] for bias controlled training and combining all three bias mitigation techniques, we feel that overall our results support the claims in the original paper [5], despite the differences in value between our results and those in the original paper [5].

One possible cause for the differences between our results and those in the original paper [5] is our training method, since we achieve higher F1 scores for each model and stop training when perplexity stops decreasing, which may not be the same criterion Dinan et al. used to determine when to stop training. It is also possible that in the original paper [5], the list of gendered words used to place utterances in bins was a subset of the original gendered word list [12], most likely the list of counterfactuals. This could also account for the lower male bias we observed for the baseline in our results compared to Dinan et al.'s; however, Dinan et al. explicitly stated they used the gendered word list from Zhao et al. [12].

Evaluating our approach to reproducing the original paper [5], one of its strengths is that we ran all code on Google Colaboratory with one GPU, a free resource, in a reasonable amount of time. However, Google Colaboratory imposes GPU limitations, and as a result we could not use the same batch size as that in the original paper [5], although we achieve higher F1 scores than those in the original paper [5].
### 5.1 What was easy
When reproducing the original paper [5], implementing counterfactual data augmentation and bias controlled training and combining all three bias mitigation techniques was easy. Specifically, counterfactual data augmentation and bias controlled training were well-described in the original paper [5] and the list of counterfactuals needed for counterfactual data augmentation was provided by Dinan et al. in an easy-to-use format. Combining all three bias mitigation techniques was also an easy part of reproducing the original paper [5], as we simply needed to apply the same techniques used when implementing each bias mitigation method individually.
### 5.2 What was difficult
The only difficulty we encountered, albeit minor, was learning how to use ParlAI, which was necessary in order to use the same model as that in the original paper [5]. However, after reading through the ParlAI documentation and experimenting with the ParlAI Google Colaboratory tutorial [10], we understood how to use ParlAI to fine-tune the model, pre-trained on Reddit conversations [1], for the datasets we created.
### 5.3 Recommendations for reproducibility
Overall, reproducing the original paper [5] was fairly straightforward, but we do have three recommendations to further improve reproducibility. The first is more clearly indicating what model, pre-trained on Reddit conversations, is used, because the source of the model is not provided in the original paper [5], only that the model is based on the implementation by Miller et al. [8], who introduce ParlAI in that paper. The second recommendation is to specify the hyperparameters used when fine-tuning each model, as these were not provided in the original paper [5]. The last recommendation is to describe the stopping condition for fine-tuning the models. We stopped training when perplexity stopped improving, but this resulted in higher F1 scores for the models than those achieved in the original paper [5].
### 5.4 Communication with original authors

We communicated with Emily Dinan, one of the authors of the original paper [5], who clarified what model, pre-trained on Reddit conversations, was used in the original paper [5] and provided us with the command to download the model as well as the hyperparameter settings for training the models.
## References

[1] Tutorial transformer generator. tutorial-transformer-generator

[2] A. Celikyilmaz, E. Clark, and J. Gao. Evaluation of text generation: A survey, 2020.

[3] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.

[4] E. Dinan, A. Fan, A. Williams, J. Urbanek, D. Kiela, and J. Weston. Queens are powerful too: Mitigating gender bias in dialogue generation. ParlAI, https://parl.ai/projects/genderation_bias/

[5] E. Dinan, A. Fan, A. Williams, J. Urbanek, D. Kiela, and J. Weston. Queens are powerful too: Mitigating gender bias in dialogue generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8173-8188, Online, Nov. 2020. Association for Computational Linguistics.

[6] E. Dinan, V. Logacheva, V. Malykh, A. Miller, K. Shuster, J. Urbanek, D. Kiela, A. Szlam, I. Serban, R. Lowe, S. Prabhumoye, A. W. Black, A. Rudnicky, J. Williams, J. Pineau, M. Burtsev, and J. Weston. The second conversational intelligence challenge (ConvAI2), 2019.

[7] S. Humeau, K. Shuster, M.-A. Lachaux, and J. Weston. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring, 2020.

[8] A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bordes, D. Parikh, and J. Weston. ParlAI: A dialog research software platform, 2018.

[9] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019.

[10] S. Roller. ParlAI tutorial. https://colab.research.google.com/drive/1bRMvNOlGXaTF5fuTidgvlAl-Lb41F7AD#scrollTo=zsb-Cvf6lnVX, 2020.

[11] J. Urbanek, A. Fan, S. Karamcheti, S. Jain, S. Humeau, E. Dinan, T. Rocktäschel, D. Kiela, A. Szlam, and J. Weston. Learning to speak and act in a fantasy text adventure game. 2019.

[12] J. Zhao, Y. Zhou, Z. Li, W. Wang, and K.-W. Chang. Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4847-4853, Brussels, Belgium, Oct.-Nov. 2018. Association for Computational Linguistics.
## A Generated Text Statistics for ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ Bin

| Model | % Gendered Words | % Male Bias | F1 Score | % Generated Responses |
|---|---|---|---|---|
| Baseline | 5.48 | 45.14 | 13.22 | 35.11 |
| Counterfactual Data Augmentation | 5.35 | 38.05 | 12.98 | 38.96 |
| Positively Biased Data Collection | 5.94 | 46.50 | 13.06 | 36.31 |
| Bias Controlled Training | 0.69 | 56.85 | 13.59 | 41.30 |
| All 3 Bias Mitigation Techniques | 0.32 | 43.53 | 13.75 | 39.41 |
| CDA + Bias Control | 0.80 | 44.96 | 14.62 | 41.94 |
| CDA + Bias Control + Our Gen. Data | 0.72 | 49.68 | 14.62 | 41.40 |

Table 2: Results for each Model for ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ Bin
## B Generated Text Statistics for ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ Bin

| Model | % Gendered Words | % Male Bias | F1 Score | % Generated Responses |
|---|---|---|---|---|
| Baseline | 6.40 | 42.07 | 14.84 | 29.88 |
| Counterfactual Data Augmentation | 6.16 | 33.85 | 14.27 | 31.04 |
| Positively Biased Data Collection | 7.62 | 40.88 | 14.99 | 31.48 |
| Bias Controlled Training | 8.76 | 4.70 | 15.40 | 34.26 |
| All 3 Bias Mitigation Techniques | 8.25 | 1.95 | 15.92 | 35.02 |
| CDA + Bias Control | 7.62 | 4.08 | 15.48 | 33.74 |
| CDA + Bias Control + Our Gen. Data | 8.44 | 5.90 | 15.40 | 33.41 |

Table 3: Results for each Model for ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ Bin
## C Generated Text Statistics for ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ Bin

| Model | % Gendered Words | % Male Bias | F1 Score | % Generated Responses |
|---|---|---|---|---|
| Baseline | 6.90 | 52.35 | 15.12 | 20.38 |
| Counterfactual Data Augmentation | 6.46 | 41.53 | 14.9 | 18.67 |
| Positively Biased Data Collection | 7.51 | 53.53 | 15.41 | 19.92 |
| Bias Controlled Training | 7.36 | 94.37 | 15.40 | 14.82 |
| All 3 Bias Mitigation Techniques | 7.89 | 97.13 | 17.31 | 13.41 |
| CDA + Bias Control | 6.97 | 95.52 | 16.37 | 14.00 |
| CDA + Bias Control + Our Gen. Data | 6.55 | 93.41 | 16.60 | 12.98 |

Table 4: Results for each Model for ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ Bin
## D Generated Text Statistics for ${\mathrm{F}}^{ + }{\mathrm{M}}^{ + }$ Bin

| Model | % Gendered Words | % Male Bias | F1 Score | % Generated Responses |
|---|---|---|---|---|
| Baseline | 7.70 | 46.28 | 15.38 | 14.64 |
| Counterfactual Data Augmentation | 7.00 | 44.19 | 14.83 | 11.33 |
| Positively Biased Data Collection | 8.51 | 49.71 | 15.37 | 12.28 |
| Bias Controlled Training | 11.40 | 36.41 | 15.56 | 9.62 |
| All 3 Bias Mitigation Techniques | 12.55 | 43.01 | 16.73 | 12.15 |
| CDA + Bias Control | 11.15 | 40.89 | 15.48 | 10.32 |
| CDA + Bias Control + Our Gen. Data | 11.54 | 44.64 | 16.61 | 12.21 |

Table 5: Results for each Model for ${\mathrm{F}}^{ + }{\mathrm{M}}^{ + }$ Bin
## E Distribution of Generated Responses across Bins for each Model

[Figure 5 consists of four bar charts, one per bin, of the % generated responses for the Baseline, CDA, Pos Data, Bias, All, CDA + Bias, and CDA + Bias + Our Gen Data models.]

Figure 5: Percent of Generated Responses from each Model in each Bin
papers/ML_Reproducibility_Challenge/ML_Reproducibility_Challenge 2021/ML_Reproducibility_Challenge 2021 Fall/StblE2MQ3AY/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ REPRODUCTION AND EXTENSION OF "QUEENS ARE POWERFUL TOO: MITIGATING GENDER BIAS IN DIALOGUE GENERATION"
Anonymous Author(s)

Affiliation

Address

email 1
§ REPRODUCIBILITY SUMMARY

§ SCOPE OF REPRODUCIBILITY

The main claims we are trying to reproduce are that bias controlled training, or combining counterfactual data augmentation, the positively biased data collected by Dinan et al. [5], and bias controlled training, for the LIGHT dataset yields generated dialogue in which the percent of gendered words and male bias closely match the ground truth.
§ METHODOLOGY

We fine-tuned a transformer model, pre-trained on Reddit data [1], using the ParlAI API [8] with counterfactual data augmentation, positively biased data collection, bias controlled training, and all three bias mitigation techniques combined, as discussed in the original paper [5]. We implemented counterfactual data augmentation and bias controlled training ourselves. All models were trained and evaluated using a single NVIDIA Tesla P100 PCIe GPU, taking approximately 1.3 to 4.6 GPU hours per model.
|
| 20 |
+
|
| 21 |
+
§ RESULTS
|
| 22 |
+
|
| 23 |
+
Overall, our results support the main claims of the original paper [5]. Although the percent gendered words and male bias in our results do not exactly match those in the original paper [5], the main trends are the same. The main difference is a lower male bias for the baseline model in our results. Nevertheless, the similarity in trends between our results and those obtained by Dinan et al. [5] demonstrates that bias controlled training, or combining all three bias mitigation techniques, can effectively control the amount of gender bias present in the model generated responses, supporting Dinan et al.'s claims [5].
|
| 24 |
+
|
| 25 |
+
§ WHAT WAS EASY
|
| 26 |
+
|
| 27 |
+
When reproducing the original paper [5], implementing counterfactual data augmentation and bias controlled training was easy since these techniques were well-described in the original paper [5]. Also, combining all three bias mitigation techniques was simple, as we applied the same techniques used to implement each bias mitigation method individually.
|
| 28 |
+
|
| 29 |
+
§ WHAT WAS DIFFICULT
|
| 30 |
+
|
| 31 |
+
The only difficulty we encountered, albeit minor, was learning how to use ParlAI, which was necessary to use the same model as in the original paper [5]. However, after reading through the ParlAI documentation and experimenting with the ParlAI Google Colaboratory tutorial [10], we understood how to use ParlAI to fine-tune the model, pre-trained on Reddit conversations [1], on the datasets we created.
|
| 32 |
+
|
| 33 |
+
§ COMMUNICATION WITH ORIGINAL AUTHORS
|
| 34 |
+
|
| 35 |
+
We communicated with Emily Dinan, an author of the original paper [5], who clarified what model was used in the original paper [5] and provided us with the command to download the model as well as the hyperparameter settings used when fine-tuning.
|
| 36 |
+
|
| 37 |
+
§ 1 INTRODUCTION
|
| 38 |
+
|
| 39 |
+
Ad-hoc methods for mitigating social bias in natural language data remain an active area of modern research. As transfer learning with pre-trained models such as BERT [3] and GPT-2 [9] continues to be pervasive, the inherent issues in their training data have come to light. Large corpora of unstructured text from the Internet reflect the biases and inequalities of society, and these are consequently learned by such models and their fine-tuned variants. To this end, Dinan et al. [5] proposed three techniques to specifically mitigate gender bias in fine-tuned language models, using the LIGHT dataset [11] as an example. The LIGHT dataset is a crowdsourced collection of dialogues spoken between "personas," characters played by either humans or models, in a fantasy adventure game, LIGHT [11]. Dinan et al. applied the following techniques to this dataset: 1) counterfactual data augmentation, in which gendered words are replaced with their opposite, i.e., replacing "he" with "she"; 2) positively biased data collection, in which new, less biased female character personas and dialogues are created via crowdsourcing; and 3) bias controlled training, in which each dialogue is placed in a group based on the number of gendered words it contains and this group number is included with the dialogue as a special token when training the model [5]. The model itself is a transformer pre-trained on a dataset of Reddit conversations [1] and then fine-tuned on LIGHT using the three techniques described above, individually, as well as with all three techniques combined.
|
| 40 |
+
|
| 41 |
+
§ 2 SCOPE OF REPRODUCIBILITY
|
| 42 |
+
|
| 43 |
+
The aim of this paper is to evaluate the following hypotheses made by Dinan et al. [5] by reproducing their experiments.
|
| 44 |
+
|
| 45 |
+
* Combining counterfactual data augmentation, the positively biased data collected by Dinan et al. [5], and bias controlled training for the LIGHT dataset yields generated dialogue in which the percent of gendered words and male bias closely match the ground truth.
|
| 46 |
+
|
| 47 |
+
* Bias controlled training for the LIGHT dataset yields generated dialogue in which the percent of gendered words and male bias closely match the ground truth.
|
| 48 |
+
|
| 49 |
+
§ 3 METHODOLOGY
|
| 50 |
+
|
| 51 |
+
We fine-tuned the transformer model, pre-trained on Reddit data [1], using the ParlAI API [8] with counterfactual data augmentation, positively biased data collection, bias controlled training, and all three bias mitigation techniques combined, as discussed in the original paper [5]. We generated training, test, and validation datasets for counterfactual data augmentation and bias controlled training from the original LIGHT dialogue dataset. We also formatted the dataset used for each bias mitigation technique, extracting the dialogue and arranging it so that everything said so far in a dialogue is used to predict the next response, which serves as the label. All models were trained and evaluated using a single NVIDIA Tesla P100 PCIe GPU.
|
| 52 |
+
|
| 53 |
+
§ 3.1 MODEL DESCRIPTIONS
|
| 54 |
+
|
| 55 |
+
Dinan et al. [5] used a transformer with 8 encoder layers, 8 decoder layers, an embedding dimension of 512, and 16 attention heads. This model was pre-trained on Reddit conversations from the pushshift.io Reddit dataset, which contains 2.2 billion samples for training after removing comments that contain URLs or that are less than 5 characters long [5]. Specifically, the model was trained on all comments in each thread and learned to predict the next comment in the thread [5]. This pre-training makes the model well-suited for the dialogue generation task [1]. The model contains 87,508,992 trainable parameters, and the training objective is to minimize the cross entropy loss on the original and augmented LIGHT dialogues.
|
| 56 |
+
|
| 57 |
+
§ 3.2 DATASETS
|
| 58 |
+
|
| 59 |
+
We used the ParlAI API command from the paper's ParlAI project page [4] to obtain the following data: the LIGHT dataset [11], a list of counterfactuals, a list of gendered words [12], and the positively biased data collected by Dinan et al. [5]. The LIGHT dataset and the positively biased data collected by Dinan et al. contain information about interactions between characters in the game, LIGHT, such as the character names and personas, the dialogue, and the environment where the interaction took place. The LIGHT dataset contains approximately 11,000 interactions and 111,000 utterances [11]. An utterance is a single occurrence of a character talking during a dialogue. The LIGHT dataset is used to fine-tune the baseline model.
|
| 60 |
+
|
| 61 |
+
Each bias mitigation method employed by Dinan et al. [5] also requires fine-tuning the pre-trained model on a new dataset. For counterfactual data augmentation, we replaced every gendered word in the LIGHT dialogue dataset, as identified by the list of gendered words from Zhao et al. [12], with its counterfactual. The list of gendered words [12] has 1,049 words. The list of counterfactuals contains each gendered word and its opposite gendered counterpart; for example, the counterfactual for "he" is "she". The list of counterfactuals, containing 421 words, was constructed by Dinan et al. [5] using the list of gendered words from Zhao et al. [12].
|
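To make this step concrete, the following is a minimal sketch of the counterfactual swap; the word pairs below are a tiny illustrative stand-in for the actual 421-word list released by Dinan et al. [5], and the tokenization is deliberately simplified.

```python
# Minimal sketch of counterfactual data augmentation (CDA).
# COUNTERFACTUALS is a toy stand-in for the 421-word list from Dinan et al.
COUNTERFACTUALS = {"he": "she", "she": "he", "king": "queen",
                   "queen": "king", "father": "mother", "mother": "father"}

def swap_gendered_words(utterance: str) -> str:
    """Replace every gendered word with its counterfactual, preserving case."""
    swapped = []
    for token in utterance.split():
        core = token.rstrip(".,!?;:")        # keep trailing punctuation intact
        suffix = token[len(core):]
        replacement = COUNTERFACTUALS.get(core.lower())
        if replacement is not None:
            if core[:1].isupper():
                replacement = replacement.capitalize()
            swapped.append(replacement + suffix)
        else:
            swapped.append(token)
    return " ".join(swapped)

print(swap_gendered_words("He told the queen."))  # -> "She told the king."
```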
| 62 |
+
|
| 63 |
+
For positively biased data collection, Dinan et al. crowdsourced new dialogue data, asking workers to create dialogue assuming gender equality [5]. This dataset contains 507 interactions and 6,658 utterances. Given the time and resource constraints, we used Dinan et al.'s positively biased data [5] rather than crowdsourcing the data ourselves.
|
| 64 |
+
|
| 65 |
+
For bias controlled training, we appended the token "fX mY" after the last utterance in an episode, which is a portion of a dialogue between two characters, based on the label, which is the next utterance in the dialogue. In "fX mY", X is 1 if there is at least one female gendered word in the label and 0 otherwise, and Y is 1 if there is at least one male gendered word in the label and 0 otherwise. Thus, each label falls into one of four bins: "f0 m0", which has no gendered words; "f0 m1", which has no female gendered words but at least one male gendered word; "f1 m0", which has at least one female gendered word but no male gendered words; and "f1 m1", which has at least one female and one male gendered word. Placing the dialogue labels in these bins allows the model to learn the gender bias present in an utterance, letting us specify the desired gender bias in the model's generated dialogue using one of the four bins. We used the list of gendered words from Zhao et al. [12] to determine the number of gendered words and the proper bin for each label and each model generated utterance. A minimal sketch of this bin assignment is shown below.
|
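In the sketch, the two word sets are illustrative placeholders for the 1,049-word lists from Zhao et al. [12].

```python
# Minimal sketch of bias controlled training bin assignment.
# FEMALE_WORDS and MALE_WORDS are toy stand-ins for the lists of Zhao et al.
FEMALE_WORDS = {"she", "her", "queen", "mother", "woman"}
MALE_WORDS = {"he", "his", "king", "father", "man"}

def bin_token(label: str) -> str:
    """Return the 'fX mY' control token for a dialogue label."""
    tokens = {t.strip(".,!?;:").lower() for t in label.split()}
    x = int(bool(tokens & FEMALE_WORDS))  # 1 if any female gendered word
    y = int(bool(tokens & MALE_WORDS))    # 1 if any male gendered word
    return f"f{x} m{y}"

# The token is appended to the dialogue history before training:
history = "The knight entered the hall."
label = "She bowed to the king."
training_text = history + " " + bin_token(label)
print(bin_token(label))  # -> "f1 m1"
```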
| 66 |
+
|
| 67 |
+
We split the datasets used for fine-tuning each model into approximately 90% for training and 10% for an unseen test set. The training set was further split into 80% for training and 20% for validation, giving roughly 72% training, 18% validation, and 10% test overall.
|
| 68 |
+
|
| 69 |
+
§ 3.3 HYPERPARAMETERS
|
| 70 |
+
|
| 71 |
+
As previously mentioned, the model, pre-trained on Reddit conversations, has 8 encoder layers, 8 decoder layers, 16 attention heads, and an embedding dimension of 512 [1]. In addition, this model has 2,048 nodes in the hidden layer, uses the GeLU activation function, and truncates each dialogue to at most 512 characters and each label to at most 128 characters. Other hyperparameters for each model are an initial learning rate of 3.1e-7, the memory-efficient Adam optimizer, gradient clipping of 0.1, an inverse square root learning rate scheduler with a decay factor of 0.5 and patience of 3, no activation or attention dropout, a batch size of 20, and dropout of 0.1 or 0.15 depending on hyperparameter tuning results. Emily Dinan, one of the authors of the original paper [5], provided some of the hyperparameter values, but we reduced the batch size due to memory constraints with Google Colaboratory resources. Since most hyperparameters were provided by Emily Dinan, the learning rate is adjusted by the inverse square root scheduler, and the batch size could not be increased due to GPU limitations, the only remaining hyperparameter we could effectively tune to improve perplexity, based on our experience with deep NLP models and pre-trained transformers in particular, was dropout. Thus, we tuned dropout, applied to the embeddings and before layer normalization, for the model combining all three bias mitigation techniques, since this model provided the best results according to the original paper [5], to obtain lower perplexity on the validation set. We increased dropout in increments of 0.025, starting from the value of 0.1 given by Emily Dinan, up to 0.2. After training a number of models with different dropout values, we found that a dropout of 0.15 resulted in the lowest perplexity. For the extension with neutral, generated data, we again tuned dropout and found 0.15 to be the optimal value.
|
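As a rough illustration only (not the authors' exact command), fine-tuning with these settings via ParlAI's Python API might look like the sketch below; the task name and model paths are placeholders, and option names should be verified against the installed ParlAI version.

```python
# Hedged sketch of fine-tuning with ParlAI; paths and task are placeholders.
from parlai.scripts.train_model import TrainModel

TrainModel.main(
    task='light_dialog',                            # placeholder task name
    model='transformer/generator',
    init_model='/path/to/reddit_pretrained_model',  # placeholder path
    model_file='/path/to/finetuned_model',          # placeholder path
    n_layers=8, n_heads=16, embedding_size=512, ffn_size=2048,
    activation='gelu',
    dropout=0.15,                  # best value found in our tuning sweep
    batchsize=20,
    learningrate=3.1e-7,
    optimizer='adam',              # the paper used memory-efficient Adam
    lr_scheduler='invsqrt',
    lr_scheduler_decay=0.5,
    lr_scheduler_patience=3,
    gradient_clip=0.1,
    truncate=512, label_truncate=128,
    validation_metric='ppl', validation_metric_mode='min',
    validation_every_n_epochs=0.25,   # evaluate every quarter epoch
    validation_patience=10,           # stop after 10 stale validations
)
```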
| 72 |
+
|
| 73 |
+
§ 3.4 EXPERIMENTAL SETUP AND CODE
|
| 74 |
+
|
| 75 |
+
Similar to the Reddit dataset used for pre-training the model, as well as the training done by Dinan et al. [5], we generated the datasets based on the entire history of each conversation so far, predicting the next utterance in the conversation. For each bias mitigation technique, and for combining all three techniques, we generated the datasets from the original conversations in the LIGHT dataset [11] for training, evaluation, and response generation. Using ParlAI's API, we fine-tuned 5 versions of the model, pre-trained on Reddit conversations [1]: baseline, counterfactual data augmentation, positively biased data collection, bias controlled training, and all three bias mitigation techniques combined. When fine-tuning each model, the best model is saved according to the perplexity on the validation set. As long as the perplexity on the validation set continues to improve, the model continues training, and at every quarter epoch, the version of the model achieving the lowest perplexity on the validation set is saved. If the model does not improve after 10 quarter epochs, training is automatically stopped to avoid overfitting or unnecessary training. After training is complete, we run further evaluation to obtain F1 scores on the validation and test datasets as well as F1 scores pertaining to the labels for each bin for these two datasets. Finally, we pass every dialogue episode in the test set through the model to generate responses. These generated responses are used to compute statistics defined by Dinan et al. [5] to evaluate gender bias in the model's generated responses. ${}^{1}$
|
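A minimal sketch of how one episode unrolls into (context, label) pairs under this scheme (toy utterances; the real episodes come from LIGHT):

```python
# Minimal sketch: unroll a dialogue so that everything said so far
# predicts the next utterance, which serves as the label.
dialogue = [
    "A knight enters the tavern.",
    "Greetings, traveler!",
    "I seek the queen's counsel.",
]

def unroll(utterances):
    examples = []
    for i in range(1, len(utterances)):
        context = "\n".join(utterances[:i])  # full history so far
        label = utterances[i]                # next utterance to predict
        examples.append((context, label))
    return examples

for context, label in unroll(dialogue):
    print(repr(context), "->", repr(label))
```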
| 76 |
+
|
| 77 |
+
All experiments were run on Google Colaboratory using a single NVIDIA Tesla P100 PCIe GPU. After fine-tuning each model, the labels in the test set are split into the bias controlled training bins, and each model's generated utterances are grouped into the same bins. This allowed us to compute the percent gendered words and male bias for the generated utterances within each bin of labels for the test set. In addition, we computed the F1 score for predicted tokens in generated responses separately for each bin of test labels.
|
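A minimal sketch of these two statistics over a set of generated responses (placeholder word lists again; the actual computation uses the Zhao et al. [12] lists):

```python
# Minimal sketch of the gender bias statistics for generated responses.
FEMALE_WORDS = {"she", "her", "queen", "mother", "woman"}
MALE_WORDS = {"he", "his", "king", "father", "man"}

def gender_stats(responses):
    """Return (% gendered words, % male bias) over a list of responses."""
    total = gendered = male = 0
    for response in responses:
        for token in response.lower().split():
            token = token.strip(".,!?;:")
            total += 1
            if token in FEMALE_WORDS or token in MALE_WORDS:
                gendered += 1
                male += token in MALE_WORDS
    pct_gendered = 100.0 * gendered / max(total, 1)
    pct_male_bias = 100.0 * male / max(gendered, 1)  # male share of gendered words
    return pct_gendered, pct_male_bias

print(gender_stats(["She saw the king.", "He left."]))  # -> (50.0, 66.66...)
```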
| 78 |
+
|
| 79 |
+
§ 3.5 COMPUTATIONAL REQUIREMENTS
|
| 80 |
+
|
| 81 |
+
The model used by Dinan et al. in the original paper [5] was pre-trained on Reddit conversations in the same manner as the polyencoder transformer model from Humeau et al. [7], and contains the same number of encoder layers, decoder layers, and attention heads and the same embedding dimension. Training the polyencoder transformer on the ConvAI2 dataset, which has about 131,000 elements [6], took 2.7 hours using 8 NVIDIA V100 GPUs [7]. Since the polyencoder transformer has about ${20}\%$ more parameters than the model used by Dinan et al. and the LIGHT dataset is about ${15}\%$ smaller than the ConvAI2 dataset, we estimated it took Dinan et al. about 2.3 hours or less (85% of 2.7 hours) using 8 GPUs to fine-tune each model, or about 11.5 hours total for all 5 models.
|
| 82 |
+
|
| 83 |
+
We initially estimated we could also fine-tune all 5 models in approximately 11.5 hours using Google Cloud Platform. Instead, we used a single NVIDIA Tesla P100 PCIe GPU on Google Colaboratory. During training, each model required about 16 GB of GPU memory, maximizing the GPU memory available with the aforementioned batch size of 20. Table 1 lists runtime information for fine-tuning each model, where the model combining all three bias mitigation techniques uses dropout of 0.15 for the embeddings and before layer normalization, as previously mentioned. The runtime for this model with other values of dropout was approximately the same. The actual training time for our models was substantially lower than our estimate, likely due, at least in part, to the unpredictability of Google Colaboratory providing the full computational GPU resources assigned to a particular session.
|
| 84 |
+
|
| 86 |
+
|
| 87 |
+
Model | Number of Epochs | Training Time (GPU Hours) | Average Runtime per Epoch (GPU Hours)
|
| 88 |
+
|
| 90 |
+
Baseline | 7.51 | 1.32 | 0.18
|
| 91 |
+
|
| 93 |
+
Counterfactual Data Augmentation | 4.75 | 1.63 | 0.34
|
| 94 |
+
|
| 96 |
+
Positively Biased Data Collection | 7.26 | 1.40 | 0.19
|
| 97 |
+
|
| 99 |
+
Bias Controlled Training | 7.76 | 1.38 | 0.18
|
| 100 |
+
|
| 102 |
+
All 3 Bias Mitigation Techniques | 6.58 | 4.63 | 0.70
|
| 103 |
+
|
| 105 |
+
|
| 106 |
+
Table 1: Computational Requirements for Training each Model
|
| 107 |
+
|
| 108 |
+
§ 4 RESULTS
|
| 109 |
+
|
| 110 |
+
Below are the results from reproducing and extending the experiments in the original paper [5]. Overall, our results support the hypotheses previously identified. Further discussion of the results in relation to the hypotheses is provided below. We also implement 3 extensions to the original paper [5], two of which are aimed at addressing the high time and monetary cost of positively biased data collection, which requires crowdsourcing data.
|
| 111 |
+
|
| 112 |
+
Figure 1 shows the percent gendered words, percent male bias, and F1 score of each model's generated utterances for conversations in the test set, separated according to the test label bins, where "Baseline" is the model trained only on the LIGHT dataset, "CDA" is counterfactual data augmentation, "Pos Data" is positively biased data collection, "Bias" is bias controlled training, and "All" combines all three bias mitigation techniques. In Figure 1, each set of three graphs corresponds to one of the four bias controlled training bins for test labels. The results shown in Figure 1 are quite similar to those in Figure 1 of the original paper [5] in terms of how the percent gendered words, percent male bias, and F1 score for each model in each bin compare. Although our results are not exactly the same as those in the original paper [5] in terms of values, the main trends in our results are the same as those in the original paper [5]. The main differences between our results and those in the original paper [5] are lower male bias in each bin for the baseline and a percent gendered words for "CDA" that is closer in value to the baseline in our results.
|
| 113 |
+
|
| 114 |
+
${}^{1}$ The GitHub repository for our project is located at https://github.com/Pnaghavi/Mitigating-Gender-Bias-in-Generated-Text
|
| 115 |
+
|
| 116 |
+
< g r a p h i c s >
|
| 117 |
+
|
| 118 |
+
Figure 1: Results for Reproducing the Experiments in the Original Paper [5]
|
| 119 |
+
|
| 120 |
+
§ 4.1 RESULTS FOR FIRST HYPOTHESIS
|
| 121 |
+
|
| 122 |
+
According to the first hypothesis, the number of gendered words in the generated utterances for the "All" model for each bin should be similar to the number of gendered words in the labels of the test set. This is observed in all four bins in Figure 1. Specifically, for the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin, the test labels have no gendered words, which means the generated utterances for both models should have a very low number of gendered words and approximately 50% male bias. The "All" model satisfies these two requirements, as depicted in the first set of charts in Figure 1, because the generated utterances from this model are less than $1\%$ gendered words and the percent male bias is approximately ${44}\%$ . For the ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ bin, the test labels have at least one female gendered word and no male gendered words, which means the generated utterances should have a higher number of gendered words and a smaller percentage of male bias. This is observed for the "All" model in the second set of charts in Figure 1, since the percent gendered words for the "All" model is higher than the baseline and the percent male bias is under $5\%$ , compared to about ${42}\%$ male bias for the baseline. Similarly, in the ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ bin, the test labels have at least one male gendered word and no female gendered words. Thus, the generated utterances for the "All" model should have a higher number of gendered words and a larger percentage of male bias, which is depicted in the third set of charts in Figure 1. In the ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ bin, the percent of gendered words for the "All" model is about $1\%$ higher than the baseline and the male bias is approximately 97%, compared to only ${52}\%$ for the baseline. For the last bin, ${\mathrm{F}}^{ + }{\mathrm{M}}^{ + }$ , the test labels have at least one male and one female gendered word. As a result, the generated utterances for the "All" model should have a higher percentage of gendered words and closer to ${50}\%$ male bias. As shown in the last set of charts in Figure 1, the "All" model does have a higher percentage of gendered words than the baseline, specifically ${13}\%$ , compared to $8\%$ for the baseline. However, the male bias is about ${43}\%$ for the "All" model, which is not as close to an even gender bias split, ${50}\%$ male and ${50}\%$ female, as the baseline, which has about ${46}\%$ male bias. In the discussion section, we give a possible cause for this discrepancy in our results.
|
| 123 |
+
|
| 124 |
+
§ 4.2 RESULTS FOR SECOND HYPOTHESIS
|
| 125 |
+
|
| 126 |
+
Based on the second hypothesis, the number of gendered words in each utterance generated by the "Bias" model should be similar to that of the labels in the test set for each dialogue. This can be clearly seen for all four bins in Figure 1. In the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin, the test labels have no gendered words. If the model has learned from bias controlled training, producing properly gender biased text according to the bin appended to the end of the dialogue, then the generated text for the "Bias" model in the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin should have very few gendered words and about ${50}\%$ male bias. As depicted in the first set of charts in Figure 1, for the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin, the "Bias" model has less than 1% gendered words and approximately ${57}\%$ male bias, as desired. For the ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ bin, the generated text should have more female gendered words and few to no male gendered words, matching the gender bias in the test set label. This is observed in the second set of charts in Figure 1, since the "Bias" model yields a higher percent of gendered words than the baseline and less than $5\%$ male bias, compared to ${42}\%$ male bias for the baseline. Generated text in the ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ test label bin should have more male gendered words and few to no female gendered words, which is depicted in the third set of charts in Figure 1. Specifically, the percent gendered words for the "Bias" model is 1% higher than the baseline and male bias is approximately ${94}\%$, compared to only ${52}\%$ for the baseline. In the last bin, ${\mathrm{F}}^{ + }{\mathrm{M}}^{ + }$, the generated text should ideally have an even distribution of male and female gendered words and a higher percentage of gendered words overall. This is shown in the last set of charts in Figure 1, since the "Bias" model has a higher percentage of gendered words than the baseline, specifically ${11}\%$ for the "Bias" model and 8% for the baseline, although male bias is ${36}\%$ for the "Bias" model compared to ${46}\%$ for the baseline, which is not an even distribution. A possible cause for this discrepancy in our results is described in the discussion section.
|
| 127 |
+
|
| 128 |
+
§ 4.3 EFFECT OF REMOVING POSITIVELY BIASED DATA COLLECTION
|
| 129 |
+
|
| 130 |
+
Given the time and monetary cost involved in crowdsourcing data, specifically the positively biased data Dinan et al. collected [5], a natural question is whether adding this positively biased data to counterfactual data augmentation and bias controlled training is worth the cost. In other words, what is the performance loss if positively biased data collection is excluded from the model, relying instead only on counterfactual data augmentation and bias controlled training?
|
| 131 |
+
|
| 132 |
+
§ 4.3.1 IMPLEMENTATION AND EXPERIMENTAL SETUP
|
| 133 |
+
|
| 134 |
+
We fine-tuned the model, pre-trained on Reddit conversations [1], on the data generated from counterfactual data augmentation and using bias controlled training. The implementation and experimental setup is the same as that for the model that combines all three bias mitigation techniques, except we excluded the positively biased data collected by Dinan et al. [5].
|
| 135 |
+
|
| 136 |
+
§ 4.3.2 RESULTS AND DISCUSSION
|
| 137 |
+
|
| 138 |
+
Figure 2 depicts, for each bin, the percent gendered words and percent male bias in the generated utterances as well as the F1 score for the "All" model, which combines all three bias mitigation techniques, the "CDA + Bias" model, which uses counterfactual data augmentation and bias controlled training, and the baseline. As expected, for all four bins, "All" achieves better results than "CDA + Bias," in terms of higher F1 scores and percent gendered words and male bias closer to the ground truth, except that "CDA + Bias" achieves a slightly higher F1 score for the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ bin. However, results for "CDA + Bias" are always within about $2\%$ of the results for "All," and the overall F1 score for "CDA + Bias" is within 0.25 points of the overall F1 score for "All," specifically an F1 score of 15.31 for "CDA + Bias" versus 15.56 for "All." Although incorporating positively biased data collection does yield better results, given how small the difference is between including and excluding this technique, it may not be worth the necessary time or money. Instead, one could simply use counterfactual data augmentation and bias controlled training, or find a less costly way to collect positively biased data, which is the focus of the next extension.
|
| 139 |
+
|
| 140 |
+
§ 4.4 GENERATING GENDER NEUTRAL DATA
|
| 141 |
+
|
| 142 |
+
In the previous section, we created a model incorporating counterfactual data augmentation and bias controlled training, removing positively biased data collection. Instead of completely removing this additional, positively biased data, an alternative, which still avoids the cost of crowdsourcing data, is to generate new, gender neutral data using code. Incorporating gender neutral data can help shift the gender bias of the data, whether male or female, closer to 50%.
|
| 143 |
+
|
| 144 |
+
< g r a p h i c s >
|
| 145 |
+
|
| 146 |
+
Figure 2: Results for the Baseline vs. Combining all 3 Bias Mitigation Techniques vs. Counterfactual Data Augmentation and Bias Controlled Training
|
| 147 |
+
|
| 148 |
+
§ 4.4.1 IMPLEMENTATION AND EXPERIMENTAL SETUP
|
| 149 |
+
|
| 150 |
+
We fine-tuned the model, pre-trained on Reddit conversations [1], using counterfactual data augmentation and bias controlled training, then generated responses from this model for all dialogue episodes in the training data. For each episode, the response is set to either the model's generated response or the actual label: if the generated response is neutral, meaning it contains approximately the same number of male and female gendered words or no gendered words, we use the generated response 90% of the time, selecting the actual label in all other cases. These neutral generated responses were used to reconstruct the conversations. We then created new training and validation datasets from these conversations that partially included neutral, model generated utterances. Finally, a new model was fine-tuned on these datasets. The experimental setup is the same as that for the model combining all three bias mitigation techniques, except we excluded the positively biased data collected by Dinan et al. [5] and used the gender neutral data we generated instead. An important point to note is that the test dataset for this new model is the original test dataset. Thus, the F1 scores obtained for each bin and the overall F1 score are from the original test dataset, containing ${100}\%$ natural conversations.
|
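A minimal sketch of this 90% mixing rule follows; the neutrality test and its threshold are our illustrative reading of "approximately the same number," and the word lists are placeholders.

```python
import random

FEMALE_WORDS = {"she", "her", "queen", "mother", "woman"}  # placeholder list
MALE_WORDS = {"he", "his", "king", "father", "man"}        # placeholder list

def is_neutral(response):
    """Neutral: no gendered words, or roughly balanced male/female counts."""
    tokens = [t.strip(".,!?;:") for t in response.lower().split()]
    n_f = sum(t in FEMALE_WORDS for t in tokens)
    n_m = sum(t in MALE_WORDS for t in tokens)
    return (n_f + n_m == 0) or abs(n_f - n_m) <= 1  # illustrative threshold

def choose_response(generated, label, p=0.9):
    """Use a neutral generated response with probability p, else the label."""
    if is_neutral(generated) and random.random() < p:
        return generated
    return label
```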
| 151 |
+
|
| 152 |
+
§ 4.4.2 RESULTS AND DISCUSSION
|
| 153 |
+
|
| 154 |
+
Figure 3 shows, for each bin, the percent gendered words and percent male bias in the generated utterances as well as the F1 score for the "All" model, which combines all three bias mitigation techniques, the baseline, and the "CDA + Bias + Our Gen Data" and "CDA + Bias" models, which use counterfactual data augmentation and bias controlled training with and without our neutral, generated data, respectively. Results for our new model, "CDA + Bias + Our Gen Data," are within $2\%$ of the results for "All" in all cases except male bias for ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$, ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$, and ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$. For ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$, our model yields male bias closer to 50% than "All" by 6%, specifically male bias of about 43% for "All" and 49% for our model. Also, our model results in about $4\%$ higher male bias than "All" for the ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ bin and about $4\%$ lower male bias for the ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ bin. However, these are actually the desired results because, for each bin, the male bias for our model is at least slightly closer to ${50}\%$ than "All." Thus, our model produces more gender neutral responses overall, which was the goal of this method. In addition, all results for our new model are still relatively close to the results of "All," demonstrating the effectiveness of our new method, as it did not require any crowdsourced data, only additional training. One concern with using model generated responses is that they may not be as coherent as natural dialogue, but the F1 scores for our new model are comparable to those for the "All" model. For future work, if we repeatedly use the dialogues with our neutral, generated responses to create new generated responses, coherency will become a greater concern and necessitate the use of a coherency assessment model, such as some of the machine-learned evaluation metrics highlighted by Celikyilmaz et al. [2]. Given that adding our neutral, generated data to counterfactual data augmentation and bias controlled training yields approximately the same or slightly higher F1 scores than the "All" model, by using only highly coherent neutral, generated responses (according to the metrics introduced by Celikyilmaz et al. [2]) in the reconstructed conversations, we can continue to shift the model towards gender neutrality while maintaining high F1 scores.
|
| 155 |
+
|
| 156 |
+
§ 4.5 PERCENT GENERATED RESPONSES WITH RESPECT TO BINS
|
| 157 |
+
|
| 158 |
+
To better evaluate the degree to which our extensions generate gender neutral responses in comparison to the "All" model, we placed the generated responses from these three models into one of the bias controlled training bins based on the presence of gendered words in the generated response, and computed the percent of generated utterances in each bin for each of the three models.
|
| 159 |
+
|
| 160 |
+
< g r a p h i c s >
|
| 161 |
+
|
| 162 |
+
Figure 3: Results for the Baseline vs. Combining all 3 Bias Mitigation Techniques vs. Counterfactual Data Augmentation and Bias Controlled Training both with and without Neutral, Generated Data
|
| 163 |
+
|
| 164 |
+
§ 4.5.1 RESULTS AND DISCUSSION
|
| 165 |
+
|
| 166 |
+
Figure 4 depicts the percent of generated responses in each bin for the baseline, for combining all bias mitigation techniques, denoted "All," and for using counterfactual data augmentation and bias controlled training with and without our neutral, generated data, denoted "CDA + Bias + Our Gen Data" and "CDA + Bias," respectively. These results demonstrate that the "CDA + Bias + Our Gen Data" model generates more gender neutral responses overall, compared to "All" and "CDA + Bias." Specifically, for the ${\mathrm{F}}^{0}{\mathrm{M}}^{0}$ and ${\mathrm{F}}^{ + }{\mathrm{M}}^{ + }$ bins, which are the more gender neutral bins, "CDA + Bias + Our Gen Data" has the highest, or near highest, percentage of generated responses. For the ${\mathrm{F}}^{ + }{\mathrm{M}}^{0}$ and ${\mathrm{F}}^{0}{\mathrm{M}}^{ + }$ bins, which are not gender neutral, "CDA + Bias + Our Gen Data" has the lowest percent of generated responses. In addition to generating more neutral responses, "CDA + Bias + Our Gen Data" achieves approximately the same F1 score for each bin as "All," as depicted in Figure 3, demonstrating that the control over gender bias provided by bias controlled training is still present despite the responses being more gender neutral overall. This indicates an opportunity for future work: the overall bias of the model's generated responses could be shifted in any direction, male biased, female biased, or neutral, by selecting model generated responses that belong to the bin with the desired bias, infusing the original dialogues with this bias, and training a model to generate more responses with the desired bias. By repeating this process, we can reinforce the model to generate more responses biased in the desired direction, as long as we can still achieve a high F1 score and maintain coherency, which can be checked by machine-learned coherency metrics [2] as a form of second opinion on the generated responses during the infusion process.
|
| 167 |
+
|
| 168 |
+
< g r a p h i c s >
|
| 169 |
+
|
| 170 |
+
Figure 4: Percent of Generated Responses in each Bin for the Baseline vs. Combining all 3 Bias Mitigation Techniques vs. Counterfactual Data Augmentation and Bias Controlled Training with and without Neutral, Generated Data
|
| 171 |
+
|
| 172 |
+
§ 5 DISCUSSION
|
| 173 |
+
|
| 174 |
+
Given how closely our experimental results for bias controlled training and for combining all three original bias mitigation methods matched the ground truth, these two techniques can be used to control the gender bias of these models' generated text. Thus, gender neutral dialogue could be created by constructing ground truth data with either no gendered words or ${50}\%$ male bias and ${50}\%$ female bias within the gendered words. Given that we reproduced the results from the original paper [5] for bias controlled training and for combining all three bias mitigation techniques, we feel that overall our results support the claims in the original paper [5], despite the differences in value between our results and those in the original paper [5]. One possible cause for these differences is our training method, since we achieve higher F1 scores for each model and stop training when perplexity stops decreasing, which may not be the same criterion Dinan et al. used to decide when to stop training. It is also possible that in the original paper [5], the list of gendered words used to place utterances in bins was a subset of the original gendered word list [12], most likely the list of counterfactuals. This could also account for the lower male bias we observed for the baseline compared to Dinan et al.'s results; however, Dinan et al. explicitly stated they used the gendered word list from Zhao et al. [12]. Evaluating our approach to reproducing the original paper [5], one of its strengths is that we ran all code on Google Colaboratory with one GPU, a free resource, in a reasonable amount of time. However, Google Colaboratory imposes GPU limitations, and as a result, we could not use the same batch size as in the original paper [5], although we achieve higher F1 scores than those in the original paper [5].
|
| 175 |
+
|
| 176 |
+
§ 5.1 WHAT WAS EASY
|
| 177 |
+
|
| 178 |
+
When reproducing the original paper [5], implementing counterfactual data augmentation and bias controlled training and combining all three bias mitigation techniques was easy. Specifically, counterfactual data augmentation and bias controlled training were well-described in the original paper [5] and the list of counterfactuals needed for counterfactual data augmentation was provided by Dinan et al. in an easy-to-use format. Combining all three bias mitigation techniques was also an easy part of reproducing the original paper [5], as we simply needed to apply the same techniques used when implementing each bias mitigation method individually.
|
| 179 |
+
|
| 180 |
+
§ 5.2 WHAT WAS DIFFICULT
|
| 181 |
+
|
| 182 |
+
The only difficulty we encountered, albeit minor, was learning how to use ParlAI, which was necessary in order to use the same model as that in the original paper [5]. However, after reading through the ParlAI documentation and experimenting with the ParlAI Google Colaboratory tutorial [10], we understood how to use ParlAI to fine-tune the model, pre-trained on Reddit conversations [1], for the datasets we created.
|
| 183 |
+
|
| 184 |
+
§ 5.3 RECOMMENDATIONS FOR REPRODUCIBILITY
|
| 185 |
+
|
| 186 |
+
Overall, reproducing the original paper [5] was fairly straightforward, but we have three recommendations to further improve reproducibility. The first is to more clearly indicate which model, pre-trained on Reddit conversations, is used, because the source of the model is not provided in the original paper [5], only that the model is based on the implementation by Miller et al. [8], who introduced ParlAI in that paper. The second recommendation is to specify the hyperparameters used when fine-tuning each model, as these were not provided in the original paper [5]. The last recommendation is to describe the stopping condition for fine-tuning the models. We stopped training when perplexity stopped improving, but this resulted in higher F1 scores for the models than those achieved in the original paper [5].
|
| 187 |
+
|
| 188 |
+
§ 5.4 COMMUNICATION WITH ORIGINAL AUTHORS
|
| 189 |
+
|
| 190 |
+
We communicated with Emily Dinan, one of the authors of the original paper [5], who clarified what model, pre-trained on Reddit conversations, was used in the original paper [5] and provided us with the command to download the model as well as the hyperparameter settings for training the models.
|
papers/MPM/MPM 2022/MPM 2022 Workshop/4WZdqAolwCa/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,19 @@
|
| 1 |
+
# Mixed Material Point Method for nearly incompressible polymeric solids
|
| 2 |
+
|
| 3 |
+
Ashkan Ali Madadi ${}^{\mathrm{a}}$ , Berkin Dortdivanlioglu ${}^{\mathrm{a},\mathrm{b}, * }$
|
| 4 |
+
|
| 5 |
+
${}^{a}$ Civil, Architectural and Environmental Engineering, The University of Texas at Austin, Austin,78712, TX, USA
|
| 6 |
+
|
| 7 |
+
${}^{b}$ Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin,78712, TX, USA
|
| 8 |
+
|
| 9 |
+
## Abstract
|
| 10 |
+
|
| 11 |
+
Exposed to external loadings, polymeric materials, e.g., biological tissues, hydrogels, and elastomers, may undergo extreme, nearly incompressible, self-contacting deformations, which pose significant challenges for numerical modeling employing mesh-based techniques such as the finite element method (FEM) due to i) extreme distortions in the deformed geometry, ii) accuracy issues stemming from volumetric locking effects, and iii) vastly increased computational cost due to complex contact search. As an alternative to FEM, the material point method (MPM) is a continuum-based meshless particle technique attracting considerable interest due to its robustness against extreme distortions and ability to capture contact at no additional cost. An effective technique to overcome locking effects is the two-field mixed formulation with displacement and pressure as independent fields instead of a displacement-based single-field formulation. However, mixed formulations suffer from numerical instabilities in the nearly incompressible limit due to the violation of the inf-sup or Ladyzhenskaya-Babuska-Brezzi (LBB) condition, leading to spurious nodal pressure solutions. The main objective of this paper is to extend the mixed formulations at large strains to the B-spline material point method using the two-scale relation of B-splines; we further develop a subdivision-stabilized mixed MPM and obtain a stable, oscillation-free nodal pressure solution. We assess the stability and accuracy of the mixed MPM through several benchmark problems, including the Cook's membrane and the rigid sphere obstacle on a quasi-compressible hyperelastic substrate (with comparisons to FEM results). Additionally, we test the robustness of the proposed MPM by modeling the compression of a quasi-compressible hyperelastic sphere where the no-slip contact condition is assumed. The proposed methodology provides a robust computational foundation to study extreme deformations observed in practical soft matter applications.
|
| 12 |
+
|
| 13 |
+
Keywords: Material Point Method, Sub-Division Method, Stabilized u-p formulation, Isogeometric analysis, B-Spline based shape functions
|
| 14 |
+
|
| 15 |
+
---
|
| 16 |
+
|
| 17 |
+
*Corresponding author.
|
| 18 |
+
|
| 19 |
+
---
|
papers/MPM/MPM 2022/MPM 2022 Workshop/4WZdqAolwCa/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,15 @@
|
| 1 |
+
§ MIXED MATERIAL POINT METHOD FOR NEARLY INCOMPRESSIBLE POLYMERIC SOLIDS
|
| 2 |
+
|
| 3 |
+
Ashkan Ali Madadi ${}^{\mathrm{a}}$ , Berkin Dortdivanlioglu ${}^{\mathrm{a},\mathrm{b}, * }$
|
| 4 |
+
|
| 5 |
+
${}^{a}$ Civil, Architectural and Environmental Engineering, The University of Texas at Austin, Austin,78712, TX, USA
|
| 6 |
+
|
| 7 |
+
${}^{b}$ Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin,78712, TX, USA
|
| 8 |
+
|
| 9 |
+
§ ABSTRACT
|
| 10 |
+
|
| 11 |
+
Exposed to external loadings, polymeric materials, e.g., biological tissues, hydrogels, and elastomers, may undergo extreme, nearly incompressible, self-contacting deformations, which pose significant challenges for numerical modeling employing mesh-based techniques such as the finite element method (FEM) due to i) extreme distortions in the deformed geometry, ii) accuracy issues stemming from volumetric locking effects, and iii) vastly increased computational cost due to complex contact search. As an alternative to FEM, the material point method (MPM) is a continuum-based meshless particle technique attracting considerable interest due to its robustness against extreme distortions and ability to capture contact at no additional cost. An effective technique to overcome locking effects is the two-field mixed formulation with displacement and pressure as independent fields instead of a displacement-based single-field formulation. However, mixed formulations suffer from numerical instabilities in the nearly incompressible limit due to the violation of the inf-sup or Ladyzhenskaya-Babuska-Brezzi (LBB) condition, leading to spurious nodal pressure solutions. The main objective of this paper is to extend the mixed formulations at large strains to the B-spline material point method using the two-scale relation of B-splines; we further develop a subdivision-stabilized mixed MPM and obtain a stable, oscillation-free nodal pressure solution. We assess the stability and accuracy of the mixed MPM through several benchmark problems, including the Cook's membrane and the rigid sphere obstacle on a quasi-compressible hyperelastic substrate (with comparisons to FEM results). Additionally, we test the robustness of the proposed MPM by modeling the compression of a quasi-compressible hyperelastic sphere where the no-slip contact condition is assumed. The proposed methodology provides a robust computational foundation to study extreme deformations observed in practical soft matter applications.
|
| 12 |
+
|
| 13 |
+
Keywords: Material Point Method, Sub-Division Method, Stabilized u-p formulation, Isogeometric analysis, B-Spline based shape functions
|
| 14 |
+
|
| 15 |
+
*Corresponding author.
|
papers/MPM/MPM 2022/MPM 2022 Workshop/6opKSxYmlwl/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,15 @@
|
| 1 |
+
# Large Stress Calculations with Material Point Methods
|
| 2 |
+
|
| 3 |
+
Kyle A. Perez ${}^{1}$ , Kwitae Chong ${}^{2}$ , Paul L. Barclay ${}^{1}$ , Duan Z. Zhang ${}^{1}$ , and Jeremiah U. Brackbill ${}^{1}$
|
| 4 |
+
|
| 5 |
+
${}^{1}$ Los Alamos National Laboratory, Los Alamos, NM,87545, USA
|
| 6 |
+
|
| 7 |
+
${}^{2}$ Oak Ridge National Laboratory, Oak Ridge, TN 37830
|
| 8 |
+
|
| 9 |
+
A 1D bar with fixed ends and a constant nonzero initial stress is not a dynamic system; the velocity, stress, and density of the bar should remain at their initial values for all times. However, both MPM and DDMP have problems calculating such a system when the initial stress is large. This causes difficulties when modeling phenomena such as thermal expansion of a material with boundary constraints.
|
| 10 |
+
|
| 11 |
+
The root cause of this issue is numerical error in the calculation of the sum of the shape function gradients. This sum should be exactly zero; with numerical errors, the value is very small but never exactly zero. Given that a nodal force is calculated by the sum of the shape function gradients weighted by the particle stresses and volumes, the error in the nonzero sum of the shape function gradients is multiplied by the initial stress, greatly amplifying the error in the nodal force and resulting in unexpected dynamics in a pre-stressed 1D bar.
|
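Our reading of this mechanism, as a sketch in standard 1D MPM notation (not the authors' exact formulation): the internal nodal force is

$$
f_i = -\sum_p V_p \, \sigma_p \, \frac{\partial N_i}{\partial x}(x_p),
\qquad
\sum_p V_p \, \frac{\partial N_i}{\partial x}(x_p) \;\approx\; \int_\Omega \frac{\partial N_i}{\partial x}\, dV = 0
$$

for an interior node, so a uniform pre-stress $\sigma_0$ should give $f_i = 0$. If quadrature leaves a small residual $\epsilon_i$ in the gradient sum, the spurious force becomes $f_i = -\sigma_0\,\epsilon_i$, growing linearly with the initial stress.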
| 12 |
+
|
| 13 |
+
It is possible to modify the nodal force calculation to include an auxiliary stress field in order to reduce the numerical error. Preliminary results suggest that this relatively easy fix is effective in improving the simulation stability of a pre-stressed 1D bar when using either MPM or DDMP.
|
| 14 |
+
|
| 15 |
+
Financial support is provided by the Exascale Computing Program (ECP) under the auspices of the United States Department of Energy.
|
papers/MPM/MPM 2022/MPM 2022 Workshop/6opKSxYmlwl/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,15 @@
|
| 1 |
+
§ LARGE STRESS CALCULATIONS WITH MATERIAL POINT METHODS
|
| 2 |
+
|
| 3 |
+
Kyle A. Perez ${}^{1}$ , Kwitae Chong ${}^{2}$ , Paul L. Barclay ${}^{1}$ , Duan Z. Zhang ${}^{1}$ , and Jeremiah U. Brackbill ${}^{1}$
|
| 4 |
+
|
| 5 |
+
${}^{1}$ Los Alamos National Laboratory, Los Alamos, NM,87545, USA
|
| 6 |
+
|
| 7 |
+
${}^{2}$ Oak Ridge National Laboratory, Oak Ridge, TN 37830
|
| 8 |
+
|
| 9 |
+
A 1D bar with fixed ends and a constant nonzero initial stress is not a dynamic system; the velocity, stress, and density of the bar should remain at their initial values for all times. However, both MPM and DDMP have problems calculating such a system when the initial stress is large. This causes difficulties when modeling phenomena such as thermal expansion of a material with boundary constraints.
|
| 10 |
+
|
| 11 |
+
The root cause of this issue is numerical error in the calculation of the sum of the shape function gradients. This sum should be exactly zero; with numerical errors, the value is very small but never exactly zero. Given that a nodal force is calculated by the sum of the shape function gradients weighted by the particle stresses and volumes, the error in the nonzero sum of the shape function gradients is multiplied by the initial stress, greatly amplifying the error in the nodal force and resulting in unexpected dynamics in a pre-stressed 1D bar.
|
| 12 |
+
|
| 13 |
+
It is possible to modify the nodal force calculation to include an auxiliary stress field in order to reduce the numerical error. Preliminary results suggest that this relatively easy fix is effective in improving the simulation stability of a pre-stressed 1D bar when using either MPM or DDMP.
|
| 14 |
+
|
| 15 |
+
Financial support is provided by the Exascale Computing Program (ECP) under the auspices of the United States Department of Energy.
|
papers/MPM/MPM 2022/MPM 2022 Workshop/IDCAmWl27e/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,11 @@
|
| 1 |
+
Reducing Quadrature Errors with CutMesh MPM
|
| 2 |
+
|
| 3 |
+
Joel Given, University of California, Berkeley, joelgiven@berkeley.edu
|
| 4 |
+
|
| 5 |
+
Shyamini Kularathna, University of California, Berkeley, kshyamini@berkeley.edu
|
| 6 |
+
|
| 7 |
+
Kenichi Soga, University of California, Berkeley, soga@berkeley.edu
|
| 8 |
+
|
| 9 |
+
The Material Point Method (MPM) is a powerful numerical tool for modeling large deformation problems in solid mechanics. It is well understood that as material points move freely throughout the computational domain, the accuracy of the simulated stresses and strains degrades as a result of increasing quadrature error. Additionally, when moving between adjacent cells, material points exhibit cell crossing error. Cell crossing error is responsible for large numerical perturbations that further reduce the simulation's overall accuracy and stability. Others have shown that cell crossing error is a consequence of combining the following: (i) inaccurate volume integration, (ii) point-wise representation of the material point density, and (iii) basis functions with discontinuous gradients at cell boundaries. There are many existing variations of the MPM which address the point-wise representation of density or the discontinuous gradients at cell boundaries. This work employs a novel approach to address inaccurate volume integration. The proposed method reduces quadrature error by recomputing optimal integration weights throughout the simulation as material points move to new positions. By addressing the inaccurate volume integration, cell crossing errors are considerably reduced. Numerical improvements are illustrated by comparing the displacement, strain, and stress errors for a uniaxial elastic bar problem between this newly proposed formulation and the standard MPM.
|
| 10 |
+
|
| 11 |
+
Wilson, P., Wüchner, R., & Fernando, D. (2021). Distillation of the material point method cell crossing error leading to a novel quadrature-based C0 remedy. International Journal for Numerical Methods in Engineering, 122(6), 1513-1537.
|
papers/MPM/MPM 2022/MPM 2022 Workshop/IDCAmWl27e/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,11 @@
|
| 1 |
+
Reducing Quadrature Errors with CutMesh MPM
|
| 2 |
+
|
| 3 |
+
Joel Given, University of California, Berkeley, joelgiven@berkeley.edu
|
| 4 |
+
|
| 5 |
+
Shyamini Kularathna, University of California, Berkeley, kshyamini@berkeley.edu
|
| 6 |
+
|
| 7 |
+
Kenichi Soga, University of California, Berkeley, soga@berkeley.edu
|
| 8 |
+
|
| 9 |
+
The Material Point Method (MPM) is a powerful numerical tool for modeling large deformation problems in solid mechanics. It is well understood that as material points move freely throughout the computational domain, the accuracy of the simulated stresses and strains degrades as a result of increasing quadrature error. Additionally, when moving between adjacent cells, material points exhibit cell crossing error. Cell crossing error is responsible for large numerical perturbations that further reduce the simulation's overall accuracy and stability. Others have shown that cell crossing error is a consequence of combining the following: (i) inaccurate volume integration, (ii) point-wise representation of the material point density, and (iii) basis functions with discontinuous gradients at cell boundaries. There are many existing variations of the MPM which address the point-wise representation of density or the discontinuous gradients at cell boundaries. This work employs a novel approach to address inaccurate volume integration. The proposed method reduces quadrature error by recomputing optimal integration weights throughout the simulation as material points move to new positions. By addressing the inaccurate volume integration, cell crossing errors are considerably reduced. Numerical improvements are illustrated by comparing the displacement, strain, and stress errors for a uniaxial elastic bar problem between this newly proposed formulation and the standard MPM.
|
| 10 |
+
|
| 11 |
+
Wilson, P., Wüchner, R., & Fernando, D. (2021). Distillation of the material point method cell crossing error leading to a novel quadrature-based C0 remedy. International Journal for Numerical Methods in Engineering, 122(6), 1513-1537.
|
papers/MPM/MPM 2022/MPM 2022 Workshop/OmE6pREhjYC/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,7 @@
|
| 1 |
+
# Development of a stable two-phase contact MPM algorithm for saturated soil-structure interaction using a semi-implicit solver with the projection method
|
| 2 |
+
|
| 3 |
+
Chihun Sung, Shyamini Kularathna, Krishna Kumar
|
| 4 |
+
|
| 5 |
+
## Abstract
|
| 6 |
+
|
| 7 |
+
Numerical simulation of soil-structure interaction problems with the soil as a two-phase material has been a challenging topic in geotechnical engineering due to the differences in material stiffnesses, the interaction between multiple phases, the high bulk modulus of the pore fluid, and low permeability. The conventional explicit time integration scheme is conditionally stable, thus requiring a limited time step size, and causes pressure oscillations in rapid loading conditions. As a solution, we develop a stable two-phase contact algorithm in the framework of the Material Point Method (MPM) to study soil-structure interaction problems. We model the soil as a fully saturated porous medium, and the pore fluid is modeled as incompressible. The algorithm has three main advances over the conventional MPM: (1) We solve the coupled formulations with Chorin's projection method to reduce the numerical oscillation. (2) By handling a diffusion term implicitly, the proposed contact algorithm allows a larger stable time step size which is independent of the bulk modulus and permeability of the pore fluid. (3) To deal with single-phase materials with relatively high stiffness compared to the soil, we introduce a rigid algorithm in the model. We present detailed formulations and the time increment process of the two-phase contact MPM algorithm. We compare the proposed algorithm with the FEM and the explicit MPM in simulating coupled hydro-mechanical problems to evaluate the accuracy and performance of the algorithm.
|
papers/MPM/MPM 2022/MPM 2022 Workshop/OmE6pREhjYC/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,7 @@
|
| 1 |
+
§ DEVELOPMENT OF A STABLE TWO-PHASE CONTACT MPM ALGORITHM FOR SATURATED SOIL-STRUCTURE INTERACTION USING A SEMI-IMPLICIT SOLVER WITH THE PROJECTION METHOD
|
| 2 |
+
|
| 3 |
+
Chihun Sung, Shyamini Kularathna, Krishna Kumar
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
Numerical simulation of soil-structure interaction problems with the soil as a two-phase material has been a challenging topic in geotechnical engineering due to the differences in material stiffnesses, the interaction between multiple phases, the high bulk modulus of the pore fluid, and low permeability. The conventional explicit time integration scheme is conditionally stable, thus requiring a limited time step size, and causes pressure oscillations in rapid loading conditions. As a solution, we develop a stable two-phase contact algorithm in the framework of the Material Point Method (MPM) to study soil-structure interaction problems. We model the soil as a fully saturated porous medium, and the pore fluid is modeled as incompressible. The algorithm has three main advances over the conventional MPM: (1) We solve the coupled formulations with Chorin's projection method to reduce the numerical oscillation. (2) By handling a diffusion term implicitly, the proposed contact algorithm allows a larger stable time step size which is independent of the bulk modulus and permeability of the pore fluid. (3) To deal with single-phase materials with relatively high stiffness compared to the soil, we introduce a rigid algorithm in the model. We present detailed formulations and the time increment process of the two-phase contact MPM algorithm. We compare the proposed algorithm with the FEM and the explicit MPM in simulating coupled hydro-mechanical problems to evaluate the accuracy and performance of the algorithm.
|
papers/MPM/MPM 2022/MPM 2022 Workshop/WmhHlPmRq_J/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,19 @@
|
# Shear band evolution and post-failure simulation using an improved extended material point method

Yong Liang*, Bodhinanda Chandra and Kenichi Soga

Department of Civil and Environmental Engineering, University of California, Berkeley, CA 94720, United States

Key Words: Extended Material Point Method, Localization Detection, Frictional Self-Contact, Shear Band, Large Deformation

An improved XMPM formulation is proposed to simulate the evolution of shear bands and post-failure behavior involving large deformations (e.g., landslides). The XMPM introduces a localization search algorithm based on bifurcation theory to predict the initiation and propagation of shear bands. To deal with the dynamic frictional contact mechanism between the generated shear planes, a self-contact formulation is integrated into the XMPM framework. In addition, a hybrid implicit-explicit description of the discontinuity is adopted, employing the level-set method together with a point-cloud approach to ensure the smoothness of the discontinuity surface during localization propagation. Several numerical examples are investigated to assess the accuracy and demonstrate the capability of the proposed XMPM in simulating shear band evolution for different engineering problems in both 2D and 3D. The proposed formulation exhibits little sensitivity to mesh refinement in predicting the shear-band path. Furthermore, a slope failure simulation is presented to demonstrate the accuracy and capability of the method on real-scale problems in both 2D and 3D. The 3D simulation further emphasizes the potential of the XMPM for modeling complex problems that experience localization leading to large-deformation post-failure behavior.
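
As a point of reference for the hybrid discontinuity description, a level-set representation of a shear plane typically takes the following standard form (a generic sketch, not necessarily the exact variant used here): the surface is the zero isocontour of a signed-distance function $\phi$,

$$
\Gamma(t) = \{\mathbf{x} : \phi(\mathbf{x}, t) = 0\}, \qquad \frac{\partial \phi}{\partial t} + \mathbf{v}\cdot\nabla\phi = 0, \qquad |\nabla\phi| = 1,
$$

so that the sign of $\phi$ indicates on which side of the shear plane a material point lies, while the explicit point cloud samples the surface itself.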
*E-mail: yliang_sn@berkeley.edu
papers/MPM/MPM 2022/MPM 2022 Workshop/WmhHlPmRq_J/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,11 @@
§ SHEAR BAND EVOLUTION AND POST-FAILURE SIMULATION USING AN IMPROVED EXTENDED MATERIAL POINT METHOD

Yong Liang*, Bodhinanda Chandra and Kenichi Soga

Department of Civil and Environmental Engineering, University of California, Berkeley, CA 94720, United States

Key Words: Extended Material Point Method, Localization Detection, Frictional Self-Contact, Shear Band, Large Deformation

An improved XMPM formulation is proposed to simulate the evolution of shear bands and post-failure behavior involving large deformations (e.g., landslides). The XMPM introduces a localization search algorithm based on bifurcation theory to predict the initiation and propagation of shear bands. To deal with the dynamic frictional contact mechanism between the generated shear planes, a self-contact formulation is integrated into the XMPM framework. In addition, a hybrid implicit-explicit description of the discontinuity is adopted, employing the level-set method together with a point-cloud approach to ensure the smoothness of the discontinuity surface during localization propagation. Several numerical examples are investigated to assess the accuracy and demonstrate the capability of the proposed XMPM in simulating shear band evolution for different engineering problems in both 2D and 3D. The proposed formulation exhibits little sensitivity to mesh refinement in predicting the shear-band path. Furthermore, a slope failure simulation is presented to demonstrate the accuracy and capability of the method on real-scale problems in both 2D and 3D. The 3D simulation further emphasizes the potential of the XMPM for modeling complex problems that experience localization leading to large-deformation post-failure behavior.

*E-mail: yliang_sn@berkeley.edu
papers/MPM/MPM 2022/MPM 2022 Workshop/iDoRmiH7OPF/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,11 @@
# Tsunami-driven debris effects on structures using a multi-GPU MPM tool

Pedro Arduino (University of Washington, parduino@uw.edu), Justin Bonus (University of Washington, bonusj@uw.edu), Michael R. Motley (University of Washington, mrmotley@uw.edu), Marc O. Eberhard (University of Washington, eberhard@uw.edu)

Tsunamis and storm surges pose a significant threat to coastal communities and infrastructure around the world. The damage from such events is often not only the result of the flowing water itself, but also of transported fields of debris. Such debris fields are composed of objects the inundation flow mobilizes while moving through a stricken region, ranging from vehicles to watercraft to houses. Our work addresses this by experimentally and numerically studying flow-driven ensembles of debris impacting structures, quantifying the impact forces and damming effects of these highly nonlinear, chaotic systems. Among many possible numerical methods, the Material Point Method (MPM) emerges as the most effective for modeling the dynamic interaction of multi-phase, multi-material deformable bodies. However, computational and memory costs limit the practical levels of resolution the method can achieve at scale. Graphics Processing Unit (GPU) based MPM implementations offer a solution.

GPUs accelerate MPM programs on the order of 100x, but limited memory and bandwidth have historically restricted simulation size. Recent hardware and software advances now permit fast multi-GPU MPM for engineering projects with many material points (100,000,000+) and grid cells (1,000,000,000+). Further, hardware trends suggest rising GPU viability, with a doubling of (i) video memory, (ii) bandwidth, and (iii) computational cores, as well as (iv) increased accessibility, expected in the next four years.

We present our expansion of an optimized, open-source multi-GPU MPM code (Claymore, https://github.com/penn-graphics-research/claymore), bringing it from computer graphics to engineering, where certain quantities (e.g., stress, strain, state variables, forces) must be held to high standards. We explore the pros and cons of innovative MPM modifications proposed mainly by computer graphics researchers (e.g., APIC, MLS-MPM, ASFLIP), which may improve conservation of angular momentum, halve simulation time, avoid shape-function gradient calculations, and reduce sticky contact in MPM with extraordinary simplicity. Such methods range from rigorous innovations to unstable shortcuts, the latter of which may still offer value if handled carefully. We provide an engineering review of computer graphics methods and present validation cases so others may judge their applicability to their own projects.

This multi-GPU MPM tool is used to model hundreds of real-world wave flume experiments, performed in 12 m (UW WASIRF) and 120 m (OSU LWF) long facilities. Stochastic debris-field transport, jammed debris formations, and precise loadings are captured and extrapolated for probabilistic structural design against tsunamis. Validation and verification examples are included for the tool.
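
For readers unfamiliar with the transfer schemes named above, APIC and MLS-MPM are commonly built on quadratic B-spline weights between particles and grid nodes. The following is a minimal 1D sketch of that standard kernel (illustrative only; it is not taken from the Claymore code base):

```python
import numpy as np

def quadratic_bspline_weights(xp, h):
    """Quadratic B-spline weights of a 1D particle at position xp on a
    grid with spacing h; returns the leftmost support node index and
    the weights of the three surrounding nodes."""
    base = int(np.floor(xp / h - 0.5))   # leftmost of the 3 support nodes
    fx = xp / h - base                   # fractional offset, in [0.5, 1.5]
    w = np.array([
        0.5 * (1.5 - fx) ** 2,           # weight of node `base`
        0.75 - (fx - 1.0) ** 2,          # weight of node `base + 1`
        0.5 * (fx - 0.5) ** 2,           # weight of node `base + 2`
    ])
    return base, w

base, w = quadratic_bspline_weights(xp=0.33, h=0.1)
assert abs(w.sum() - 1.0) < 1e-12        # partition of unity
```

MLS-MPM's reported savings in shape-function gradient calculations come from reusing the affine (APIC-style) velocity matrix built on this kernel in place of explicit weight gradients.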
papers/MPM/MPM 2022/MPM 2022 Workshop/iDoRmiH7OPF/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,11 @@
§ TSUNAMI-DRIVEN DEBRIS EFFECTS ON STRUCTURES USING A MULTI-GPU MPM TOOL

Pedro Arduino (University of Washington, parduino@uw.edu), Justin Bonus (University of Washington, bonusj@uw.edu), Michael R. Motley (University of Washington, mrmotley@uw.edu), Marc O. Eberhard (University of Washington, eberhard@uw.edu)

Tsunamis and storm surges pose a significant threat to coastal communities and infrastructure around the world. The damage from such events is often not only the result of the flowing water itself, but also of transported fields of debris. Such debris fields are composed of objects the inundation flow mobilizes while moving through a stricken region, ranging from vehicles to watercraft to houses. Our work addresses this by experimentally and numerically studying flow-driven ensembles of debris impacting structures, quantifying the impact forces and damming effects of these highly nonlinear, chaotic systems. Among many possible numerical methods, the Material Point Method (MPM) emerges as the most effective for modeling the dynamic interaction of multi-phase, multi-material deformable bodies. However, computational and memory costs limit the practical levels of resolution the method can achieve at scale. Graphics Processing Unit (GPU) based MPM implementations offer a solution.

GPUs accelerate MPM programs on the order of 100x, but limited memory and bandwidth have historically restricted simulation size. Recent hardware and software advances now permit fast multi-GPU MPM for engineering projects with many material points (100,000,000+) and grid cells (1,000,000,000+). Further, hardware trends suggest rising GPU viability, with a doubling of (i) video memory, (ii) bandwidth, and (iii) computational cores, as well as (iv) increased accessibility, expected in the next four years.

We present our expansion of an optimized, open-source multi-GPU MPM code (Claymore, https://github.com/penn-graphics-research/claymore), bringing it from computer graphics to engineering, where certain quantities (e.g., stress, strain, state variables, forces) must be held to high standards. We explore the pros and cons of innovative MPM modifications proposed mainly by computer graphics researchers (e.g., APIC, MLS-MPM, ASFLIP), which may improve conservation of angular momentum, halve simulation time, avoid shape-function gradient calculations, and reduce sticky contact in MPM with extraordinary simplicity. Such methods range from rigorous innovations to unstable shortcuts, the latter of which may still offer value if handled carefully. We provide an engineering review of computer graphics methods and present validation cases so others may judge their applicability to their own projects.

This multi-GPU MPM tool is used to model hundreds of real-world wave flume experiments, performed in 12 m (UW WASIRF) and 120 m (OSU LWF) long facilities. Stochastic debris-field transport, jammed debris formations, and precise loadings are captured and extrapolated for probabilistic structural design against tsunamis. Validation and verification examples are included for the tool.
papers/MPM/MPM 2022/MPM 2022 Workshop/mBUv4nLTYf/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,5 @@
Surface Tension and Thermodynamic Effects with the Material Point Method

## David Hyde, Vanderbilt University (david.hyde.1@vanderbilt.edu)

Abstract: We will present the results of several recent works on incorporating surface tension and thermodynamic effects into the material point method. On both fronts, a key innovation is a novel boundary quadrature method for MPM surfaces. We will explain how these methods can be made conservative (e.g., for momentum and heat), and we will demonstrate a number of interesting examples from both computational physics and computer graphics. Our methods can simulate a wide range of phenomena, from thermocapillary effects like Marangoni convection, to high-surface-tension fluids like liquid metals, to thin-film effects similar to tears of wine, that are typically quite difficult to achieve with other numerical methods. We will conclude with some thoughts on generalizing these ideas to other MPM simulation problems.
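
As background for the surface tension examples, the interfacial physics involved can be summarized by two standard relations (stated here generically, not as the talk's specific discretization): the Young-Laplace pressure jump across an interface with surface tension $\sigma$ and curvature $\kappa$, and the tangential Marangoni traction created when $\sigma$ varies along the surface (e.g., with temperature):

$$
[p] = \sigma\,\kappa, \qquad \mathbf{t}_{\mathrm{Marangoni}} = \nabla_{s}\,\sigma,
$$

where $\nabla_{s}$ denotes the surface gradient. High-$\sigma$ liquids such as liquid metals make the first term stiff, while thermocapillary flows like Marangoni convection and tears of wine are driven by the second.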
papers/MPM/MPM 2022/MPM 2022 Workshop/mBUv4nLTYf/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,5 @@
Surface Tension and Thermodynamic Effects with the Material Point Method

§ DAVID HYDE, VANDERBILT UNIVERSITY (DAVID.HYDE.1@VANDERBILT.EDU)

Abstract: We will present the results of several recent works on incorporating surface tension and thermodynamic effects into the material point method. On both fronts, a key innovation is a novel boundary quadrature method for MPM surfaces. We will explain how these methods can be made conservative (e.g., for momentum and heat), and we will demonstrate a number of interesting examples from both computational physics and computer graphics. Our methods can simulate a wide range of phenomena, from thermocapillary effects like Marangoni convection, to high-surface-tension fluids like liquid metals, to thin-film effects similar to tears of wine, that are typically quite difficult to achieve with other numerical methods. We will conclude with some thoughts on generalizing these ideas to other MPM simulation problems.
papers/MPM/MPM 2022/MPM 2022 Workshop/rL11psvzvO/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,37 @@
# Time Stepping with Space and Time Errors and Stability of the Material Point Method

Martin Berzins*

Scientific Computing and Imaging Institute

University of Utah

Salt Lake City, UT 84112 USA

e-mail: mb@sci.utah.edu

## Abstract

The challenge of choosing the best time step for the Material Point Method (MPM) is often addressed by using a simple stability criterion, such as one based on the speed of sound. While in many instances this works well, it is important to understand how it relates to the overall errors present in the method. This is particularly true as the spatial error appears to dominate the temporal one, which makes the use of high-order time stepping methods challenging [3].

Recently there have been several advances in understanding the stability of MPM, ranging from nonlinear stability analysis [1] to von Neumann-type approaches [3]. While it has long been observed that the spatial errors dominate [4], recent work has made the forms of the different MPM errors more precise [5].

Together, these advances make it possible to understand how the different errors and the stability analysis are connected. At the same time, this requires simple computable estimates of the different errors in the material point method.

Local estimation of the temporal errors is relatively straightforward, but these errors may not be the most significant ones.

Of the other errors, perhaps the most significant are those in the mapping from particles to nodes to calculate forces, and in the differentiation process needed to calculate spatial derivatives at particles. Using simple estimates of these errors, an attempt will be made to reconcile the errors introduced with the stability criteria used. A number of simple computational experiments will be used to illustrate the theoretical results.
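As a concrete illustration of the speed-of-sound criterion discussed above (a textbook CFL-style bound, not the refined formulas of [2] or [6]), an explicit MPM step is typically limited by

$$
\Delta t \le C\,\frac{h}{c + |v|_{\max}}, \qquad c = \sqrt{E/\rho},
$$

where $h$ is the grid spacing, $C \le 1$ a safety factor, and $c$ the elastic wave speed. For example, with $E = 10$ MPa and $\rho = 2000$ kg/m³, $c \approx 70.7$ m/s, so on a grid with $h = 0.1$ m and slow material motion the time step is limited to roughly 1.4 ms; the question raised here is how the error incurred at such a step size splits between spatial and temporal contributions.
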
## REFERENCES

[1] M. Berzins. Nonlinear stability and time step selection in the material point method. Computational Particle Mechanics, January 2018.

[2] Ruichen Ni and Xiong Zhang. A precise critical time step formula for the explicit material point method. International Journal for Numerical Methods in Engineering, 2020, Vol. 121, Issue 22.

[3] M. Berzins. Symplectic time integration methods for the material point method: experiments, analysis and order reduction. In WCCM-ECCOMAS 2020 Virtual Conference, January 2021.

[4] M. Steffen, R.M. Kirby, and M. Berzins. Decoupling and balancing of space and time errors in the material point method (MPM). International Journal for Numerical Methods in Engineering, Vol. 82, No. 10, pp. 1207-1243, 2010.

[5] M. Berzins. Energy conservation and accuracy of some MPM methods. Submitted to Computational Particle Mechanics, 2021.

[6] Y. Sun, T. Shinar, and C. Schroeder. Effective time step restrictions for explicit MPM simulation. ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA), 2020.
papers/MPM/MPM 2022/MPM 2022 Workshop/rL11psvzvO/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,23 @@
§ TIME STEPPING WITH SPACE AND TIME ERRORS AND STABILITY OF THE MATERIAL POINT METHOD

Martin Berzins*

Scientific Computing and Imaging Institute

University of Utah

Salt Lake City, UT 84112 USA

e-mail: mb@sci.utah.edu

§ ABSTRACT

The challenge of choosing the best time step for the Material Point Method (MPM) is often addressed by using a simple stability criterion, such as one based on the speed of sound. While in many instances this works well, it is important to understand how it relates to the overall errors present in the method. This is particularly true as the spatial error appears to dominate the temporal one, which makes the use of high-order time stepping methods challenging [3].

Recently there have been several advances in understanding the stability of MPM, ranging from nonlinear stability analysis [1] to von Neumann-type approaches [3]. While it has long been observed that the spatial errors dominate [4], recent work has made the forms of the different MPM errors more precise [5].

Together, these advances make it possible to understand how the different errors and the stability analysis are connected. At the same time, this requires simple computable estimates of the different errors in the material point method.

Local estimation of the temporal errors is relatively straightforward, but these errors may not be the most significant ones.

Of the other errors, perhaps the most significant are those in the mapping from particles to nodes to calculate forces, and in the differentiation process needed to calculate spatial derivatives at particles. Using simple estimates of these errors, an attempt will be made to reconcile the errors introduced with the stability criteria used. A number of simple computational experiments will be used to illustrate the theoretical results.
papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/BSZPNEUHiS2/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,193 @@
# Synchronous transmissions on Bluetooth 5 and IEEE 802.15.4 - An experimental study

Double-blind submission

https://fast-crag-58943.herokuapp.com/

## ABSTRACT

Synchronous transmissions (ST) is a wireless communication technique that has been shown to be particularly efficient in low-power multi-hop networks. Since 2011, research on ST has mainly focused on the physical layer defined by the IEEE 802.15.4 standard. Nowadays, another pervasive technology is embedded by default in almost all connected objects: Bluetooth. Thus, researchers have recently started to investigate whether the benefits of ST also apply to Bluetooth.

This paper presents the results of an experimental study of ST using the popular and low-cost nRF52840 Dongle, which supports all modes of the Bluetooth 5 standard as well as IEEE 802.15.4. We measure the packet reception rate for different parameters known to affect ST, for all physical layers supported by the platform. We use a data exploration application that allows us to extract useful information from the measurements and uncover new insights. We validate that ST is viable on Bluetooth, as previously shown. Moreover, we highlight that successful ST on Bluetooth cannot be explained by "constructive interference" or the capture effect alone: multiple effects interplay in ways that are not yet fully understood.

Data Availability Statement. The authors commit to keeping all data presented in this paper publicly available for at least 3 years. The data collection firmware and the data exploration app will be made available on GitHub (omitted for double-blind reviewing).

## 1 INTRODUCTION

Synchronous transmissions (ST), also referred to as concurrent transmissions, is a wireless communication technique that allows multiple nodes to transmit a message at the "same time." A destination node may successfully receive (one of) these synchronous transmissions thanks to two artifacts of the physical layer: constructive interference and the capture effect. In a nutshell, ST is likely to be successful if the incoming messages arrive at the receiving node's antenna within a small time offset (in the range of a few µs) and/or with a sufficiently large difference in signal strength (a few dB). In 2011, Glossy [7] was the first protocol using ST for fast and reliable communication over a low-power multi-hop wireless network. This triggered a decade of research, mainly focused on the IEEE 802.15.4 standard. Refer to [12] for more details on ST and the associated literature.

In 2019, Al Nahas et al. showed that ST can also be used successfully on Bluetooth's physical layer [1]. The authors presented a first characterization of the conditions under which ST can be received and presented BlueFlood, a Glossy-like communication protocol where nodes efficiently exchange Bluetooth-compatible advertisement packets.

While these first results were promising, there are still some gaps in our understanding of how ST works on Bluetooth. In particular, while we have an intuition of the parameters that affect the success of ST, previous results suggest cross-dependencies between these effects [1], which have yet to be fully characterized. We therefore ask ourselves the following questions:

- Can we reproduce the results from [1] and confirm the conditions under which ST can be successful on Bluetooth's physical layers?
- Can more data help shed light on ST's underlying mechanisms?
- Does ST work on cheap commercial-off-the-shelf platforms?

Figure 1: Experimental setup of the nRF52840 Dongles in an anechoic chamber. Left corner: receiver. Right corner: transmitters.

We attempt to answer these questions with an experimental campaign using the nRF52840 Dongle [9], which is capable of both Bluetooth 5 and IEEE 802.15.4. We focus on the link layer and measure the packet reception rate for different parameters known to affect ST, for all physical layers supported by the platform. We perform all experiments with two synchronous transmitters set up in an anechoic chamber to avoid external interference (Figure 1).

## 2 BACKGROUND

This study aims to investigate and compare the conditions for successful synchronous transmissions (ST) for the different physical layers (PHY) supported by the nRF52840 platform. Before presenting our results, this section briefly presents the platform (Sec. 2.1), summarizes the relevant properties of the supported PHY (Sec. 2.2), and lists the different parameters known to affect ST (Sec. 2.3).

### 2.1 The nRF52840 platform

We perform all our experiments using the Nordic Semiconductor nRF52840 Dongle (also known as PCA10059) [9]. The dongle embeds a PCB antenna, a few peripherals, and the nRF52840 system-on-chip [8], including an ARM Cortex-M4, 256 kB of RAM, and 1 MB of flash. The dongle measures about 1.5 cm × 4.6 cm and costs around $10 as of today: it is a cheap and small commercial-off-the-shelf embedded platform, suitable for all sorts of deployments.

Table 1: Summary of relevant PHY properties for ST.

| PHY | Modulation | Bitrate | Chip/symbol rate | Chip/symbol period | FEC |
|---|---|---|---|---|---|
| IEEE 802.15.4 | O-QPSK | 250 kbps | 2 Mchips/s | 0.5 µs | 1:8 |
| Bluetooth 5 | GFSK | 2 Mbps | 2 Msym/s | 0.5 µs | - |
| Bluetooth 5 | GFSK | 1 Mbps | 1 Msym/s | 1 µs | - |
| Bluetooth 5 | GFSK | 500 kbps | 1 Msym/s | 1 µs | 1:2 |
| Bluetooth 5 | GFSK | 125 kbps | 1 Msym/s | 1 µs | 1:8 |

The nRF52840 SoC features hardware support that is particularly interesting for ST. The programmable peripheral interconnect (PPI) allows peripherals to communicate with each other independently of the CPU. The PPI signals are synchronized to a 16 MHz clock and thus have a predictable delay of (at most) 62.5 ns. Certain peripherals also provide so-called shortcuts, which are connections between events and tasks within a peripheral. For example, one can use shortcuts to automatically stop or clear a timer when it has reached a user-defined value, then trigger a radio transmission using the PPI; all without involvement of the CPU, and thus with predictable timing.

Finally, the nRF52840 supports multiple radio physical layers, which are described further in the following section.

### 2.2 Physical layers

The nRF52840 platform supports five different PHY: IEEE 802.15.4 and the four modes specified in Bluetooth 5 [11], which all operate in the 2.4 GHz ISM band. We briefly summarize the parameters of these PHY that are relevant for our study of ST (Table 1).

The IEEE 802.15.4 standard uses O-QPSK modulation with DSSS forward error correction (FEC), encoding 4 bits of data into a symbol made of 32 chips (a chip is an analog 0 or 1). The chip rate is 2 Mchips/s, which yields a chip period of 0.5 µs and a data bitrate of 250 kbps.

The Bluetooth 5 standard describes 4 modes, all using GFSK modulation. They are best identified by their bitrates of 2 Mbps, 1 Mbps, 500 kbps, and 125 kbps, respectively. These modes use different symbol rates; the slowest two use convolutional coding for FEC, at different rates (see, e.g., [2] for more details).

### 2.3 Parameters affecting the reception of ST

ST refers to a situation where multiple transmitters in range of the same receiver simultaneously send packets. ST is considered successful if the receiver correctly decodes one of the transmitted packets. The PHY supported by the nRF52840 (Sec. 2.2) are based on phase and frequency modulation, for which the following parameters are known to affect the success of ST.

Power delta. By design of RF receivers based on frequency modulation, if one signal is sufficiently stronger than the other interfering signals and still arrives during the preamble of the previous transmissions, the receiver locks onto the stronger signal and decodes the corresponding packet with high probability. This is known as the capture effect and requires a power difference of 3 to 10 dB, depending on the PHY.
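To put these power deltas in perspective, the delta between two received signals of powers $P_1 > P_2$ is

$$ \Delta P_{\mathrm{dB}} = 10\,\log_{10}\!\left(\frac{P_1}{P_2}\right), $$

so the 3 dB threshold quoted above corresponds to the stronger signal carrying roughly twice the power of the weaker one, and 10 dB corresponds to ten times the power.
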
Figure 2: Schema of the experimental setup (pictured in Fig. 1).

Packet content. In an ideal scenario, the signals from different transmitters arrive at the receiver with the same phase offset. If the packets are identical, this can lead to constructive interference, resulting in a strictly stronger signal.

Time delta. Even if the packets are the same, the received signals invariably have an offset in the time domain. If this offset is larger than the symbol period τ, the received symbols superpose randomly and reception fails (assuming no capture effect). However, if the time offset is small (τ or less), it has been shown experimentally that successful ST is likely. This superposition is called "constructive interference."

Coding. Some PHY use various coding mechanisms to improve the reliability of reception. These techniques are known to affect the reception of ST as well.

Carrier frequency offsets. Transmitters do not have exactly the same carrier frequency, which is particularly true for cheap commercial-off-the-shelf platforms. With two transmitters, the envelope of the received signal has a sinusoidal shape, which creates time windows with stronger and weaker signal. This is known as the beating effect.
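The beating effect follows directly from the superposition of two carriers offset by $\Delta f$ (a standard trigonometric identity, independent of this paper's setup):

$$
\cos(2\pi f_1 t) + \cos(2\pi f_2 t) = 2\,\cos(\pi \Delta f\, t)\,\cos(2\pi \bar{f}\, t), \qquad \Delta f = f_1 - f_2,\; \bar{f} = \tfrac{f_1 + f_2}{2},
$$

so the combined signal is a carrier at the average frequency whose envelope oscillates with period $1/\Delta f$; for instance, a 10 kHz carrier offset produces envelope nulls every 100 µs, alternating windows where reception is helped or hindered.
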
Environment. Naturally, interference from other radio sources affects ST. Moreover, reflections of the transmitted signals may also reach the receiver (multipath effects), which creates additional signals with other phases and may affect the reception of ST in peculiar ways.

Number of transmitters. With more transmitters, all the previous effects add up and mix in ways that are difficult to predict. In an ideal scenario, more transmitters could superimpose perfectly and produce a stronger signal; the reality is more complex and less predictable.

## 3 EXPERIMENT DESIGN

Our experimental campaign investigates the success rate of ST with two synchronous transmitters while varying four variables: the physical layer used, whether the transmitted packets have the same content or not, and the time and power deltas between the signals at the receiver. This section presents our experiment design, which is described in more detail in [3].

Setup. Our physical setup is illustrated in Fig. 2: two transmitters are placed at equal distance from the receiver, such that we can control the received signals' time delta through the time offset at the transmitters. We run the experiments in an anechoic chamber to minimize external interference and signal reflections (Fig. 1).

Runs. Experiments are performed in runs where multiple parameter settings are tested in sequence. For each setting, we perform 20 ST attempts: the receiver simply counts and logs the number of successful attempts. The runs are time-triggered and controlled by the trigger node on the transmitting side (Fig. 2). The sequence of settings is predefined such that nodes know how to set their radios correctly. Finally, the packet payload contains the current setting to guarantee that the receiver logs the correct data, even in case of packet losses.

Time delta setting. The time delta between transmitters is controlled by the trigger node; it raises GPIO pins connected to the transmitters with a precise offset, which triggers the transmissions. We analyzed the jitter of the resulting time delta between transmissions, which yields an accuracy of 124 ns (i.e., ≈ 2 ticks) [3].

Power delta setting. While the radio can be configured with different transmit power settings, it is not clear (i) how precise these settings are, and (ii) what the actual signal strength at the receiver is. Therefore, we chose to start each run with a series of RSSI measurements. For each setting, each transmitter sends 20 packets, for which the receiver measures and logs the corresponding RSSI value. These measurements are used in post-processing to estimate the actual received power difference between the transmitters in each of the different settings. According to the nRF52840 datasheet, the RSSI measurements have an accuracy of ±2 dB.

Parameters. We chose to test the following settings.

- We test the five supported physical layers (see Table 1).
- The time deltas set by the trigger node are ±{0-15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100, 120} ticks.
- One transmitter sets its radio to 8 dBm while the other cycles through -8, -4, 0, 2, 3, 4, 5, 6, 7, and 8 dBm.
- Transmitters send packets of 38 bytes (Bluetooth-compatible advertisement packets). When the packet content should differ, 14 bytes (out of 38) are randomly generated.

We fix the packet content parameter in each run and cycle through the other settings; this yields 5900 combinations and 118,000 ST attempts per run, resulting in about 74 min of runtime.

Data collection. For each of the 5900 settings, 20 ST attempts are performed. The receiver logs the number of successes, which fits in one byte. In addition, the receiver must log the RSSI measurements: 10 power settings, 5 modes, 2 transmitters, and 20 attempts result in 2000 measurements of one byte each. Thus one run produces about 7.9 kB of data (5900 + 2000 bytes), which easily fits in the nRF52840 memory. Once the run is finished, the receiver is connected to a laptop; the RSSI and ST data are written over UART and finally stored as CSV files.

## 4 RESULTS

The main goal of our study is to (i) measure the tolerable time delay for "constructive interference" and the power delta threshold for the capture effect, and (ii) compare these to previous studies [2, 10]. This goal is fulfilled by Table 2. The following figures provide more details: Fig. 3 illustrates that the capture effect works in all modes, while Fig. 4 presents the data for the constructive interference case. Finally, Fig. 5 shows that there is a region where successful ST cannot be explained by constructive interference or the capture effect alone.

Plotted data. In Fig. 3 to 5, the dots (when displayed) show the packet reception rate (PRR) for one run, computed over 20 ST attempts (Sec. 3). Based on all the runs, we compute, for each setting, the median PRR (solid line) and its 75% confidence interval (shaded areas); i.e., given the collected data, there is a 75% probability that the true median PRR lies within the confidence interval. We use TriScale [5] to compute the confidence intervals.
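For readers who want to reproduce this kind of summary without the full TriScale pipeline, a distribution-free confidence interval for a median can be built from order statistics using the Binomial(n, 1/2) distribution. The sketch below is a generic illustration of that idea, not TriScale's exact procedure; `median_ci` and the example PRR values are ours:

```python
import numpy as np
from scipy import stats

def median_ci(samples, confidence=0.75):
    """Distribution-free confidence interval for the median:
    pick lower/upper order statistics whose ranks are quantiles
    of the Binomial(n, 1/2) distribution."""
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    alpha = 1.0 - confidence
    lo = int(stats.binom.ppf(alpha / 2, n, 0.5))          # lower rank (0-based)
    hi = min(int(stats.binom.ppf(1 - alpha / 2, n, 0.5)), n - 1)
    return np.median(x), x[lo], x[hi]

# PRR of one setting over several runs (values here are made up)
prr = [0.95, 1.00, 0.85, 0.90, 1.00, 0.80, 0.95, 1.00, 0.90, 0.85]
med, low, high = median_ci(prr, confidence=0.75)
print(f"median PRR = {med:.2f}, 75% CI = [{low:.2f}, {high:.2f}]")
```
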
Threshold definitions. In Table 2, we report the thresholds we observe for "constructive interference" and the capture effect. We consider that there is "constructive interference" when the confidence interval of the median PRR rises above 0. For the capture effect threshold, we take the minimal power delta such that the confidence interval of the median PRR lies above 75% for all time deltas.

Findings. Overall, we obtain three main findings:

(1) We observe that ST is viable on all physical layers included in the Bluetooth 5 standard [11]. This confirms the results presented in [1, 2]. Similarly, our observations for IEEE 802.15.4 match previous studies [6, 7, 10].

(2) We measure the packet reception rate (PRR) of ST on the nRF52840 Dongles; the capture effect thresholds and tolerable delays for "constructive interference" that we obtain (Table 2) differ slightly from previous studies [2, 10].

(3) For the Bluetooth modes, we confirm the evidence of a region where good PRR is observed but cannot be explained solely by the capture effect or constructive interference (Fig. 5).

The raw data, processing scripts, and data visualizations are all publicly available [4]. We created an online app to facilitate data exploration and reproduce all the plots presented in the paper [4] (the reference contains only the link to the app due to blind review).

## 5 LESSONS LEARNED

During this study, we learned a few lessons we deem worth sharing.

- A careful study of ST implies working with a lot of data with many dimensions. We found that an efficient data visualization framework is not just nice to have, it is necessary. We make our visualization tools available, hoping they can be useful for others too.

- Fig. 5 clearly shows that ST can be successful on the Bluetooth physical layers under conditions that can be attributed neither to the capture effect nor to constructive interference. Success depends on many other parameters (e.g., coding scheme, radio transceiver design); this is not yet fully understood.

- One important parameter is the carrier frequency offset between transmitters, which results in the well-known "beating effect" [6]. It has been shown experimentally that the beating frequency strongly affects the performance of ST [2]. It is not clear whether the imprecise and unstable oscillators of cheap commercial-off-the-shelf platforms like the nRF52840 Dongle are detrimental or beneficial with respect to the beating effect. In our study, we tried different devices but obtained only mildly different carrier frequency offsets, leading to minor effects.

- The timing of operations on the nRF52840 can be made very predictable thanks to built-in hardware support (e.g., PPI and shortcuts [8]). This significantly facilitates the design of communication protocols based on ST; in particular, achieving the sub-µs time synchronization accuracy necessary to benefit from "constructive interference" is now much easier than it used to be (e.g., for the initial Glossy [7]).

- The receptive field of the PCB antenna on the nRF52840 Dongle is far from perfect. During our experimental campaign, we sometimes observed significant differences between runs after touching the boards. Such effects cannot be observed when using coaxial cables, as in [2].

- As mentioned in Sec. 3, the precision of the time (±2 ticks) and power deltas (±2 dB) is on the order of the thresholds we try to identify: the results in Table 2 must be interpreted with care.

Table 2: Conditions for "constructive interference" and the capture effect reported in previous studies, compared with our measurements (in bold). We find that the Bluetooth modes can tolerate larger time delays than concluded in [2] while still benefiting from "constructive interference." Conversely, our thresholds for the capture effect are slightly more conservative. However, these differences are moderate and could simply be explained by the different definitions of these time and power thresholds (not clearly defined in [2]). τ denotes the symbol or chip period of the different PHY (see Table 1); consequently, τ = 0.5 µs for IEEE 802.15.4 and BLE 2 Mbit, and τ = 1 µs for the other Bluetooth modes.

Tolerable delay for "constructive interference" (0 dB power delta, same payload), in units of τ (µs in parentheses):

| Study | IEEE 802.15.4 | BLE 2 Mbit | BLE 1 Mbit | BLE 500 kbit | BLE 125 kbit |
|---|---|---|---|---|---|
| [10] | τ/2 (0.25) | τ/2 (0.25) | τ/2 (0.5) | - | - |
| [2] | - | τ/4 (0.13) | τ/4 (0.25) | τ/4 (0.25) | τ/4 (0.25) |
| **This study** | **τ/2 (0.25)** | **3τ/4 (0.375)** | **3τ/4 (0.75)** | **τ/4 (0.25)** | **τ/2 (0.5)** |

Power difference threshold for the capture effect (any time delta, any payload), in dB:

| Study | IEEE 802.15.4 | BLE 2 Mbit | BLE 1 Mbit | BLE 500 kbit | BLE 125 kbit |
|---|---|---|---|---|---|
| [10] | 3 | 10 | 10 | - | - |
| [2] | - | 8 | 8 | 8 | 2 |
| **This study** | **3** | **10** | **10** | **8** | **6** |

Figure 3: [Measurements with 0 µs time delta between the transmitters] ST is successful in all modes when the power delta at the receiver becomes sufficient. When the transmitters send the same packet (Fig. 3a), the median PRR is close to or larger than 50% even without any power delta. In these conditions, the "constructive interference" effect helps the reception of ST. When different packets are sent (Fig. 3b), the PRR requires a larger power delta (between 2 and 10 dB depending on the mode) to reach 100%. The minimum power delta beyond which ST is successful independently of the time delta (i.e., the capture effect threshold) is even larger (see Fig. 5). These results match those presented in [2].

## 6 CONCLUSIONS AND FUTURE WORK

With this study, we confirm that synchronous transmissions (ST) are viable on the physical layers of Bluetooth 5. We show that ST works reliably even on cheap commercial-off-the-shelf platforms like the nRF52840 [9]. The data we collected sheds more light on the expected performance of ST for various time and power deltas between the signals of two transmitting devices. In particular, we observe that a small power delta improves the reliability of ST even when the conditions for the capture effect are not met (Fig. 5).

Yet there is still a lot we do not fully understand. In [2], the authors attempted to model the effect of beating on ST for the different Bluetooth modes and validated their theory with two-transmitter experiments. It is expected that the effect of beating "averages out" with more transmitters, which should yield an increase in performance for the uncoded modes (1 Mbit and 2 Mbit). This remains to be validated experimentally.

Figure 4: [Measurements with 0 dB received signal difference at the receiver (estimated)] Without a power delta, ST can still be successful when (i) the same packets are sent and (ii) the time delta between the transmitters is sufficiently small. This is the so-called "constructive interference" effect. For the 4 Bluetooth modes (Fig. 4a to 4d), the median PRR drops to 0 when the time delta between transmitters becomes too large. The bounds found in our experiments are marked on the graph and labeled with the tolerable time delta (as a ratio of the symbol period, with the corresponding time in µs). In [2], the authors conclude that the Bluetooth modes cannot tolerate more than τ/4 of delay; our results show that some modes can. For IEEE 802.15.4, the PRR never drops to 0 thanks to the DSSS error correction (Fig. 4e); thus we redefine the "constructive interference" region as the time deltas for which the PRR is 0 when transmitters send different packets (Fig. 4f). We observe a limit around τ/2 (or 0.25 µs), which matches previous studies [6, 10].

Most importantly, apart from BlueFlood, introduced in [1] as a proof of concept, the design of multi-hop communication protocols based on ST over Bluetooth is largely unexplored. The different Bluetooth modes offer a broad design space in reliability, bandwidth, and range: how can these be leveraged to improve network-wide performance? How does this compare with the performance of Bluetooth Mesh (the multi-hop protocol included in the Bluetooth 5 standard)? How does it compare with existing solutions using the IEEE 802.15.4 physical layer? All these questions remain open.

Figure 5: [Measurements with the same packet content; PRR as a function of the time delta between transmitters for the different modes (columns) and power deltas (rows)] For the Bluetooth modes, it is not all "constructive interference" or "capture effect": a power delta smaller than the capture threshold (Table 2) still improves the reception of ST for moderate time deltas. In other words, even a small power delta increases the tolerable time delay and improves the PRR. For example, consider the 1 Mbit mode: we observe a capture threshold (i.e., when ST becomes successful regardless of the time delta) at about 10 dB. However, with only a 6 dB power delta, the median PRR is close to 100% for time deltas below 16 ticks (1 µs). A similar observation can be made for the 125 kbit mode: a 2 dB power delta is sufficient to provide good reliability up to 8 ticks of time delta (0.5 µs).

## REFERENCES

[1] Beshr Al Nahas, Simon Duquennoy, and Olaf Landsiedel. 2019. Concurrent Transmissions for Multi-Hop Bluetooth 5. In Proceedings of the 2019 International Conference on Embedded Wireless Systems and Networks (EWSN '19). Junction Publishing, USA. http://dl.acm.org/citation.cfm?id=3324320.3324336

[2] Beshr Al Nahas, Antonio Escobar-Molero, Jirka Klaue, Simon Duquennoy, and Olaf Landsiedel. 2020. BlueFlood: Concurrent Transmissions for Multi-Hop Bluetooth 5 - Modeling and Evaluation. arXiv:2002.12906 [cs]. http://arxiv.org/abs/2002.12906

[3] Anonymous. 2019. Anonymized for Double-Blind Review. Master Thesis.

[4] Anonymous. 2020. Dataset: Synchronous Transmissions on the nRF52840. https://fast-crag-58943.herokuapp.com/

[5] Anonymous. 2020. TriScale: A Framework Supporting Reproducible Performance Evaluations in Networking. Zenodo. https://doi.org/10.5281/zenodo.3656819

[6] Antonio Escobar-Molero. 2019. Improving Reliability and Latency of Wireless Sensor Networks Using Concurrent Transmissions. at - Automatisierungstechnik (2019). https://doi.org/10.1515/auto-2018-0064

[7] Federico Ferrari, Marco Zimmerling, Lothar Thiele, and Olga Saukh. 2011. Efficient Network Flooding and Time Synchronization with Glossy. In Proceedings of the 10th ACM/IEEE International Conference on Information Processing in Sensor Networks. https://ieeexplore.ieee.org/document/5779066

[8] Nordic Semiconductors. 2018. nRF52840. https://www.nordicsemi.com/Products/Low-power-short-range-wireless/nRF52840. [Online] Last accessed 2020-05-16.

[9] Nordic Semiconductors. 2018. nRF52840 Dongle. https://www.nordicsemi.com/en/Software%20and%20tools/Development%20Kits/nRF52840%20Dongle. [Online] Last accessed 2020-05-16.

[10] Matthias Wilhelm, Vincent Lenders, and Jens B. Schmitt. 2014. On the Reception of Concurrent Transmissions in Wireless Sensor Networks. IEEE Transactions on Wireless Communications (2014). https://doi.org/10.1109/TWC.2014.2349896

[11] Martin Woolley. 2018. Bluetooth 5: Go Faster. Go Further. Technical Report, Bluetooth SIG. https://www.bluetooth.com/bluetooth-resources/bluetooth-5-go-faster-go-further/

[12] Marco Zimmerling, Luca Mottola, and Silvia Santini. 2020. Synchronous Transmissions in Low-Power Wireless: A Survey of Communication Protocols and Network Services. arXiv:2001.08557 [cs, eess].
papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/BSZPNEUHiS2/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,207 @@
| 1 |
+
§ SYNCHRONOUS TRANSMISSIONS ON BLUETOOTH 5 AND IEEE 802.15.4 - AN EXPERIMENTAL STUDY
|
| 2 |
+
|
| 3 |
+
Double-blind submission
|
| 4 |
+
|
| 5 |
+
https://fast-crag-58943.herokuapp.com/
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Synchronous transmissions(ST)is a wireless communication technique that has been shown to be particularly efficient in low-power multi-hop networks. Since 2011, research on ST mainly focused on the physical layer defined by the IEEE 802.15.4 standard. Nowadays, another pervasive technology is embedded by default in almost all connected objects: Bluetooth. Thus, researchers recently started to investigate whether the benefits of ${ST}$ also apply to Bluetooth.
|
| 10 |
+
|
| 11 |
+
This paper presents the results of an experimental study of ${ST}$ using the popular and low-cost nRF52840 Dongle, which supports all modes of the Bluetooth 5 standard as well as IEEE 802.15.4. We measure the packet reception rate for different parameters known to affect ${ST}$ for all physical layers supported by the platform. We use a data exploration application that allows to extract useful information from the measurements and uncover new insights. We validate that ${ST}$ is viable on Bluetooth, as previously shown. Moreover, we highlight that successful ${ST}$ on Bluetooth cannot be explained by "constructive interference" or capture effect alone: multiple effects interplay in a way that is not yet fully understood.
|
| 12 |
+
|
| 13 |
+
Data Availability Statement. The authors commit to keep all data presented in this paper publicly available for at least 3 years. The data collection firmware and the data exploration app will be made available on GitHub (omitted for double-blind reviewing).
|
| 14 |
+
|
| 15 |
+
§ 1 INTRODUCTION
|
| 16 |
+
|
| 17 |
+
Synchronous transmissions(ST)(also referred to as concurrent transmissions) is a wireless communication technique that allows multiple nodes to transmit a message at the "same time." A destination node may successfully receive (one of these) synchronous transmissions thanks to two artifacts of the physical layer: constructive interference and the capture effect. In a nutshell, ${ST}$ is likely to be successful if the incoming messages arrive at the receiving node's antenna within a small time offset (in the range of a few $\mu \mathrm{s}$ ) and/or with a sufficiently large difference in signal strength (a few dB). In 2011, Glossy [7] was the first protocol using ${ST}$ for fast and reliable communication over a low-power multi-hop wireless network. This triggered a decade of research, mainly focused on the IEEE 802.15.4 standard. Refer to [12] for more details on ${ST}$ and the associated literature.
|
| 18 |
+
|
| 19 |
+
In 2019, Al-Nahas et al. showed that ${ST}$ can also be successfully used on Bluetooth's physical layer [1]. The authors presented a first characterization of the conditions under which ${ST}$ can be received and presented BlueFlood, a Glossy-like communication protocol where nodes efficiently exchange Bluetooth-compatible advertisement packets.
|
| 20 |
+
|
| 21 |
+
While these first results were promising, there are still some gaps in our understanding of how ${ST}$ work on Bluetooth. In particular, while we have an intuition of the parameters that affect the success of ${ST}$ , previous results suggest cross-dependencies between these effects [1], which are yet to be fully characterized. We therefore ask ourselves the following questions:
|
| 22 |
+
|
| 23 |
+
< g r a p h i c s >
|
| 24 |
+
|
| 25 |
+
Figure 1: Experimental setup of the nRF52840 Dongles in an anechoic chamber. Left corner: receiver. Right corner: transmitters.
|
| 26 |
+
|
| 27 |
+
Can we reproduce the results from [1] and confirm the conditions under which ST can be successful on Bluetooth's physical layers?
|
| 28 |
+
|
| 29 |
+
Can more data help shed light on ST's underlying mechanisms?
|
| 30 |
+
|
| 31 |
+
Does ST work on cheap commercial-off-the-shelf platforms?
|
| 32 |
+
|
| 33 |
+
We attempt to answer these questions with an experimental campaign using the nRF52840 Dongle [9] which is capable of both Bluetooth 5 and IEEE 802.15.4. We focus on the link layer and measure the packet reception rate for different parameters known to affect ${ST}$ for all physical layers supported by the platform. We perform all experiments with two synchronous transmitters set in an anechoic chamber to avoid external interference (Figure 1).
§ 2 BACKGROUND

This study aims to investigate and compare the conditions for successful synchronous transmissions (${ST}$) for the different physical layers (PHY) supported by the nRF52840 platform. Before presenting our results, this section briefly presents the platform (Sec. 2.1), summarizes the relevant properties of the supported PHY (Sec. 2.2), and lists the different parameters known to affect ${ST}$ (Sec. 2.3).

§ 2.1 THE NRF52840 PLATFORM

We perform all our experiments using the Nordic Semiconductor nRF52840 Dongle (also known as PCA10059) [9]. The dongle embeds a PCB antenna, a few peripherals, and the nRF52840 system-on-chip [8], including an ARM Cortex-M4, 256 kB of RAM, and 1 MB of flash. The dongle measures about 1.5 cm × 4.6 cm and costs around \$10 as of today: it is a cheap and small commercial-off-the-shelf embedded platform, suitable for all sorts of deployments.
Table 1: Summary of relevant PHY properties for ${ST}$.

IEEE 802.15.4   Modulation   Bitrate [bps]   Chip rate [cps]     Chip period [µs]     FEC
                O-QPSK       250k            2M                  0.5                  1:8

Bluetooth 5     Modulation   Bitrate [bps]   Symbol rate [sps]   Symbol period [µs]   FEC
                GFSK         2M              2M                  0.5                  -
                GFSK         1M              1M                  1                    -
                GFSK         500k            1M                  1                    1:2
                GFSK         125k            1M                  1                    1:8
The nRF52840 SoC features hardware support that is particularly interesting for ${ST}$. The programmable peripheral interconnect (PPI) allows peripherals to communicate with each other independently of the CPU. PPI signals are synchronized to a 16 MHz clock and thus have a predictable delay of (at most) 62.5 ns. Certain peripherals also provide so-called shortcuts, which are connections between events and tasks within a peripheral. For example, one can use shortcuts to automatically stop or clear a timer when it has reached a user-defined value, and then trigger a radio transmission using the PPI; all without involvement of the CPU, and thus with predictable timing.
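To make this concrete, the sketch below shows how such an event-to-task chain can be set up. It assumes the register definitions from Nordic's nRF52 MDK headers; the choice of timer, PPI channel, and compare value is ours and purely illustrative.

```c
#include <stdint.h>
#include "nrf52840.h"   /* nRF52 MDK register definitions */

/* Arm a one-shot, CPU-independent transmission trigger: when TIMER0
 * reaches CC[0], a PPI channel starts the radio ramp-up (TXEN) and a
 * shortcut stops the timer. */
static void arm_tx_trigger(uint32_t ticks_16mhz)
{
    NRF_TIMER0->BITMODE   = TIMER_BITMODE_BITMODE_32Bit << TIMER_BITMODE_BITMODE_Pos;
    NRF_TIMER0->PRESCALER = 0;            /* 16 MHz -> one tick = 62.5 ns */
    NRF_TIMER0->CC[0]     = ticks_16mhz;

    /* Shortcut: stop the timer automatically on the COMPARE[0] event. */
    NRF_TIMER0->SHORTS = TIMER_SHORTS_COMPARE0_STOP_Msk;

    /* PPI: route the TIMER0 COMPARE[0] event to the RADIO TXEN task. */
    NRF_PPI->CH[0].EEP = (uint32_t)&NRF_TIMER0->EVENTS_COMPARE[0];
    NRF_PPI->CH[0].TEP = (uint32_t)&NRF_RADIO->TASKS_TXEN;
    NRF_PPI->CHENSET   = PPI_CHENSET_CH0_Msk;

    NRF_TIMER0->TASKS_CLEAR = 1;
    NRF_TIMER0->TASKS_START = 1;
}
```

Once armed, the radio ramp-up starts exactly when the compare event fires, independently of what the CPU is doing at that moment.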
Finally, the nRF52840 supports multiple radio physical layers, which are further described in the following section.

§ 2.2 PHYSICAL LAYERS

The nRF52840 platform supports five different PHY: IEEE 802.15.4 and the four modes specified in Bluetooth 5 [11], which all operate in the 2.4 GHz ISM band. We briefly summarize the parameters of these PHY that are relevant for our study of ${ST}$ (Table 1).

The IEEE 802.15.4 standard uses O-QPSK modulation with DSSS forward error correction (FEC), encoding 4 bits of data into a symbol made of 32 chips (a chip is an analog 0 or 1). The chip rate is 2 Mcps, which yields a chip period of 0.5 µs and a data bitrate of 250 kbps (2 Mcps / 32 chips per symbol = 62.5 ksymbol/s, times 4 bits per symbol).

The Bluetooth 5 standard describes 4 modes, all using GFSK modulation. They are best identified by their bitrates of 2 Mbps, 1 Mbps, 500 kbps, and 125 kbps, respectively. These modes use different symbol rates; the slowest two use convolutional coding for FEC, with different rates (see, e.g., [2] for more details).
§ 2.3 PARAMETERS AFFECTING THE RECEPTION OF ${ST}$

${ST}$ refers to a situation where multiple transmitters in range of the same receiver simultaneously send packets. ${ST}$ is considered successful if the receiver correctly decodes one of the transmitted packets. The PHY supported by the nRF52840 (Sec. 2.2) are based on phase and frequency modulation, for which the following parameters are known to affect the success of ${ST}$.

Power delta. By design of RF receivers based on frequency modulation, if one signal is sufficiently stronger than the other interfering signals and still arrives during the preamble of the earlier transmissions, the receiver locks onto the stronger signal and decodes the corresponding packet with high probability. This is known as the capture effect and requires a power difference of 3 to 10 dB, depending on the PHY.
<graphics>

Figure 2: Schema of the experimental setup (pictured in Fig. 1).

Packet content. In an ideal scenario, the signals from different transmitters arrive at the receiver with the same phase offset. If the packets are the same, this can lead to constructive interference, resulting in a strictly stronger signal.

Time delta. Even if the packets are the same, the received signals invariably have an offset in the time domain. If this offset is larger than the symbol period $\tau$, the received symbols superpose randomly and reception fails (assuming no capture effect). However, if the time offset is small ($\tau$ or less), it has been experimentally shown that successful ${ST}$ is likely. This superposition is called "constructive interference."

Coding. Some PHY use various coding mechanisms to improve the reliability of reception. These techniques are known to affect the reception of ${ST}$ as well.

Carrier frequency offset. Transmitters do not have exactly the same carrier frequency, which is particularly true for cheap commercial-off-the-shelf platforms. With two transmitters, the envelope of the received signal has a sinusoidal shape, which creates time windows with stronger and weaker signals. This is known as the beating effect.
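For a sense of scale, the following back-of-the-envelope computation relates the carrier frequency offset to the beating period; the ±20 ppm crystal tolerance is an illustrative assumption, not a value measured in this study.

```c
#include <stdio.h>

/* Illustrative beating arithmetic: two transmitters whose crystals
 * deviate by +/-20 ppm (assumed) around a 2.44 GHz carrier. */
int main(void)
{
    double f_carrier = 2.44e9;              /* Hz */
    double ppm_a = +20e-6, ppm_b = -20e-6;  /* assumed crystal errors */
    double f_offset = f_carrier * (ppm_a - ppm_b);  /* carrier offset */
    double t_beat   = 1.0 / f_offset;       /* period of the envelope */

    printf("offset: %.1f kHz, beating period: %.2f us\n",
           f_offset / 1e3, t_beat * 1e6);
    /* ~97.6 kHz offset -> ~10.25 us beating period, i.e., comparable
     * to a packet's duration: strong and weak windows sweep across
     * the packet while it is being received. */
    return 0;
}
```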
Environment. Naturally, interference from other radio sources affects ${ST}$. Moreover, reflections of the transmitted signals may also reach the receiver (multipath effects), which creates additional signals with other phases and may affect the reception of ${ST}$ in peculiar ways.

Number of transmitters. With more transmitters, all the previous effects add up and mix in ways that are difficult to predict. In an ideal scenario, more transmitters could superimpose perfectly and produce a stronger signal; the reality is more complex and less predictable.
§ 3 EXPERIMENT DESIGN

Our experimental campaign investigates the success rate of ${ST}$ with two synchronous transmitters while varying four variables: the physical layer used, whether the transmitted packets have the same content or not, and the time and power deltas between the signals at the receiver. This section presents our experiment design, which is described in more detail in [3].

Setup. Our physical setup is illustrated in Fig. 2: two transmitters are placed at equal distance from the receiver, such that we can control the time delta of the received signals via the time offset at the transmitters. We run the experiments in an anechoic chamber to minimize external interference and signal reflections (Fig. 1).

Runs. Experiments are performed in runs where multiple parameter settings are tested in sequence. For each setting, we perform 20 ${ST}$: the receiver simply counts and logs the number of successful attempts. The runs are time-triggered and controlled by the trigger node on the transmitting side (Fig. 2). The sequence of settings is predefined such that nodes know how to set their radio correctly. Finally, the packet payload contains the current setting to guarantee that the receiver logs the correct data, even in case of packet losses.

Time delta setting. The time delta between transmitters is controlled by the trigger node; it raises GPIO pins connected to the transmitters with a precise offset, which triggers the transmissions. We analyzed the jitter on the resulting time delta between transmissions, which yields an accuracy of 124 ns (i.e., $\approx 2$ ticks) [3].
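A minimal sketch of such a trigger mechanism is shown below, again assuming the nRF52 MDK register definitions; pin numbers, channels, and the base compare value are our own illustrative choices. Routing the timer compare events to GPIOTE set-tasks through PPI keeps the offset independent of CPU timing (one 16 MHz tick = 62.5 ns).

```c
#include <stdint.h>
#include "nrf52840.h"

#define PIN_TX_A 13u   /* hypothetical pins wired to the two transmitters */
#define PIN_TX_B 14u

/* Raise pin A at an arbitrary t0 and pin B offset_ticks later,
 * with no CPU involvement between arming and the edges. */
static void trigger_with_offset(uint32_t offset_ticks)
{
    const uint32_t pins[2] = { PIN_TX_A, PIN_TX_B };

    for (int i = 0; i < 2; i++) {
        NRF_GPIOTE->CONFIG[i] =
            (GPIOTE_CONFIG_MODE_Task       << GPIOTE_CONFIG_MODE_Pos)     |
            (pins[i]                       << GPIOTE_CONFIG_PSEL_Pos)     |
            (GPIOTE_CONFIG_POLARITY_LoToHi << GPIOTE_CONFIG_POLARITY_Pos) |
            (GPIOTE_CONFIG_OUTINIT_Low     << GPIOTE_CONFIG_OUTINIT_Pos);

        /* PPI: TIMER1 COMPARE[i] event -> GPIOTE SET task for pin i. */
        NRF_PPI->CH[i].EEP = (uint32_t)&NRF_TIMER1->EVENTS_COMPARE[i];
        NRF_PPI->CH[i].TEP = (uint32_t)&NRF_GPIOTE->TASKS_SET[i];
    }
    NRF_PPI->CHENSET = PPI_CHENSET_CH0_Msk | PPI_CHENSET_CH1_Msk;

    NRF_TIMER1->PRESCALER = 0;                 /* 16 MHz */
    NRF_TIMER1->CC[0] = 1000u;                 /* pin A rises at t0 (arbitrary) */
    NRF_TIMER1->CC[1] = 1000u + offset_ticks;  /* pin B rises offset_ticks later */

    NRF_TIMER1->TASKS_CLEAR = 1;
    NRF_TIMER1->TASKS_START = 1;
}
```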
Power delta setting. While the radio can be configured with different transmit power settings, it is not clear (i) how precise these settings are, and (ii) what the actual signal strength at the receiver is. Therefore, we chose to start each run with a series of RSSI measurements. For each setting, each transmitter sends 20 packets for which the receiver measures and logs the corresponding RSSI value. These measurements are used in post-processing to estimate the actual received power difference between the transmitters in each of the different settings. According to the nRF52840 datasheet, the RSSI measurements have an accuracy of ±2 dB.
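The post-processing step reduces to simple statistics over the logged samples; in the sketch below the helper names are ours, and taking the median of the 20 calibration packets is our reading of "estimate", not a method prescribed by the paper.

```c
#include <stdint.h>
#include <stdlib.h>

/* Estimate the received power delta of one setting from the RSSI
 * values logged for each transmitter's 20 calibration packets. */
static int cmp_i8(const void *a, const void *b)
{
    return (int)*(const int8_t *)a - (int)*(const int8_t *)b;
}

static int8_t median_rssi(int8_t s[20])
{
    qsort(s, 20, sizeof s[0], cmp_i8);
    return s[10];                      /* upper median of 20 samples */
}

/* Positive result: transmitter A arrives stronger than transmitter B. */
static int power_delta_db(int8_t rssi_a[20], int8_t rssi_b[20])
{
    return (int)median_rssi(rssi_a) - (int)median_rssi(rssi_b);
}
```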
Parameters. We chose to test the following settings.

* We test the five physical layers supported (see Table 1).

* The time deltas set by the trigger node are ±{0, 1, ..., 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 80, 90, 100, 120} ticks.

* One transmitter sets its radio to 8 dBm while the other cycles through -8, -4, 0, 2, 3, 4, 5, 6, 7, and 8 dBm.

* Transmitters send packets of 38 bytes (Bluetooth-compatible advertisement packets). When the packet content should be different, 14 bytes (out of 38) are randomly generated.

We fix the packet content parameter in each run and cycle through the other settings; this yields 5900 combinations and 118'000 ${ST}$ attempts per run, resulting in about 74 min of runtime.

Data collection. For each of the 5900 settings, 20 ${ST}$ attempts are performed. The receiver logs the number of successes, which fits in one byte. In addition, the receiver must log the RSSI measurements: 10 power settings, 5 modes, 2 transmitters, and 20 attempts result in 2000 measurements of one byte each. Thus, one run produces 7.9 kB of data (5900 + 2000 bytes), which easily fits in the nRF52840 memory. Once the run is finished, the receiver is connected to a laptop; the RSSI and ${ST}$ data are written over UART and finally stored as CSV files.
§ 4 RESULTS

The main goal of our study is to (i) measure the tolerable time delta for "constructive interference" and the power delta threshold for the capture effect, and (ii) compare these to previous studies [2, 10]. This goal is fulfilled by Table 2. The following figures provide more details: Fig. 3 illustrates that the capture effect works in all modes, while Fig. 4 presents the data for the constructive interference case. Finally, Fig. 5 shows that there is a region where successful ${ST}$ can be explained neither by constructive interference nor by the capture effect alone.

Plotted data. In Fig. 3 to 5, the dots (when displayed) show the packet reception rate (PRR) for one run, which is computed over 20 ${ST}$ attempts (Sec. 3). Based on all the runs, we compute, for each setting, the median PRR (solid line) and its 75% confidence interval (shaded areas); i.e., given the collected data, there is a 75% probability that the true median PRR is within the confidence interval. We use TriScale [5] to compute the confidence intervals.
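In code, the per-setting statistic reduces to the following sketch (helper names are ours); the 75% confidence interval itself comes from TriScale's order-statistics method [5] and is not reproduced here.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Each run contributes PRR = successes / 20 for a setting; the
 * plotted solid line is the median PRR across runs. */
static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

static double median_prr(const uint8_t successes[], size_t n_runs)
{
    double prr[64];                     /* assumes n_runs <= 64 */
    for (size_t i = 0; i < n_runs; i++)
        prr[i] = successes[i] / 20.0;   /* 20 ST attempts per setting and run */
    qsort(prr, n_runs, sizeof prr[0], cmp_double);
    return (n_runs % 2) ? prr[n_runs / 2]
                        : 0.5 * (prr[n_runs / 2 - 1] + prr[n_runs / 2]);
}
```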
Threshold definitions. In Table 2, we report the thresholds we observe for "constructive interference" and the capture effect. We consider "constructive interference" present when the confidence interval of the median PRR rises above 0. For the capture effect threshold, we take the minimal power delta such that the confidence interval of the median PRR is above 75% for all time deltas.

Findings. Overall, we obtain three main findings:

(1) We observe that ${ST}$ is viable on all physical layers included in the Bluetooth 5 standard [11]. This confirms the results presented in [1, 2]. Similarly, our observations for IEEE 802.15.4 match previous studies [6, 7, 10].

(2) We measure the packet reception rate (PRR) of ${ST}$ on the nRF52840 Dongles; the capture effect thresholds and tolerable delays for "constructive interference" that we obtain (Table 2) slightly differ from previous studies [2, 10].

(3) For the Bluetooth modes, we confirm the evidence of a region where good PRR is observed but cannot be explained solely by the capture effect or constructive interference (Fig. 5).

The raw data, processing scripts, and data visualizations are all publicly available [4]. We created an online app to facilitate the data exploration and reproduce all the plots presented in the paper [4] (the reference contains only the link to the app due to blind review).
§ 5 LESSONS LEARNED

During this study, we learned a few lessons we deem worth sharing.

* A careful study of ${ST}$ implies working with a lot of data with many dimensions. We found that having an efficient data visualization framework is not just nice to have, it is necessary. We make our visualization tools available hoping they can be useful for others too.

* Fig. 5 clearly shows that ${ST}$ can be successful on the Bluetooth physical layers under conditions that can be attributed neither to the capture effect nor to constructive interference. Success depends on many other parameters (e.g., coding scheme, radio transceiver design); this is not yet fully understood.

* One important parameter is the carrier frequency offset between transmitters, which results in the well-known "beating effect" [6]. It has been shown experimentally that the beating frequency strongly affects the performance of ${ST}$ [2]. It is not clear whether the imprecise and unstable oscillators of cheap commercial-off-the-shelf platforms like the nRF52840 Dongle are detrimental or beneficial with respect to the beating effect. In our study, we tried different devices but obtained only mildly different carrier frequency offsets, leading to minor effects.
Table 2: Conditions for "constructive interference" and the capture effect reported in previous studies, compared with our measurements (rows labeled "This study"). We find that the Bluetooth modes can tolerate larger time delays than concluded in [2] while still benefiting from "constructive interference." Conversely, our thresholds for the capture effect are slightly more conservative. However, these differences are moderate and could simply be explained by the different definitions of these time and power thresholds (not clearly defined in [2]). τ denotes the symbol or chip period for the different PHY (see Table 1); consequently, τ = 0.5 µs for IEEE 802.15.4 and BLE 2 Mbit, and τ = 1 µs for the other Bluetooth modes.

Conditions         Units    Study       IEEE 802.15.4  BLE 2 Mbit    BLE 1 Mbit   BLE 500 kbit  BLE 125 kbit

Tolerable delay for "constructive interference"
- 0 dB delta       τ (µs)   [10]        τ/2 (0.25)     τ/2 (0.25)    τ/2 (0.5)    -             -
- same payload              [2]         -              τ/4 (0.13)    τ/4 (0.25)   τ/4 (0.25)    τ/4 (0.25)
                            This study  τ/2 (0.25)     3τ/4 (0.375)  3τ/4 (0.75)  τ/4 (0.25)    τ/2 (0.5)

Power difference threshold for the capture effect
- any time delta   dB       [10]        3              10            10           -             -
- any payload               [2]         -              8             8            8             2
                            This study  3              10            10           8             6
<graphics>

Figure 3: [Measurements with 0 µs time delta between the transmitters] ST is successful for all modes when the power delta at the receiver becomes sufficient. When the same packet is sent by the transmitters (Fig. 3a), the median PRR is close to or larger than 50% even without any power delta. In these conditions, the "constructive interference" effect helps the reception of ST. When different packets are sent (Fig. 3b), a larger power delta (between 2 and 10 dB depending on the mode) is required for the PRR to reach 100%. The minimum power delta beyond which ST is successful independently of the time delta (i.e., the capture effect threshold) is even larger (see Fig. 5). These results match those presented in [2].

* The timing of operation executions on the nRF52840 can be made very predictable thanks to built-in hardware support (e.g., PPI and shortcuts [8]). This significantly facilitates the design of communication protocols based on ${ST}$; in particular, achieving the sub-µs time synchronization accuracy necessary to benefit from "constructive interference" is now much easier than it used to be (e.g., for the initial Glossy [7]).

* The receptive field of the PCB antenna on the nRF52840 Dongle is far from perfect. During our experimental campaign, we sometimes observed significant differences between runs after touching the boards. Such effects cannot be observed when using coaxial cables, as in [2].

* As mentioned in Sec. 3, the precision of the time (±2 ticks) and power deltas (±2 dB) is on the order of the thresholds we try to identify: the results in Table 2 must be taken with care.
§ 6 CONCLUSIONS AND FUTURE WORK

With this study, we confirm that synchronous transmissions (${ST}$) are viable on the physical layers of Bluetooth 5. We show that ${ST}$ works reliably even on cheap commercial-off-the-shelf platforms like the nRF52840 [9]. The data we collected sheds more light on the expected performance of ${ST}$ for various time and power deltas between the signals of two transmitting devices. In particular, we observe that a small power delta improves the reliability of ${ST}$, even though the conditions for the capture effect are not met (Fig. 5).

Yet, there is still a lot we do not fully understand. In [2], the authors attempted to model the effect of beating on ${ST}$ for the different Bluetooth modes, and validated their theory with experiments with two transmitters. It is expected that the effect of beating "averages out" with more transmitters, which should yield an increase in performance for the uncoded modes (1 Mbit and 2 Mbit). This remains to be experimentally validated.

<graphics>

Figure 4: [Measurements with 0 dB received signal difference at the receiver (estimated)] Without power delta, ${ST}$ can still be successful when (i) the same packets are sent and (ii) the time delta between the transmitters is sufficiently small. This is the so-called "constructive interference" effect. For the 4 Bluetooth modes (Fig. 4a to 4d), the median PRR drops to 0 when the time delta between transmitters becomes too large. The bounds found in our experiments are marked on the graph and labeled with the tolerable time delta (as a ratio of the symbol period, with the corresponding time in µs). In [2], the authors conclude that the Bluetooth modes cannot tolerate more than τ/4 of delay; our results show that some modes can. For IEEE 802.15.4, the PRR never drops to 0 thanks to the DSSS error correction (Fig. 4e); thus, we redefine the "constructive interference" region as the time deltas for which the PRR is 0 when transmitters send different packets (Fig. 4f). We observe a limit around τ/2 (or 0.25 µs), which matches previous studies [6, 10].

Most importantly, apart from BlueFlood, introduced in [1] as a proof-of-concept, the design of multi-hop communication protocols based on ${ST}$ using Bluetooth is largely unexplored. The different Bluetooth modes offer a broad design space in reliability, bandwidth, and range properties: How can we leverage these to improve network-wide performance? How does this compare with the performance of Bluetooth Mesh (the multi-hop protocol included in the Bluetooth 5 standard)? How does it compare with existing solutions using the IEEE 802.15.4 physical layer? All these questions remain open.
papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/D8K3TY96hrz/Initial_manuscript_md/Initial_manuscript.md
# Quantifying the Latency and Possible Throughput of External Interrupts on Cyber-Physical Systems

## ABSTRACT

An important characteristic of cyber-physical systems is their capability to respond, in time, to events from their physical environment. However, to the best of our knowledge there exists no benchmark for assessing and comparing the interrupt handling performance of different software stacks. Hence, we present a flexible evaluation method for measuring the interrupt latency and throughput on ARMv8-A based platforms. We define and validate seven test-cases that stress individual parts of the overall process and combine them into three benchmark functions that provoke the minimal and maximal interrupt latency, and the maximal interrupt throughput.

## DATA AVAILABILITY STATEMENT

A snapshot of the exact version of the prototyping platform [] that was used to conduct the measurements described in this paper has been made available through Zenodo []. The snapshot also includes the captured, raw STM trace data and the matching processing scripts to produce the figures included in this paper. The latest version of the platform can be obtained from [].

## 1 INTRODUCTION

Cyber-physical systems (CPSs) are characterized by the fact that a computer system works together with a physical environment, or rather controls it. A specific characteristic of such control systems is their need to provide short reaction times to events in the physical world, to guarantee good control quality. However, to properly design a CPS, the reaction times not only need to be fast, but also predictable [6]. Both properties are likewise essential for modern systems, such as tele-operated driving [11], and classic systems, such as the control of internal combustion engines [5].
An important, but often neglected aspect of the achievable reaction time is the interrupt handling performance, in both dimensions: the interrupt handling latency and the throughput capabilities of a system. Especially the effect of the utilized software stack has not yet been comprehensively assessed, even though the development of key components of the CPS software stack, such as operating systems, virtualization solutions, and communication stacks, could benefit from the insights of such an assessment. Furthermore, such a systematic evaluation of the interrupt performance would also be of interest to system engineers, to evaluate architecture concepts or acquired software stacks for their suitability in a particularly latency-sensitive or throughput-hungry use-case. Previous approaches in this area require very detailed knowledge of the utilized hardware and are therefore not easy to apply [10].

We present a flexible interrupt performance measurement method that can be applied to a wide variety of hardware platforms and software systems. As we see an increasing share of ARM based systems used in CPSs [8], we have chosen the ARMv8 hardware platform and its on-chip tracing technologies as the basis for our measurement method. Based on the measurement method, we make a first attempt to specify benchmark functions that provoke the minimal and maximal interrupt latency, as well as the maximal interrupt throughput achievable with a system under consideration.

To deduce appropriate benchmark functions, we designed seven independent test-cases, specifically tailored to stress test individual parts of the ARM interrupt handling process. We evaluated those test-cases on a Xilinx Zynq UltraScale+ MPSoC ZCU102 evaluation board [12], by comparing the interrupt performance of two different software stacks: a minimal bare-metal application and a full FreeRTOS real-time operating system based stack. Out of the seven test-cases, we formed 10 distinctive combinations and assessed them as potential benchmark functions. Based on this assessment, we propose three benchmark functions to trigger the minimal and maximal interrupt latency and the maximal interrupt throughput with a given software stack.

Our measurements confirmed that the chosen software stack indeed has a considerable effect on the interrupt handling performance of a CPS. Hence, we envision that the proposed evaluation method could be used to benchmark different software solutions against each other, to create a profound basis for further optimizations of the interrupt handling path in CPS software architectures.

The rest of this paper is structured as follows: Section 2 describes the interrupt handling process on ARMv8-A platforms with a Generic Interrupt Controller (GIC) version 2, Section 3 continues with a presentation of the envisioned evaluation method, consisting of a measurement setup and procedure. Section 4 discusses the proposed test-cases and benchmarks along with the measured latency and throughput values. Section 5 concludes the paper.
## 2 INTERRUPT HANDLING PROCEDURE ON ARMv8-A PLATFORMS

Müller and Paul [9] define an interrupt as an event that causes a change in the execution flow of a program sequence other than a branch instruction. Its handling process starts with the activation through a stimulus and ends with the completion of the interrupt service routine (ISR), which is called in consequence and processes the stimulus. Until the ISR is executed, several steps are undergone in hardware to cope, for example, with simultaneously arriving interrupt requests (IRQs) and the masking of certain requests. In the following, we explain this process for ARMv8-A platforms, as specified in the GIC architecture specification version 2 [1]. In Section 4, this information is used to design suitable test-cases.

The GIC architecture specification differentiates among four types of interrupts: peripheral, software-generated, virtual, and maintenance interrupts. In the course of this paper, we focus solely on measuring interrupts triggered by external stimuli, the peripheral interrupts. They can be configured as edge-triggered or level-sensitive. This means that the corresponding interrupt is recognized either once, on a rising edge in the input event signal, or continuously, as long as the signal has a certain strength.



Figure 1: Selection and signaling process of the Generic Interrupt Controller (GIC), executed repeatedly per CPU interface, for handling triggered interrupt requests (IRQs).



Figure 2: Overall steps of the interrupt handling process on ARMv8 platforms with a Generic Interrupt Controller (GIC) version 2 and an ARMv8-A architecture profile core.

The GIC supervises the overall interrupt routing and management process up to the point that the ISR is called. Figure 1 shows the GIC architecture and the signaling path for the selected measurement hardware, the Xilinx UltraScale+ MPSoC (ZynqMP). The GIC manages all incoming event signals of the system and consists of a central Distributor and a CPU interface per processor core. The Distributor manages the trigger-type of each interrupt, organizes their prioritization, and forwards requests to the responsible CPU interface. The CPU interfaces perform the priority masking and preemption handling for their associated core.

The timeline of the various steps in the interrupt handling process is illustrated in Fig. 2. The handling process of a certain IRQ begins with the arrival of an event signal at the Distributor (step 1). In case the signal matches the configured trigger type, the actual handling process is triggered through the recognition of a new IRQ (step 2), which eventually leads to the execution of the ISR (step 9).
After being recognized (step 2), the Distributor may select and forward the IRQ to the responsible CPU interface according to the process depicted in the upper half of Fig. 1. This selection process (step 3) is executed repeatedly and potentially in parallel for each CPU interface. When the next highest priority pending interrupt (HPPI) is identified, the Distributor forwards the request to the currently inspected CPU interface (step 4). The CPU interface filters incoming requests according to its configuration; the lower half of Fig. 1 depicts this filtering process. As a result, the CPU interface may signal a pending IRQ to its associated core (step 5).



Figure 3: Chosen measurement setup, with four PL2PS interrupts generated by the programmable logic (PL) according to the configuration signaled by the processing system (PS) via its extended multiplexed I/O interface (EMIO). The generated interrupts, executed instructions, and global timestamps are recorded through the system trace macrocell (STM) and embedded trace macrocell (ETM). The captured trace is read via an ARM DSTREAM unit.

Subsequent to the signaling of a new IRQ through the CPU interface, the handling process continues in the core. In case of the ARMv8-A architecture [4], the core acknowledges the signaled IRQ by reading its id (step 6), which marks the IRQ as active within the GIC (step 7). Meanwhile, the processing core jumps to the vector table entry indicated by the id of the current IRQ (step 8), which finally leads to the execution of the ISR (step 9). Individual software stacks might add additional steps before the ISR finally starts.

An important factor in the described IRQ handling process is the priority of the interrupt id associated with a specific IRQ, where a lower number indicates a higher priority. Besides the regular interrupt requests described above, the GIC architecture version 2 also supports the prioritized and more secure fast interrupt requests (FIQs); however, these lie outside the focus of this paper.

Besides the configured priorities and masks, the GIC internally also tracks already signaled and currently serviced IRQs through an internal state-machine, such that multiple IRQs of the same type can be signaled and handled simultaneously.

## 3 PROPOSED EVALUATION METHOD
With respect to the interrupt performance of a certain platform and software stack, two properties are most relevant and need to be measured: the IRQ latency and throughput. Considering the overall interrupt handling process depicted in Fig. 2, we define the latency of an IRQ as the time between a newly signaled event (step 1) and the moment when the corresponding ISR is called (step 9). In case a software stack adds additional processing steps to the process, an appropriate alternative end-point shall be identified. As throughput, we understand the number of completed ISRs per time unit.

Depending on the hardware platform and software stack under analysis, the above figures may be influenced in variable proportions by software and hardware. This is especially true for the IRQ latency. However, from the system design point of view, reliable bounds for the overall latency, between the signaling and processing of an interrupt, are more important than knowing the actual proportion. Thus, we measure the whole interrupt handling process end-to-end.

Nonetheless, it is important that the measurements do not include any intangible factors. Hence, we base our measurements on the hardware tracing functionality provided by the ARM CoreSight technologies [2]. They allow us to precisely measure and correlate processor internal and external events in a common time domain.

### 3.1 Measurement Setup

Our measurements utilize a Xilinx Zynq UltraScale+ MPSoC ZCU102 evaluation board [12], with an ARM DSTREAM [3] hardware trace unit, and the prototyping platform []. We have chosen the ZynqMP, as it features a seamlessly integrated programmable logic (PL) and versatile hardware tracing capabilities. Figure 3 illustrates the hardware side of the chosen setup.

The ZynqMP is divided into the PL and the processing system (PS). The PS provides two processor clusters, the application processing unit (APU) and the real-time processing unit (RPU). The APU consists of four ARM Cortex-A53 cores and the RPU of two ARM Cortex-R5 cores, both equipped with their own interrupt handling infrastructure. In the following, we solely focus on the APU and its interrupt handling, as it most closely matches our definition of a commercial off-the-shelf (COTS) hardware platform.

The built-in tracing capabilities of the ZynqMP allow recording events, such as taken branches in the executed code, recognized external interrupt stimulations, and special software driven events, on a common timeline that is given by global system-wide timestamps. The overall recording process is conducted by the system trace macrocell (STM) and embedded trace macrocells (ETMs) in hardware, and thus does not alter the behavior of the executed code, neither in a temporal nor in a semantic sense. Software driven events, however, are recorded by writing into special registers and thus marginally influence the timing of the executed code.

In addition to the built-in hardware features, we deploy our custom interrupt generation block in the PL. The block simultaneously stimulates the APU and the STM with four signal lines, to capture each interrupt signal within the system trace. The stimulation scheme can be individually defined by setting two 32 bit counters. One controls the duration of the logical-high phase (On-Count) and the other that of the logical-low phase (Off-Count). Given our PL clock configuration, both counters allow specifying a duration between 0 s and 17.1798 s.
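The 17.1798 s upper bound follows directly from the counter width and the 250 MHz PL clock configuration described in Section 3.3; a quick sanity check:

```c
#include <stdio.h>

/* A 32-bit counter clocked at the 250 MHz PL clock covers
 * 2^32 ticks of 4 ns each. */
int main(void)
{
    double tick_s = 1.0 / 250e6;               /* 4 ns per PL clock cycle */
    double max_s  = 4294967296.0 * tick_s;     /* 2^32 ticks */
    printf("max duration: %.7f s\n", max_s);   /* -> 17.1798692 s */
    return 0;
}
```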
In parallel to the STM trace, the ETMs capture, among other things, indirect and direct branch instructions executed by the four cores in the APU. As mentioned before, all STM and ETM trace events are marked with global timestamps. We configure the ZynqMP to trigger the timestamp counter every 4 ns.
### 3.2 Measurement Procedure

Given the above measurement setup, we propose two configurations for the interrupt generation block and two time measurements based on the timestamps of the captured trace events: one for throughput and one for latency measurements. Within both measurement types, we utilize core 0 of the APU as the measurement core and the other cores of the APU for stimulation purposes, where needed. All measurements are conducted with two different software stacks: (i) bare-metal and (ii) FreeRTOS based systems.

Throughput. In case of a throughput evaluation, we configure the interrupt generation block to continuously signal the PL2PS interrupts for 9.75 s and then wait for another 250 ms, on a rotating basis. We call each repetition of this pattern a stimulation phase. Core 0 is configured to handle the signaled private peripheral interrupt (PPI) on a level-sensitive basis, and the corresponding ISR does nothing besides emitting an STM software event, i.e., executing an 8 bit store instruction. Hence, the time spent in each ISR, and with that the possible reduction of the maximum throughput, is negligible.
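Such an ISR is essentially a single store. A sketch is shown below; the handler name and the mapping of the stimulus port are ours, since on a concrete platform the address of the CoreSight STM extended stimulus port comes from the device configuration.

```c
#include <stdint.h>

/* Minimal ISR used in both measurement types: its only work is one
 * 8-bit store to a memory-mapped STM extended stimulus port, which
 * the trace infrastructure records as a timestamped software event.
 * The mapping of the port is platform-specific and omitted here. */
extern volatile uint8_t *stm_stimulus_port;   /* mapped STM stimulus register */

void pl2ps_irq_handler(void)
{
    *stm_stimulus_port = 0x01u;   /* 8-bit store -> STM software event */
    /* No further processing: time spent in the ISR stays negligible. */
}
```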
For each throughput measurement, we capture a 120 s trace and evaluate the contained stimulation phases. The throughput $\mu(i)$ in each stimulation phase $i \in [0, 19]$ is obtained from the traced measurement samples by counting the ISR-generated STM software events between the rising-edge STM hardware events of the $i$-th and $(i+1)$-th stimulation phases, and dividing the count by the length of one logical-high phase. The set of throughput values considered for the evaluation in Section 4 is then given by $M = \{\mu(i) \mid i \in [0, 19]\}$.

Latency. The latency evaluation is conducted with an alternating scheme of a 1 ms PL2PS interrupt trigger and a 4 ms pause. The interrupt generation block is configured accordingly. Again, we refer to each repetition of this scheme as a stimulation phase. In contrast to the throughput measurement, however, core 0 is configured to handle the signaled PPI on a rising-edge basis. Thus, every stimulation phase provokes only one interrupt. The corresponding ISR is the same as for the throughput measurements.

The results for the latency measurements are obtained by evaluating 30 s trace captures. The interrupt latency $\Delta t_{\text{latency}}(i)$ induced by each stimulation phase $i \in [0, 2399]$ is given by $\Delta t_{\text{latency}} = B - A$, with $A$ representing the point in time where the interrupt was stimulated and $B$ the point where the corresponding ISR was started. Both points can be obtained from the captured trace. $A$ is given by the timestamp of the STM hardware event associated with the rising edge of the PL2PS interrupt signal. $B$, on the other hand, has to be determined and defined for every analyzed software stack individually. In the course of this paper, we utilize the timestamp of an STM event generated within the interrupt handler of our benchmark application that runs on top of the evaluated software stacks.
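In post-processing, both metrics reduce to simple arithmetic over the trace timestamps; a sketch (type and function names are ours), using the 4 ns timestamp resolution from Section 3.3:

```c
#include <stdint.h>

/* Timestamps arrive from the captured trace as global timestamp
 * ticks of 4 ns each (Sec. 3.3). */
#define NS_PER_TICK 4u

/* Latency of one stimulation phase: B - A, with A the STM hardware
 * event of the rising edge and B the ISR's STM software event. */
static uint64_t latency_ns(uint64_t a_ticks, uint64_t b_ticks)
{
    return (b_ticks - a_ticks) * NS_PER_TICK;
}

/* Throughput of phase i: ISR-generated STM software events counted
 * between the rising-edge events of phases i and i+1, divided by the
 * 9.75 s logical-high phase. */
static double throughput_hz(uint64_t n_isr_events)
{
    return (double)n_isr_events / 9.75;
}
```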
Similar to the throughput values, the set of latency values considered in Section 4 is given by $X = \{\Delta t_{\text{latency}}(i) \mid i \in [0, 2399]\}$.

### 3.3 Precision and Limitations

In our measurement setup, we configure the PL, trace port, and timestamp generation clock to oscillate at 250 MHz. Hence, two consecutive timestamp ticks lie 4 ns apart from each other. Since each sampled event in the ETM and STM is assigned a timestamp, our measurement precision corresponds exactly to the system timestamp resolution, i.e., 4 ns. This is an order of magnitude smaller than the interrupt latency measured in a previous study for the same hardware platform [] and a quarter of the measured minimal interrupt latency of an ARM real-time core [10].

Even though state-of-the-art oscilloscopes provide a sampling rate of up to 20 GSa/s [7], which would correspond to a measuring precision of 0.05 ns, the actual measurement precision in case of interrupt latency measurements might be considerably lower. The reason for this is that the oscilloscope can only measure external signals of a processor. Thus, in-depth knowledge of the internal structure of the hardware platform and of the instructions executed in the software stack during a measurement is required to counter this circumstance and utilize the full precision of the oscilloscope. This makes it less suited for the evaluation of different hardware platforms and software stacks. The CoreSight based measurement setup, on the other hand, supports a flexible placement of the measurement points within and outside of the processor and does not require any specific knowledge about the hardware or software.

Besides the measurement precision and flexibility, we also need to ensure that the presented measurement setup is sane and that triggered interrupts can actually be recognized by the processor. According to the ZynqMP technical reference manual [13, p. 312], a signal pulse that shall trigger a PL2PS interrupt needs to be at least 40 ns wide to guarantee that it is recognized as such. Hence, the presented stimulation scenarios for the two measurement procedures ensure that all triggered interrupts can be recognized.

The disadvantage of the presented measurement approach, however, is that it is only applicable to ARM based platforms. Given ARM's 40% share of the semiconductor market for IP designs [8], we believe this is an acceptable drawback. An additional limitation is that valid measurements can only be obtained for the interrupt with the highest priority among the active ones, but this limitation applies to measurement setups of any kind.

## 4 CONSTRUCTING A BENCHMARK
In order to create a benchmark for comparing the interrupt latency and throughput across platforms and software stacks, we have designed seven test-cases specifically tailored to stress the ARMv8-A interrupt handling process. To judge the suitability of the individual test-cases for an overall benchmark, we evaluated all of them with two software stacks running on top of the ZynqMP. As software stacks, we utilize a bare-metal system that executes the ISRs and otherwise busy-waits in an endless loop, and a FreeRTOS based system with its own scheduler and timer tick that executes the same ISR routine. In Section 4.1, we evaluate the impact of each test-case by comparing the performance obtained with the test-case against the baseline performance of the two systems.

Given the results of the individual test-cases, we compose three benchmarks out of them and show their suitability by applying them to the same system configurations.

### 4.1 Evaluated Test-Cases

Given the interrupt handling process in Section 2, we conclude that the time spent in the process can be influenced by: the core, caches, memory, and GIC. We have designed seven test-cases that aim to reveal the influence of different configuration settings related to the aforementioned components on the temporal behavior of the interrupt handling process. However, we exclude the core from our considerations by only measuring interrupts with the highest priority. An overview of the proposed test-cases and their targeted components is given in Table 1. The remainder of this section elaborates on the intended influence of the listed test-cases on the interrupt handling process. The measurements for all test-cases follow the scheme presented in Section 3, unless indicated otherwise. Depending on the goal of each test-case, they are applied either only to latency measurements or to both latency and throughput measurements. The reason for that is that we aim at measuring the best and worst possible interrupt latency, but only the maximal possible interrupt throughput. The results of all evaluation runs are summarized in Figs. 4 and 5. The presented results are based on 848-6000 measurement samples per latency measurement and 11-12 samples per throughput measurement.

T1: Baseline. T1 is intended to provide a reference point to compare the other test-cases to and rate their impact. Hence, T1 assesses the interrupt latency and throughput of a system in the most isolated way, with only one core and interrupt enabled and caches disabled: it only enables the extended multiplexed I/O interface (EMIO) pin driven interrupt and routes it to core 0. As ISR, the default handler, described in Section 3.2, is used. T1 is evaluated for its latency and throughput performance.
T2: Caches enabled. T2 equals T1, with the exception that all operations are executed with enabled caches. This test is conducted for both latency and throughput measurements.

T3: Caches invalidated. T3 is also based on T1, but the ISR additionally invalidates the data and instruction caches. Since this is not feasible in throughput measurements, as new interrupts would arrive independently of the cache invalidation process, we conduct only latency measurements with T3.

T4: Enabled interrupts. T4 aims at stressing the GIC with the highest possible number of enabled interrupts, as the interrupt selection and signaling process, illustrated in Fig. 1, suggests that more checks have to be done the more interrupts are enabled/pending. Hence, this test-case enables up to 180 stressing interrupts, the maximum number supported by the ZynqMP, except those required for driving the measurements itself. All interrupts are routed to and handled by core 0. The EMIO triggered PL-to-PS interrupt is assigned the highest priority and all other interrupts the lowest priority.

Core 0 installs an empty ISR that immediately returns after clearing the IRQ in the GIC for all interrupts, except the EMIO pin based PL-to-PS interrupt, which uses the same ISR as T1.

As this test aims at stressing the GIC to reduce its performance, we only evaluate it with respect to the interrupt latency. To be able to identify trends, we evaluated this test-case with 1, 36, 72, 108, 144, and 180 stressing interrupts. However, due to the marginal differences between the results of the different T4 variants and space constraints, we only show the results of T4-180 (T4 with 180 stressing interrupts), which provoked the highest latency.
T5: Order of priorities. T5 utilizes the same setup as T4 and is also applied to latency measurements only. However, in contrast to T4, T5 only utilizes as many interrupts as there are priorities, i.e., 15. The measured interrupt remains at priority 0, and the priorities of the other 14 are assigned in ascending order (i.e., 14 to 1). This design intends to provoke a maximal number of HPPI updates.

Table 1: Properties of the evaluated test-cases and benchmarks used to compare the interrupt latency (L) and throughput (T).

<table><tr><td rowspan="2">Description</td><td rowspan="2">Targeted Component</td><td colspan="2">Measurements</td><td rowspan="2">Enabled Interrupts</td><td rowspan="2">Cache Config</td><td rowspan="2">Enabled Cores</td><td colspan="3">Benchmarks</td></tr><tr><td>L</td><td>T</td><td>${\mathrm{L}}_{\min }$</td><td>${\mathrm{L}}_{\max }$</td><td>Tmax</td></tr><tr><td>T1: Baseline</td><td>-</td><td>X</td><td>X</td><td>1</td><td>Disabled</td><td>1</td><td/><td/><td/></tr><tr><td>T2: Caches enabled</td><td>Cache</td><td>X</td><td>X</td><td>1</td><td>Enabled</td><td>1</td><td rowspan="2">X</td><td rowspan="2"/><td>X</td></tr><tr><td>T3: Caches invalidated</td><td>Cache</td><td>X</td><td/><td>1</td><td>Invalidated</td><td>1</td><td/></tr><tr><td>T4: Enabled interrupts</td><td>GIC</td><td>X</td><td/><td>2-181</td><td>Disabled</td><td>2</td><td/><td>X</td><td rowspan="3">X</td></tr><tr><td>T5: Order of priorities</td><td>GIC</td><td>X</td><td/><td>15</td><td>Disabled</td><td>2</td><td/><td/></tr><tr><td>T6: Parallel interrupt handling</td><td>GIC</td><td>X</td><td>X</td><td>1</td><td>Disabled</td><td>2,3,4</td><td/><td>X</td></tr><tr><td>T7: Random memory accesses</td><td>Memory</td><td>X</td><td/><td>1</td><td>Disabled</td><td>4</td><td/><td/><td/></tr></table>



Figure 4: Latency measured with T1-T7 (a) and B-Lmin and B-Lmax (b-c). Figure a) uses a symlog scale with a linear threshold of 2496 ns, Fig. b) uses a symlog scale with a linear threshold of 240 ns, and Fig. c) uses a linear scale.



Figure 5: Throughput measured with T1, T2, T6, and B-Tmax. Figure a) compares the median of all measurements on a linear scale and Fig. b) illustrates the measured throughput ranges on a symlog scale with a linear threshold of 1 Hz, normalized to a 500 kHz range around the highlighted median.

T6: Parallel interrupt handling. To test the influence of interrupts handled in parallel on the interrupt handling process, T6 enables up to 4 cores and configures all of them to handle the EMIO pin 0 interrupt. The interrupt is configured as level-sensitive with the highest priority. The PL ensures that this interrupt is signaled continuously and simultaneously as soon as the test is enabled. The ISRs on all cores generate STM events, which are evaluated for throughput measurements. In case of latency measurements, however, only those STM events produced by core 0 are considered.
We evaluated T6 with 2, 3, and 4 enabled cores. The results showed a clear trend: the more enabled cores, the higher the observed latency, but the lower the achieved throughput. Due to space constraints, we thus only show the results for T6-4 (4 enabled cores) in case of the latency considerations, and T6-2 in case of the throughput measurements.

T7: Random memory accesses. As pointed out earlier, the shared memory and interconnecting buses of multi-core processors represent a major source of unforeseen delays. Accordingly, T7 is designed to delay memory accesses by overloading the interconnecting bus and memory interface. For this purpose, all 4 cores run random, concurrent memory accesses in the form of constants written to random locations in a 96 MB array. In parallel, core 0 executes the standard latency test. Throughput evaluations are not considered with this test-case, as it targets delaying the interrupt handling process.
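A sketch of such a stressor loop, as it would run on each of the four cores, is shown below; the array size comes from the text, while the constant value, the PRNG, and the function names are our own choices.

```c
#include <stdint.h>

/* T7 stressor sketch: write a constant to random offsets in a 96 MB
 * array to overload the interconnect and the memory interface. */
#define ARRAY_BYTES (96u * 1024u * 1024u)

static uint32_t xorshift32(uint32_t *state)   /* tiny, libc-free PRNG */
{
    uint32_t x = *state;
    x ^= x << 13;  x ^= x >> 17;  x ^= x << 5;
    return *state = x;
}

static void stress_memory(volatile uint8_t *arr)
{
    uint32_t seed = 0xDEADBEEFu;   /* arbitrary per-core seed */
    for (;;)
        arr[xorshift32(&seed) % ARRAY_BYTES] = 0xA5u;  /* constant, random spot */
}
```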
### 4.2 Proposed Benchmarks

Analyzing the measured interrupt performances under the different test-cases, shown in Figs. 4 and 5, we conclude first of all that different setups and software stacks indeed considerably influence the interrupt handling performance. All three targeted components provoke a considerable effect on the interrupt latency and throughput. Particularly noticeable are the differences between the test-cases with enabled (T2, T3) and disabled caches (T1, T4-T7), for both the observed latency and throughput, as well as the effects of stressing the GIC on the measured latency (T4-T6).

T2 produces by far the shortest interrupt latency of 232 ns on average, with only a few outliers. Hence, we propose to utilize T2 as the benchmark for the minimal achievable latency (B-Lmin).

To obtain a suitable benchmark for the maximal latency, we analyzed all combinations of the test-cases T4-36, T4-144, T6-3, and T7. Except for the combination of T6 and T7, all tested combinations showed a similar performance with only slight differences. An exception is the interrupt latency performance of the combination of T4-144 and T6 on FreeRTOS, which is considerably more confined than all other observed ranges. The highest latency is achieved with the combination of T4-36 and T6; however, the combination of T4-36, T6, and T7 is close. Accordingly, we propose to use the combination of T4-36 and T6 to benchmark the achievable maximal interrupt latency (B-Lmax).

For the maximal throughput benchmark (B-Tmax), we evaluated all four variants of the T6 test-case with enabled caches (T2). Interestingly, the enabled caches seem to mitigate the effect of more enabled cores, as all combinations showed a similar throughput. However, the combination of T6-2 and T2 still performed best. Even though the maximal achieved throughput of the combined test-cases lags a little behind that of T2 alone in case of the bare-metal software stack, it outperforms T2 by far in case of the FreeRTOS based stack. Hence, we propose the combination of T6-2 and T2 to benchmark the maximal throughput of a system.

## 5 CONCLUSION AND OUTLOOK

We presented a flexible evaluation method based on the ARM CoreSight technology, which enables the assessment of various software stacks on top of commodity ARMv8-A platforms with respect to their interrupt handling performance. Utilizing the evaluation method, we crafted seven specifically tailored test-cases that were shown to stress the ARM interrupt handling process. Out of these test-cases, we deduced three benchmark functions, tailored to provoke the minimal (B-Lmin) and maximal (B-Lmax) interrupt latency, and the maximal throughput (B-Tmax), of a given software stack. We validated the test-cases and benchmark functions by comparing two software stacks (a simple bare-metal and a FreeRTOS based environment) and measuring them on top of a Xilinx Zynq UltraScale+ MPSoC ZCU102 evaluation board [12].

Our measurements showed that different software stacks can have a considerable impact on the interrupt handling performance of a hardware platform. Hence, we hope to draw some attention to the importance of good software design for CPSs with respect to interrupt processing, and to the need for a more profound analysis of how interrupt handling processes can be made more predictable with respect to the achievable latency and throughput.

## ACKNOWLEDGMENTS

## REFERENCES

[1] ARM Ltd. 2013. ARM Generic Interrupt Controller - Arch. ver. 2.0. (IHI 0048B.b).

[2] ARM Ltd. 2013. CoreSight Technical Introduction. White paper ARM-EPM-039795.

[3] ARM Ltd. 2015. ARM DS-5 Version 5 ARM DSTREAM User Guide. (DUI 0481K).

[4] ARM Ltd. 2018. ARM Architecture Reference Manual - ARMv8, for ARMv8-A architecture profile. (DDI 0487D.a).

[5] P. Frey. 2010. Case study: engine control application. Technical Report. Open Access Repositorium der Universität Ulm.

[6] E. D. Jensen. 2008. Wrong Assumptions and Neglected Areas in Real-Time Systems. In 11th Int. Symp. on Object and Component-Oriented Real-Time Distributed Computing (ISORC).

[7] Keysight Tech. 2020. InfiniiVision 6000 X Series Oscilloscopes Data Sheet.

[8] D. Manners. 2020. Design IP market grew 5.2% last year. https://www.electronicsweekly.com/news/business/732874-2020-04/. Accessed 06-05-2020.

[9] S. M. Müller and W. J. Paul. 1995. The Complexity of Simple Computer Architectures. Springer-Verlag, Chapter 8, 141-178.

[10] NXP Semiconductors. 2018. Measuring Interrupt Latency. Appl. Note AN12078.

[11] T. Tang, F. Chucholowski, and M. Lienkamp. 2014. Teleoperated driving basics and system design. ATZ worldwide 116, 2 (Jan. 2014), 16-19.

[12] Xilinx, Inc. 2018. ZCU102 Evaluation Board - User Guide. UG1182 (v1.4).

[13] Xilinx, Inc. 2018. Zynq UltraScale+ Device - TRM. UG1085 (v1.8).

papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/D8K3TY96hrz/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,195 @@
§ QUANTIFYING THE LATENCY AND POSSIBLE THROUGHPUT OF EXTERNAL INTERRUPTS ON CYBER-PHYSICAL SYSTEMS

§ ABSTRACT

An important characteristic of cyber-physical systems is their capability to respond, in time, to events from their physical environment. However, to the best of our knowledge there exists no benchmark for assessing and comparing the interrupt handling performance of different software stacks. Hence, we present a flexible evaluation method for measuring the interrupt latency and throughput on ARMv8-A based platforms. We define and validate seven test-cases that stress individual parts of the overall process and combine them into three benchmark functions that provoke the minimal and maximal interrupt latency, and the maximal interrupt throughput.

§ DATA AVAILABILITY STATEMENT

A snapshot of the exact version of the prototyping platform [] that was used to conduct the measurements described in this paper has been made available through Zenodo []. The snapshot also includes the captured raw STM trace data and the matching processing scripts to produce the figures included in this paper. The latest version of the platform can be obtained from [].

§ 1 INTRODUCTION

Cyber-physical systems (CPSs) are characterized by the fact that a computer system works together with, or rather controls, a physical environment. A specific characteristic of such control systems is the necessity to provide short reaction times to events in the physical world, to guarantee a good control quality. However, to properly design a CPS, the reaction times not only need to be fast, but also predictable [6]. Both properties are likewise essential for modern systems, such as tele-operated driving [11], and classic systems, such as the control of internal combustion engines [5].

An important, but often neglected aspect of the achievable reaction time is the interrupt handling performance, in both dimensions: the interrupt handling latency and the throughput capabilities of a system. Especially the effect of the utilized software stack has not yet been comprehensively assessed, even though the development of key components of the CPS software stack, such as operating systems, virtualization solutions, and communication stacks, could benefit from the insights of such an assessment. Furthermore, such a systematic evaluation of the interrupt performance would also be of interest for system engineers, to evaluate architecture concepts or acquired software stacks for their suitability in a particularly latency-sensitive or throughput-hungry use-case. Previous approaches in this area require a very detailed knowledge of the utilized hardware and are therefore not easy to apply [10].

We present a flexible interrupt performance measurement method that can be applied to a wide variety of hardware platforms and software systems. As we see an increasing share of ARM based systems used in CPSs [8], we have chosen the ARMv8 hardware platform and its on-chip tracing technologies as the basis for our measurement method. Based on the measurement method, we make a first attempt to specify benchmark functions that provoke the minimal and maximal interrupt latency, as well as the maximal interrupt throughput, achievable with a system under consideration.

To deduce appropriate benchmark functions, we designed seven independent test-cases, specifically tailored to stress individual parts of the ARM interrupt handling process. We evaluated those test-cases on a Xilinx Zynq UltraScale+ MPSoC ZCU102 evaluation board [12], by comparing the interrupt performance of two different software stacks: a minimal bare-metal application and a full FreeRTOS real-time operating system based stack. Out of the seven test-cases, we formed 10 distinctive combinations and assessed them as potential benchmark functions. Based on this assessment, we propose three benchmark functions to trigger the minimal and maximal interrupt latency and the maximal interrupt throughput with a given software stack.

Our measurements confirmed that the chosen software stack indeed has a considerable effect on the interrupt handling performance of a CPS. Hence, we envision that the proposed evaluation method could be used to benchmark different software solutions against each other, to create a profound basis for further optimizations of the interrupt handling path in CPS software architectures.

The rest of this paper is structured as follows: Section 2 describes the interrupt handling process on ARMv8-A platforms with a Generic Interrupt Controller (GIC) version 2, and Section 3 continues with a presentation of the envisioned evaluation method, consisting of a measurement setup and procedure. Section 4 discusses the proposed test-cases and benchmarks along with the measured latency and throughput values. Section 5 concludes the paper.

§ 2 INTERRUPT HANDLING PROCEDURE ON ARMV8-A PLATFORMS

Müller and Paul [9] define an interrupt as an event that causes a change in the execution flow of a program sequence other than a branch instruction. Its handling process starts with the activation through a stimulus and ends with the completion of the interrupt service routine (ISR), which is called in consequence and processes the stimulus. Until the ISR is executed, several steps are undergone in hardware to cope, for example, with simultaneously arriving interrupt requests (IRQs) and the masking of certain requests. In the following, we explain this process for ARMv8-A platforms, as specified in the GIC architecture specification version 2 [1]. In Section 4 this information is used to design suitable test-cases.

The GIC architecture specification differentiates among four types of interrupts: peripheral, software-generated, virtual, and maintenance interrupts. In the course of this paper we focus solely on measuring interrupts triggered by external stimuli, the peripheral interrupts. They can be configured as edge-triggered or level-sensitive. This means that the corresponding interrupt is recognized either once, on a rising edge of the input event signal, or continuously, as long as the signal has a certain strength.

Figure 1: Selection and signaling process of the Generic Interrupt Controller (GIC) for handling triggered interrupt requests (IRQs), executed repeatedly per CPU interface.

Figure 2: Overall steps of the interrupt handling process on ARMv8 platforms with a Generic Interrupt Controller (GIC) version 2 and an ARMv8-A architecture profile core.

The GIC supervises the overall interrupt routing and management process up to the point that the ISR is called. Figure 1 shows the GIC architecture and the signaling path for the selected measurement hardware, the Xilinx UltraScale+ MPSoC (ZynqMP). The GIC manages all incoming event signals of the system and consists of a central Distributor and a CPU interface per processor core. The Distributor manages the trigger-type of each interrupt, organizes their prioritization, and forwards requests to the responsible CPU interface. The CPU interfaces perform the priority masking and preemption handling for their associated core.

The timeline of the various steps in the interrupt handling process is illustrated in Fig. 2. The handling process of a certain IRQ begins with the arrival of an event signal at the Distributor (step 1). In case the signal matches the configured trigger type, the actual handling process is triggered through the recognition of a new IRQ (step 2), which eventually leads to the execution of the ISR (step 9).

After being recognized (step 2), the Distributor may select and forward the IRQ to the responsible CPU interface according to the process depicted in the upper half of Fig. 1. This selection process (step 3) is executed repeatedly and potentially in parallel for each CPU interface. When the next highest priority pending interrupt (HPPI) is identified, the Distributor forwards the request to the currently inspected CPU interface (step 4). The CPU interface filters incoming requests according to its configuration; the lower half of Fig. 1 depicts this filtering process. As a result, the CPU interface may signal a pending IRQ to its associated core (step 5).

Figure 3: Chosen measurement setup, with four PL2PS interrupts generated by the programmable logic (PL) according to the configuration signaled by the processing system (PS) via its extended multiplexed I/O interface (EMIO). The generated interrupts, executed instructions, and global timestamps are recorded through the system trace macrocell (STM) and embedded trace macrocell (ETM). The captured trace is read via an ARM DSTREAM unit.

Subsequent to the signaling of a new IRQ through the CPU interface, the handling process continues in the core. In case of the ARMv8-A architecture [4], the core acknowledges the signaled IRQ by reading its id (step 6), which marks the IRQ as active within the GIC (step 7). Meanwhile, the processing core jumps to the vector table entry indicated by the id of the current IRQ (step 8), which finally leads to the execution of the ISR (step 9). Individual software stacks might add additional steps before the ISR finally starts.

An important factor in the described IRQ handling process is the priority of the interrupt id associated with a specific IRQ, where a lower number indicates a higher priority. Besides the regular interrupt requests described above, the GIC architecture version 2 also supports the prioritized and more secure fast interrupt requests (FIQs); however, these lie outside the focus of this paper.

Besides the configured priorities and masks, the GIC internally also tracks already signaled and currently serviced IRQs through an internal state-machine, such that multiple IRQs of the same type can be signaled and handled simultaneously.

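
To make the described selection and filtering steps concrete, the following toy model (our own illustration in Python, not ARM reference code; all names and values are ours) mimics how the Distributor would pick the HPPI among the pending, enabled interrupts, and how a CPU interface would decide whether to signal it to its core:

```python
# Toy model of the GIC selection (Distributor) and filtering (CPU interface)
# steps described above. Illustrative only -- not ARM reference code.
# Convention from the text: a lower priority number means a higher priority.

def select_hppi(pending, enabled, priority):
    """Distributor step: return the highest priority pending interrupt id."""
    candidates = [irq for irq in pending if irq in enabled]
    return min(candidates, key=lambda irq: priority[irq], default=None)

def signals_core(hppi, priority, priority_mask, running_priority):
    """CPU-interface step: signal only if the HPPI passes the priority mask
    and preempts the priority of the IRQ currently being serviced."""
    return (hppi is not None
            and priority[hppi] < priority_mask
            and priority[hppi] < running_priority)

# Example: IRQ 42 at priority 0 beats IRQ 7 at priority 3 and is signaled.
prio = {7: 3, 42: 0}
hppi = select_hppi(pending={7, 42}, enabled={7, 42}, priority=prio)
assert hppi == 42
assert signals_core(hppi, prio, priority_mask=8, running_priority=15)
```
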
§ 3 PROPOSED EVALUATION METHOD

With respect to the interrupt performance of a certain platform and software stack, two properties are most relevant and need to be measured: the IRQ latency and throughput. Considering the overall interrupt handling process depicted in Fig. 2, we define the latency of an IRQ as the time between a newly signaled event (step 1) and the moment when the corresponding ISR is called (step 9). In case a software stack adds additional processing steps to the process, an appropriate alternative end-point shall be identified. As throughput we understand the number of completed ISRs per time unit.

Depending on the hardware platform and software stack under analysis, the above figures may be influenced in variable proportions by software and hardware. This is especially true for the IRQ latency. However, from the system design point of view, reliable bounds for the overall latency between the signaling and processing of an interrupt are more important than knowing the actual proportion. Thus, we measure the whole interrupt handling process end-to-end.

Nonetheless, it is important that the measurements do not include any intangible factors. Hence, we base our measurements on the hardware tracing functionality provided by the ARM CoreSight technologies [2], which allows to precisely measure and correlate processor internal and external events in a common time domain.

§ 3.1 MEASUREMENT SETUP

Our measurements utilize a Xilinx Zynq UltraScale+ MPSoC ZCU102 evaluation board [12], with an ARM DSTREAM [3] hardware trace unit, and the prototyping platform []. We have chosen the ZynqMP, as it features a seamlessly integrated programmable logic (PL) and versatile hardware tracing capabilities. Figure 3 illustrates the hardware side of the chosen setup.

The ZynqMP is divided into the PL and the processing system (PS). The PS provides two processor clusters, the application processing unit (APU) and the real-time processing unit (RPU). The APU consists of four ARM Cortex-A53 cores and the RPU of two ARM Cortex-R5 cores, both equipped with their own interrupt handling infrastructure. In the following we will solely focus on the APU and its interrupt handling, as it most closely matches our definition of a commercial off-the-shelf (COTS) hardware platform.

The in-built tracing capabilities of the ZynqMP allow to record events, such as taken branches in the executed code, recognized external interrupt stimulation, and special software driven events, on a common timeline that is given by global system-wide timestamps. The overall recording process is conducted by the system trace macrocell (STM) and embedded trace macrocells (ETMs) in hardware, and thus does not alter the behavior of the executed code, neither in a temporal nor in a semantic sense. Software driven events, however, are recorded by writing into special registers and thus marginally influence the timing of the executed code.

In addition to the built-in hardware features, we deploy a custom interrupt generation block in the PL. The block simultaneously stimulates the APU and the STM with four signal lines, to capture each interrupt signal within the system trace. The stimulation scheme can be individually defined by setting two 32 bit counters. One controls the duration of the logical-high phase (On-Count) and the other that of the logical-low phase (Off-Count). Given our PL clock configuration, both counters allow to specify a duration between 0 s and 17.1798 s.

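
As a sanity check on this range: with the 250 MHz PL clock stated in Section 3.3, one counter tick lasts 4 ns, so a full 32 bit counter spans roughly 2^32 x 4 ns ≈ 17.1798 s. A small helper sketch (our own, not part of the platform code) for converting between phase durations and counter values:

```python
# Convert between stimulation-phase durations and the 32-bit On-/Off-Count
# values of the interrupt generation block; illustrative helper only.
PL_CLOCK_HZ = 250_000_000  # 250 MHz PL clock -> 4 ns per counter tick

def duration_to_count(seconds: float) -> int:
    count = round(seconds * PL_CLOCK_HZ)
    if not 0 <= count < 2**32:
        raise ValueError("duration outside the 32-bit counter range")
    return count

def count_to_duration(count: int) -> float:
    return count / PL_CLOCK_HZ

# Maximum representable phase length: (2**32 - 1) ticks * 4 ns = 17.1798 s.
assert abs(count_to_duration(2**32 - 1) - 17.1798) < 1e-3
```
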
In parallel to the STM trace, the ETMs capture, among other things, indirect and direct branch instructions executed by the four cores in the APU. As mentioned before, all STM and ETM trace events are marked with global timestamps. We configure the ZynqMP to trigger the timestamp counter every 4 ns.

§ 3.2 MEASUREMENT PROCEDURE

Given the above measurement setup, we propose two configurations for the interrupt generation block and two time measurements based on the timestamps of the captured trace events: one for throughput and one for latency measurements. Within both measurement types, we utilize core 0 of the APU as measurement core and the other cores of the APU for stimulation purposes, where needed. All measurements are conducted with two different software stacks: (i) bare-metal and (ii) FreeRTOS based systems.

Throughput. In case of a throughput evaluation, we configure the interrupt generation block to continuously signal the PL2PS interrupts for 9.75 s and then wait for another 250 ms, on a rotating basis. We call each repetition of this pattern a stimulation phase. Core 0 is configured to handle the signaled private peripheral interrupt (PPI) on a level-sensitive basis, and the corresponding ISR does nothing besides emitting an STM software event, i.e., executing an 8 bit store instruction. Hence, the time spent in each ISR, and with that the possible reduction of the maximum throughput, is negligible.

For each throughput measurement we capture a 120 s trace and evaluate the contained stimulation phases. The throughput $\mu(i)$ in each stimulation phase $i \in [0, 19]$ is obtained from the traced measurement samples by counting the ISR generated STM software events between the rising-edge STM hardware events of the $i$-th and $(i+1)$-th stimulation phase and dividing the count by the length of one logical-high phase. The set of throughput values considered for the evaluation in Section 4 is then given by $M = \{\mu(i) \mid i \in [0, 19]\}$.

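
For illustration, a sketch of this computation over already-decoded trace timestamps (the record layout and names are our assumptions, not the published processing scripts):

```python
# Sketch of the throughput computation over decoded trace timestamps; the
# record layout and variable names are our assumptions, not the published
# processing scripts. All timestamps are in seconds on the common timeline.

HIGH_PHASE_S = 9.75  # logical-high phase length of one stimulation phase

def throughput_per_phase(rising_edges, isr_events):
    """rising_edges: sorted STM hardware-event timestamps marking the start
    of each stimulation phase; isr_events: sorted STM software-event
    timestamps emitted by the ISR. Returns M = [mu(0), mu(1), ...]."""
    M = []
    for i in range(len(rising_edges) - 1):
        start, end = rising_edges[i], rising_edges[i + 1]
        count = sum(1 for t in isr_events if start <= t < end)
        M.append(count / HIGH_PHASE_S)  # completed ISRs per second
    return M
```
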
Latency. The latency evaluation is conducted with an alternating scheme of a 1 ms PL2PS interrupt trigger and a 4 ms pause. The interrupt generation block is configured accordingly. Again, we refer to each repetition of this scheme as a stimulation phase. In contrast to the throughput measurement, however, core 0 is configured to handle the signaled PPI on a rising-edge basis. Thus, every stimulation phase provokes only one interrupt. The corresponding ISR is the same as for the throughput measurements.

The results for the latency measurements are obtained by evaluating 30 s trace captures. The interrupt latency $\Delta t_{\text{latency}}(i)$ induced by each stimulation phase $i \in [0, 2399]$ is given by $\Delta t_{\text{latency}} = B - A$, with $A$ representing the point in time where the interrupt was stimulated and $B$ the point where the corresponding ISR was started. Both points can be obtained from the captured trace. $A$ is given by the timestamp of the STM hardware event associated with the rising edge of the PL2PS interrupt signal. $B$, on the other hand, has to be determined and defined for every analyzed software stack individually. In the course of this paper we utilize the timestamp of an STM event generated within the interrupt handler of our benchmark application that runs on top of the evaluated software stacks.

Similar to the throughput values, the set of latency values considered in Section 4 is given by $X = \{\Delta t_{\text{latency}}(i) \mid i \in [0, 2399]\}$.

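
A matching sketch for the latency extraction, under the same assumptions about the decoded trace:

```python
# Matching sketch for the latency extraction: pair each rising-edge stimulus
# (point A) with the first ISR-generated STM event (point B) that follows it.
# Same assumptions about the decoded trace as in the throughput sketch.
import bisect

def latencies(rising_edges, isr_events):
    """Return X = [B - A for each stimulation phase], in seconds."""
    X = []
    for a in rising_edges:
        j = bisect.bisect_right(isr_events, a)  # first ISR event after A
        if j < len(isr_events):
            X.append(isr_events[j] - a)  # delta-t latency = B - A
    return X
```
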
§ 3.3 PRECISION AND LIMITATIONS

In our measurement setup, we configure the PL, trace port, and timestamp generation clock to oscillate at 250 MHz. Hence, two consecutive timestamp ticks lie 4 ns apart from each other. Since each sampled event in the ETM and STM is assigned a timestamp, our measurement precision corresponds exactly to the system timestamp resolution, i.e., 4 ns. This is an order of magnitude smaller than the interrupt latency measured in a previous study for the same hardware platform [] and a quarter of the measured minimal interrupt latency of an ARM real-time core [10].

Even though state-of-the-art oscilloscopes provide a sampling rate of up to 20 GSa/s [7], which would correspond to a measuring precision of 0.05 ns, the actual measurement precision in case of interrupt latency measurements might be considerably lower. The reason for this is that an oscilloscope can only measure external signals of a processor. Thus, an in-depth knowledge of the internal structure of the hardware platform and of the instructions executed in the software stack during a measurement is required to counter this circumstance and utilize the full precision of the oscilloscope. This makes it less suited for the evaluation of different hardware platforms and software stacks. The CoreSight based measurement setup, on the other hand, supports a flexible placement of the measurement points within and outside of the processor and does not require any specific knowledge about the hardware or software.

Besides the measurement precision and flexibility, we also need to ensure that the presented measurement setup is sane and that triggered interrupts can actually be recognized by the processor. According to the ZynqMP technical reference manual [13, p. 312], a signal pulse that shall trigger a PL2PS interrupt needs to be at least 40 ns wide to guarantee that it is recognized as such. Hence, the presented stimulation scenarios for the two measurement procedures ensure that all triggered interrupts can be recognized.

The disadvantage of the presented measurement approach, however, is that it is only applicable to ARM based platforms. Given ARM's 40% share of the semiconductor market for IP designs [8], we believe this is an acceptable drawback. An additional limitation is that valid measurements can only be obtained for the interrupt with the highest priority among the active ones, but this limitation applies to measurement setups of any kind.

§ 4 CONSTRUCTING A BENCHMARK

In order to create a benchmark for comparing the interrupt latency and throughput across platforms and software stacks, we have designed seven test-cases specifically tailored to stress the ARMv8-A interrupt handling process. To judge the suitability of the individual test-cases for an overall benchmark, we evaluated all of them with two software stacks running on top of the ZynqMP. As software stacks, we utilize a bare-metal system that executes the ISRs and otherwise busy-waits in an endless loop, and a FreeRTOS based system with its own scheduler and timer tick that executes the same ISR routine. In Section 4.1 we evaluate the impact of each test-case by comparing the performance obtained with the test-case against the baseline performance of the two systems.

Given the results of the individual test-cases, we compose three benchmarks out of them and show their suitability by applying them to the same system configurations.

§ 4.1 EVALUATED TEST-CASES

Given the interrupt handling process in Section 2, we conclude that the time spent in the process can be influenced by: the core, caches, memory, and GIC. We have designed seven test-cases that aim to reveal the influence of different configuration settings related to the aforementioned components on the temporal behavior of the interrupt handling process. However, we exclude the core from our considerations by only measuring interrupts with the highest priority. An overview of the proposed test-cases and their targeted components is given in Table 1. The remainder of this section elaborates on the intended influence of the listed test-cases on the interrupt handling process. The measurements for all test-cases follow the scheme presented in Section 3, unless indicated otherwise. Depending on the goal of each test-case, they are either applied only to latency measurements or to both latency and throughput measurements. The reason for that is that we aim at measuring the best and worst possible interrupt latency, but only the maximal possible interrupt throughput. The results of all evaluation runs are summarized in Figs. 4 and 5. The presented results are based on 848-6000 measurement samples per latency measurement and 11-12 samples per throughput measurement.

T1: Baseline. T1 is intended to provide a reference point to compare the other test-cases to and rate their impact. Hence, T1 assesses the interrupt latency and throughput of a system in the most isolated way, with only one core and one interrupt enabled and caches disabled. T1 only enables the extended multiplexed I/O interface (EMIO) pin driven interrupt and routes it to core 0. As ISR, the default handler described in Section 3.2 is used. T1 is evaluated for its latency and throughput performance.

T2: Caches enabled. T2 equals T1, with the exception that all operations are executed with enabled caches. This test is conducted for both latency and throughput measurements.

T3: Caches invalidated. T3 is also based on T1, but the ISR additionally invalidates the data and instruction caches. Since this is not feasible in throughput measurements, as new interrupts would arrive independently of the cache invalidation process, we conduct only latency measurements with T3.

T4: Enabled interrupts. T4 aims at stressing the GIC with the highest possible number of enabled interrupts, as the interrupt selection and signaling process illustrated in Fig. 1 suggests that more checks have to be done the more interrupts are enabled/pending. Hence, this test-case enables up to 180 stressing interrupts, the maximum number supported by the ZynqMP, except those required for driving the measurements itself. All interrupts are routed to and handled by core 0. The EMIO triggered PL-to-PS interrupt is assigned the highest priority and all other interrupts the lowest priority.

Core 0 installs an empty ISR that immediately returns after clearing the IRQ in the GIC for all interrupts, except the EMIO pin based PL-to-PS interrupt, which uses the same ISR as T1.

As this test aims at stressing the GIC to reduce its performance, we only evaluate it with respect to the interrupt latency. To be able to identify trends, we evaluated this test-case with 1, 36, 72, 108, 144, and 180 stressing interrupts. However, due to the marginal differences between the results of the different T4 variants and space constraints, we only show the results of T4-180, i.e., T4 with 180 stressing interrupts, which provoked the highest latency.

T5: Order of priorities. T5 utilizes the same setup as T4 and is also applied to latency measurements only. However, in contrast to T4, T5 only utilizes as many interrupts as there are priorities, i.e., 15. The measured interrupt remains at priority 0 and the priorities of the other 14 are assigned in ascending order (i.e., 14 to 1). This design intends to provoke a maximal number of HPPI updates.

Table 1: Properties of the evaluated test-cases and benchmarks used to compare the interrupt latency (L) and throughput (T).

| Description | Targeted Component | Measurements | Enabled Interrupts | Cache Config | Enabled Cores | Benchmarks |
| --- | --- | --- | --- | --- | --- | --- |
| T1: Baseline | - | L, T | 1 | Disabled | 1 | - |
| T2: Caches enabled | Cache | L, T | 1 | Enabled | 1 | B-L$_{\min}$, B-T$_{\max}$ |
| T3: Caches invalidated | Cache | L | 1 | Invalidated | 1 | - |
| T4: Enabled interrupts | GIC | L | 2-181 | Disabled | 2 | B-L$_{\max}$ |
| T5: Order of priorities | GIC | L | 15 | Disabled | 2 | - |
| T6: Parallel interrupt handling | GIC | L, T | 1 | Disabled | 2, 3, 4 | B-L$_{\max}$, B-T$_{\max}$ |
| T7: Random memory accesses | Memory | L | 1 | Disabled | 4 | - |

Figure 4: Latency measured with T1-T7 (a) and B-L$_{\min}$ and B-L$_{\max}$ (b-c). Figure a) uses a symlog scale with a linear threshold of 2496 ns, Fig. b) uses a symlog scale with a linear threshold of 240 ns, and Fig. c) uses a linear scale.

Figure 5: Throughput measured with T1, T2, T6, and B-T$_{\max}$. Figure a) compares the median of all measurements on a linear scale and Fig. b) illustrates the measured throughput ranges on a symlog scale with a linear threshold of 1 Hz, normalized to a 500 kHz range around the highlighted median.

T6: Parallel interrupt handling. To test the influence of interrupts handled in parallel on the interrupt handling process, T6 enables up to 4 cores and configures all of them to handle the EMIO pin 0 interrupt. The interrupt is configured as level-sensitive with the highest priority. The PL ensures that this interrupt is signaled continuously and simultaneously as soon as the test is enabled. The ISRs on all cores generate an STM event, which are evaluated for throughput measurements. In case of latency measurements, however, only those STM events produced by core 0 are considered.

We evaluated T6 with 2, 3, and 4 enabled cores. The results showed a clear trend: the more enabled cores, the higher the observed latency, but the lower the achieved throughput. Due to space constraints we thus only show the results for T6-4, with 4 enabled cores, in case of the latency considerations, and T6-2 in case of the throughput measurements.

T7: Random memory accesses. As pointed out earlier, the shared memory and interconnecting buses of a multi-core processor represent a major source of unforeseen delays. Accordingly, T7 is designed to delay memory accesses by overloading the interconnecting bus and memory interface. For this purpose, all 4 cores run random, concurrent memory accesses in the form of constants that are written to random locations in a 96 MB large array. In parallel, core 0 executes the standard latency test. Throughput evaluations are not considered with this test-case, as it aims to delay the interrupt handling process.

§ 4.2 PROPOSED BENCHMARKS

Analyzing the measured interrupt performance under the different test-cases, shown in Figs. 4 and 5, we conclude, first of all, that different setups and software stacks indeed considerably influence the interrupt handling performance. All three targeted components provoke a considerable effect on the interrupt latency and throughput. Particularly noticeable are the differences between the test-cases with enabled (T2, T3) and disabled caches (T1, T4-T7), for both the observed latency and throughput, as well as the effects of stressing the GIC on the measured latency (T4-T6).

T2 produces by far the shortest interrupt latency of 232 ns on average, with only a few outliers. Hence, we propose to utilize T2 as the benchmark for the minimal achievable latency (B-L$_{\min}$).

To obtain a suitable benchmark for the maximal latency, we analyzed all combinations of the test-cases T4-36, T4-144, T6-3, and T7. Except for the combination of T6 and T7, all tested combinations showed a similar performance with only slight differences. A further exception is the interrupt latency of the combination of T4-144 and T6 on FreeRTOS, which is considerably more confined than all other observed ranges. The highest latency is achieved with a combination of T4-36 and T6; however, the combination of T4-36, T6, and T7 comes close. Accordingly, we propose to use the combination of T4-36 and T6 to benchmark the achievable maximal interrupt latency (B-L$_{\max}$).

For the maximal throughput benchmark (B-T$_{\max}$) we evaluated all four variants of the T6 test-case with enabled caches (T2). Interestingly, the enabled caches seem to mitigate the effect of more enabled cores, as all combinations showed a similar throughput. However, the combination of T6-2 and T2 still performed best. Even though the maximal achieved throughput of the combined test-cases lags a little behind that of T2 alone in case of the bare-metal software stack, it outperforms T2 by far in case of the FreeRTOS based stack. Hence, we propose the combination of T6-2 and T2 to benchmark the maximal throughput of a system.

§ 5 CONCLUSION AND OUTLOOK

We presented a flexible evaluation method based on the ARM CoreSight technology, which enables the assessment of various software stacks on top of commodity ARMv8-A platforms with respect to their interrupt handling performance. Utilizing the evaluation method, we crafted seven specifically tailored test-cases that were shown to stress the ARM interrupt handling process. Out of these test-cases we deduced three benchmark functions, tailored to provoke the minimal (B-L$_{\min}$) and maximal (B-L$_{\max}$) interrupt latency, and the maximal throughput (B-T$_{\max}$), of a given software stack. We validated the test-cases and benchmark functions by comparing two software stacks (a simple bare-metal and a FreeRTOS based environment), measuring them on top of a Xilinx Zynq UltraScale+ MPSoC ZCU102 evaluation board [12].

Our measurements showed that different software stacks can have a considerable impact on the interrupt handling performance of a hardware platform. Hence, we hope to draw some attention to the importance of good software design for CPSs with respect to interrupt processing, and to the need for a more profound analysis of how interrupt handling processes can be made more predictable with respect to the achievable latency and throughput.

§ ACKNOWLEDGMENTS
papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/H6a78knAFnY/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,261 @@
# FlockLab 2: Multi-Modal Testing and Validation for Wireless IoT

Anonymous Author(s)

## ABSTRACT

The development, evaluation, and comparison of wireless IoT and cyber-physical systems require testbeds that support the inspection of logical states and accurate observations of physical performance metrics. We present FlockLab 2, a second-generation testbed that supports multi-modal, high-accuracy, and high-dynamic-range measurements of power and logic timing, and at the same time exposes the in-situ debug and trace infrastructure of modern microcontrollers, allowing for reproducible evaluation and benchmarking. We detail the architecture, provide a characterization, and demonstrate the interface, the supported services, and the tools of the FlockLab 2 testbed.

Data Availability Statement. The hardware design and the software for server and observer of the presented testbed architecture and the data for the plots in this paper are openly available at XXX.

## 1 INTRODUCTION

The ever-increasing complexity and care for detail that must be mastered in developing state-of-the-art distributed networked embedded applications require modern and adequate tool support for experimentation. In scaling to large distributed applications, simulations can help but cannot replace experiments on real hardware. Simulation always implies simplifications, which are significant especially at the hardware level. The latest microcontrollers and radios used in wireless Internet of Things (IoT) applications feature numerous power modes that need to be accurately fine-tuned and orchestrated for efficiency. The interaction between peripherals and the system core needs to be well-understood and validated for reliable operation down to the instruction level. Timing needs to be controlled at the application as well as the driver level up to the speed of light, as recent work on network protocols incorporating the time-of-flight of radio signals has shown [10].

The development of embedded software is commonly based on state-of-the-art debug and trace infrastructure integrated into the hardware of modern microcontroller architectures [15]. This in-situ infrastructure is supported by a multitude of development tools that can be used on the user's desk and also remotely. Today, such tooling is limited to a single device-under-test (DUT), severely limiting the capabilities to develop and test algorithms and systems for distributed wireless IoT devices. It is exactly this distributed nature of many devices, coupled over variable wireless channels and directly influenced by the embedding environment, that is known to make designing, implementing, and validating IoT and cyber-physical systems a challenging task. Therefore, distributed testbeds with representative hardware deployed in a real environment are widely used. Such testbeds allow (1) the reuse of the testing infrastructure, (2) controlled and reproducible testing and validation, and (3) the comparison of different implementations on a common platform (benchmarking). A number of testbeds exist that support a subset of the aspects mentioned above that are required for contemporary software development and evaluation for wireless IoT devices. An overview of existing testbeds and their capabilities, as well as our remarks on 8+ years of testbed development and operation, is provided in Sec. 2. However, none of the existing testbeds supports combined native in-situ debug and trace infrastructure, accurate timing measurements, as well as the detailed assessment of power consumption over a large dynamic range.

Figure 1: FlockLab 2 observer with 4 target slots.

In this work, we present a versatile testbed with capabilities addressing the aforementioned requirements. In jointly addressing challenges in power measurement, timing, and functional correctness, based on native, in-hardware debug and trace functionality integrated at testbed scale, this work takes the methodological aspect of developing IoT and cyber-physical systems to the next level. The integration of native hardware-based debug and real-time tracing into every observer node allows full testbed-wide access to the ARM real-time debug and trace collection infrastructure (CoreSight [12, 15]) at the program execution level. Access is provided remotely, in exactly the same manner as on a single developer's desk, to all devices-under-test. This alleviates the need for inflexible instrumentation of the software code run on a DUT as well as for invasive run-stop debugging. In addition, the testbed integrates high-fidelity power profiling at nA resolution, dynamic control of the power supply, and highest-precision tracing and actuation of a set of DUTs based on Global Navigation Satellite System (GNSS) time synchronization. The testbed features a well-defined and open interface for test creation and test result fetching. A Python-based library and command line tool provides support for automated test management and visualization. This paper contains the following contributions:

- It proposes a testbed architecture that combines state-of-the-art debug and trace capabilities with accurate high-dynamic range measurements and actuation.

- Characterization of the system implementation.

- Demonstration of the capabilities of FlockLab 2 in a case study.

- Open-source hardware design and software source code.

Sec. 2 gives an overview of the testbed landscape. In Sec. 3, we discuss the design of FlockLab 2 and characterize its implementation. In Sec. 4 we demonstrate the capabilities of FlockLab 2.

## 2 PAST EXPERIENCE AND RELATED WORK

The design of FlockLab 2 is heavily influenced by 8+ years of experience in developing and operating the FlockLab 1 testbed [8]. This testbed was based on the very successful target-observer model [8, 11] with multi-modal capabilities to monitor and influence devices-under-test at very high precision and fine-grained resolution. The FlockLab 1 testbed has been operated publicly since 2012. It ran over 70,000 tests by more than 370 users from more than 130 institutions in 30 countries. In addition, the testbed has been used by students in hands-on courses and many student projects.

Over time, a number of extensions based on the original concept of FlockLab 1 have been implemented [9, 10], calling for a revisit of the original concept with improved performance figures. Existing competitor testbeds each provide interesting features. However, none of them combines all three capabilities: (1) in-situ debug and trace, (2) high-dynamic range power profiling, and (3) accurate timing. In the following, we give an overview of the current testbed landscape.

TWIST [6] and Indriya2 [2] are both based on USB interconnects. Therefore, they provide neither elaborate debug and trace features nor accurate observations of hardware behavior like precise timing or power. On TWIST, the power supply can be controlled by turning the USB interface to the targets on or off. Furthermore, a hierarchical back-channel using USB and Ethernet allows scalability.

The D-Cube [14] testbed focuses on benchmarking wireless protocols in pre-defined scenarios, with a technique to embed test parameters directly in the software for the DUT, control RF interference, and automate the publication of test metrics. In addition to serial logging, it supports setting and tracing GPIO pins and allows power consumption measurements. However, it does not support the use of the debug and trace capabilities of modern MCUs.

FIT IoT-Lab [1] supports a wide range of sensor nodes (MSP430 to ARM Cortex-A8) at many different locations. Basic debug and trace based on JTAG and monitoring of power consumption are supported. Furthermore, the testbed supports injecting and sniffing radio packets and monitoring on a single-frequency RF channel. To the best of our knowledge, it does not support accurate timing for control and measurements.

Shepherd [5] focuses on recording and replaying energy harvesting power traces for research in batteryless IoT devices. The architecture supports basic debugging and GPIO tracing. Power measurements are supported up to 50 mA, which is limiting for modern long-range radios with high transmit power. Currently, there is no publicly available instance of the Shepherd testbed.

## 3 A REAL-TIME TRACING ARCHITECTURE

An IoT testbed needs to support multi-modal distributed interaction and tracing. We identify the following key requirements for a state-of-the-art testbed for wireless IoT devices:

- Support for native debug and trace infrastructure.

- Accurate and high-dynamic range power measurements (sub-µA sleep current up to radio TX current of 170 mA).

- High-precision timing (sub-µs accuracy) across the distributed testbed.

The FlockLab 2 testbed architecture consists of a testbed server hosting data services and the web interface, a set of distributed observers carrying the instrumentation and providing connectivity, and the devices-under-test (DUTs), also termed the target devices. In FlockLab 2, multiple targets, typically manifested by different sensor node architectures, are supported on each observer system. Each target device is connected to the observer hardware using a multiplexer crossbar, allowing a user to select a distinct target hardware architecture without physical intervention (see Fig. 2). This multiplexing allows running tests on different target device architectures physically collocated, e.g., to compare different radio architectures side-by-side. The independent and stateful observer, which stores tracing data locally, allows for a strong coupling between observer and target (see Fig. 3). This enables the highest accuracy and throughput of the DUT instrumentation, especially when comparing to the direct out-of-band back-channels of early testbed architectures [17].

Figure 2: FlockLab 2 observer architecture.

The basic services of FlockLab 1 are continued: actuation and tracing of the serial port and GPIO pins, target programming, power tracing, and an adjustable target supply voltage. The new system supports a native in-situ debug and trace service and generally a higher fidelity of the aforementioned basic services, as well as an extended testbed layout also covering wide-area distances [16]. The instrumentation for measuring the power consumption has been significantly improved: nA current measurement resolution, peak power up to 500 mA, a sampling rate of 64 kHz, and electrical isolation of target devices so as to not perturb low-power measurements.

### 3.1 Observer Instrumentation Platform

Each observer consists of a Linux host system, a main board, and several target adapter boards hosting up to four different target devices. The four target slots are connected using a multiplexing unit that routes all signals on the observer main board.

3.1.1 Linux Single-Board Computer as Observer Host Platform. A standard Linux single-board computer (SBC), a BeagleBone Green, is used as the host platform for the decentralized stateful observers. All tracing data is recorded on the observer in order to alleviate the inherent bottleneck to the testbed server. The BeagleBone Green SBC includes a single-core ARM Cortex-A8 processor and two single-cycle Programmable Real-Time Unit (PRU) co-processors for low latency tracing. It further features an Ethernet interface with integrated hardware support for network-based time synchronization (NTP/PTP), generic IO extensions, and local Flash memory.

3.1.2 Embedded Debug and Trace Integration. The key feature of the FlockLab 2 observer is an integrated Segger J-Link OB debug probe. This gives native access to the state-of-the-art ARM Cortex-M CoreSight debug and trace facilities which are built into modern systems-on-chip (SoC) [12]. This allows utilizing simple halting debug mode (where architectural state can be observed), single-step execution, breakpoint units, and Performance Monitoring Units (PMUs). CoreSight further provides an Embedded Cross Trigger mechanism to synchronize or distribute debug requests and profiling information across the SoC. Embedded Trace Macrocells (ETM trace unit) or Program Trace Macrocells (PTM trace unit) allow tracing program execution at runtime, without instrumentation in the code that (i) alters program behavior and (ii) needs to be adapted for every single analysis step. The trace macrocells can either be captured using an on-chip trace buffer or accessed via the generic Serial Wire Debug (SWD) connection implemented on the J-Link debug probe acting as an off-chip trace port analyzer (see Fig. 3). Dedicated synchronization points and global 64-bit timestamps across the whole SoC architecture can be enabled in the tracing architecture to gain accurate temporal context of an application and its interaction with the underlying hardware at runtime. The debug and tracing architecture is implementation specific and can be found in the respective microprocessor documentation.

Figure 3: Tracing instrumentation based on the in-situ debug and trace unit and external capabilities.

The use of SWD on FlockLab 2, with the SWDCLK and SWDIO signals as well as a dedicated Serial Wire Output (SWO) trace port, is a tradeoff between bandwidth and pin count. By using the data buffer exchange capabilities of the debug probe, e.g., with the Segger Real Time Transfer (RTT) software technology that can be easily integrated with user code, high-speed, low-impact data transfer from the target to the observer is supported. For example, this allows more efficient printf()-style logging on the target.

Besides the native hardware support for debug and trace, the main advantage lies in the ability to connect all standard developer tools, allowing interactive debug sessions on the testbed directly from a developer's IDE, or to use ready-made tooling for automating tasks. A lot of time in sensor networks and IoT research has been spent on developing custom tooling and a host of scripts incompatible with industry standard debugging and profiling tools.

3.1.3 Power Tracing. Highly accurate and high-dynamic range power profiling is based on the RocketLogger embedded measurement device [13]. It combines two measurement principles using seamless autoranging: a shunt ammeter (high current range) as well as a feedback ammeter (low current range) provide the necessary precision and range to measure the sleep currents (sub-µA) and peak power consumption (100s of mA) of modern radios. The measurement circuit allows measuring the current through as well as the voltage at the target. A careful electrical isolation of all target IO lines allows to accurately measure on the order of nA for the lowest power modes. Measurements are performed by precision analog-to-digital converters (ADCs) and periodically transferred to the single-board computer's storage via PRU0. To compensate for hardware and manufacturing variations, the voltage and current measurement circuit on each observer is calibrated before first use.

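
Conceptually, seamless autoranging amounts to preferring, per sample, the precise low-range reading unless it is saturated; a minimal sketch under an assumed switch-over threshold (not the actual RocketLogger implementation):

```python
# Conceptual sketch of seamless autoranging: per sample, prefer the precise
# feedback-ammeter (low range) reading unless it is saturated, otherwise fall
# back to the shunt-ammeter (high range) reading. The threshold below is an
# assumed placeholder, not the RocketLogger's actual switch-over point.
LOW_RANGE_FULL_SCALE_A = 2e-3  # hypothetical full scale of the low range

def merge_current_samples(i_low, i_high):
    """Merge simultaneously sampled low- and high-range current readings."""
    return [lo if abs(lo) < LOW_RANGE_FULL_SCALE_A else hi
            for lo, hi in zip(i_low, i_high)]
```
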
3.1.4 Serial Logging and Forwarding. Serial port communication via UART or USB is supported on each observer. This allows logging of simple printf() based console output and supports interactive communication with the target during tests by forwarding the serial port via TCP to the testbed user. To achieve high performance and accurate timing, logging is implemented in C and events are timestamped using the GNSS or PTP disciplined system clock.

3.1.5 Logic Actuation and Tracing. On each observer, 5 target GPIO pins can be captured and 2 target GPIO pins can be actuated. In FlockLab 2, logic tracing is implemented on the programmable real-time unit PRU1, which allows acquiring highly accurate timing trace data. Logic actuation is implemented by a Linux kernel module, and the actuation events are logged by the PRU based logic tracing as well.

3.1.6 Testbed-wide Time Synchronization. Accurate timing across all signals for tracing and actuation, on a single observer platform as well as across the whole testbed, is one of the most important success factors of a distributed IoT testbed. With recent advances in higher system clock rates and ever more timing critical behavior in advanced communication schemes [10], the requirements for accurate timing are on the sub-microsecond scale. Since this testbed is designated to support long-range communication, where observers might be distributed over kilometers, synchronization needs to work independently of other observers and their location.

Figure 4: Logging of the PPS signal alongside the target signals for accurate time synchronization.

The local Linux system time, referenced to UTC and disciplined by GNSS, serves as the time reference for all testbed services. The integrated GNSS receiver (u-blox M8) generates an accurate pulse-per-second (PPS) signal which is tracked in dedicated hardware timers on the single-board computer. For accurate timing of the logic tracing, the PPS pulse is logged alongside the target signals (see Fig. 4). A linear correction factor is calculated for each epoch $i$ and applied to the timestamp of each logic tracing event (numbered by $k$) once a test has completed:

$$
t_{\text{Global}}^{\text{event}}[i, k] = t_{\text{PRU}}^{\text{event}}[i, k] \cdot \frac{t_{\text{Global}}^{\text{pps}}[i+1] - t_{\text{Global}}^{\text{pps}}[i]}{t_{\text{PRU}}^{\text{pps}}[i+1] - t_{\text{PRU}}^{\text{pps}}[i]}
$$

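
A minimal sketch of this per-epoch correction (our own rendering of the formula above, assuming PPS timestamps captured in both time bases):

```python
# Sketch of the per-epoch linear correction above: map PRU timestamps of
# logic-tracing events to global time using the PPS timestamps captured in
# both time bases. Illustrative rendering of the formula, not observer code.

def correct_event_timestamps(t_pru_events, t_pru_pps, t_global_pps):
    """t_pru_events[i]: PRU timestamps of the events in epoch i;
    t_pru_pps / t_global_pps: per-epoch PPS timestamps in each time base."""
    corrected = []
    for i, events in enumerate(t_pru_events):
        slope = ((t_global_pps[i + 1] - t_global_pps[i]) /
                 (t_pru_pps[i + 1] - t_pru_pps[i]))
        corrected.append([t_event * slope for t_event in events])
    return corrected
```
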
|
| 98 |
+
|
| 99 |
+
Observers at locations with limited GNSS signal reception can use the Precision Time Protocol (PTP) as a fallback solution to discipline the Linux system clock. A prerequisite for this is a network infrastructure which fulfills the requirements of the PTP protocol. The PPS signal required for accurate logic tracing is based on Linux system time generated by a kernel module. Using hardware assisted timestamping at the PHY and MAC layer of the single-board computer's Ethernet interface allows to achieve synchronization accuracy in the order of $\sim {1\mu }\mathrm{s}$ for PTP [7] compared to $\sim {50}\mathrm{{ns}}$ for GNSS [10].
|
| 100 |
+
|
| 101 |
+
For the debug probe, incoming messages from the SWO trace port are timestamped using system time. The debug unit can be configured to export local timestamps via SWO. These can be converted to system time by applying a piece-wise linear regression similar to the correction factor for logic tracing described earlier.
3.1.7 Target Adapter. The target adapter is mainly a hardware adapter bridging form-factor and pinout. It may contain configuration options (e.g. jumpers) and extra debug pins depending on the target platform. Additionally, it contains a serial ID chip for automated identification of every target connected to an observer.
3.1.8 Power Generation and Reset. The target supply voltage is generated using a low-dropout (LDO) regulator controlled by a digital-to-analog converter (DAC) based reference voltage. This allows dynamic control of the target supply voltage. The target can be reset either by controlling the reset pin or by a full power-off-reset (POR). These power and reset capabilities enable testing under different and more realistic operating conditions.
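As a purely illustrative sketch of the supply control, assuming the 1.1 V - 3.6 V range and 13.5 mV resolution from Tab. 2 (the mapping to actual DAC register codes is hypothetical):

```python
V_MIN, V_MAX, V_STEP = 1.1, 3.6, 0.0135  # range and resolution from Tab. 2

def supply_code(v_target):
    """Quantize a requested target supply voltage to the nearest step."""
    if not V_MIN <= v_target <= V_MAX:
        raise ValueError("voltage outside supported supply range")
    return round((v_target - V_MIN) / V_STEP)

# e.g. supply_code(3.3) -> 163, i.e. 1.1 V + 163 * 13.5 mV ~ 3.30 V
```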
3.1.9 Target Programming. Programming of the target devices' microcontrollers is performed either via a bootloader (BSL) or via native Serial Wire Debug (SWD) for ARM-based devices.
### 3.2 Testbed Management and User Interface
3.2.1 Testbed Infrastructure. The testbed is orchestrated by a server which executes the scheduled tests, provides a MySQL database and storage space for test results, hosts the web interface, and exposes an API for automated test scheduling and fetching. The database stores test scheduling information and configuration as well as the current state of the hardware infrastructure (e.g. which target is connected to which observer).
3.2.2 API and Visualization. FlockLab 2 focuses on fully autonomous test execution but also supports live interaction. Tests are configured and scheduled using a single XML file. This test configuration file allows the user to (1) select the target platform and the testbed nodes, (2) enable and configure actuation and tracing services, and (3) include one or more program images which will then be flashed to the targets. Real-time interaction during test execution is supported via the serial communication service (read/write) and via a remote debug session using the integrated Segger J-Link debug probe, which allows setting breakpoints, halting execution, reading processor state, and retrieving data via SWO.
The results are stored on the server and can be downloaded as an archive file. Test management (creating and stopping tests, retrieving status information, downloading results) as well as an intuitive visualization of results is supported via the web interface or the flocklab-tools command-line tool executed on the user's computer.
### 3.3 Publicly Available Testbed
The FlockLab 2 testbed is implemented as a public service with currently 10 active observers (Fig. 1) distributed on the floor of an office building. Additional observer hardware will extend the testbed to 30+ nodes including remote long-distance rooftop locations [16]. The testbed can be publicly accessed via the website¹ where we also publish documentation, examples, the hardware design and software source code.
Currently, the four target platforms listed in Tab. 1 are available. Additional target platforms can easily be added thanks to the generic target interface.
<table><tr><td>Target</td><td>MCU</td><td>Arch.</td><td>Radio</td></tr><tr><td>Tmote Sky / TelosB</td><td>MSP430F1611</td><td>MSP430</td><td>CC2420, 802.15.4, 2.4 GHz</td></tr><tr><td>DPP2 CC430 [3]</td><td>CC430F5147</td><td>MSP430</td><td>CC430 SoC, CC1101-based, 868 MHz</td></tr><tr><td>DPP2 LoRa [3]</td><td>STM32L4</td><td>ARM M4</td><td>SX1262, LoRa/FSK, 868 MHz</td></tr><tr><td>nRF52840 Dongle</td><td>nRF52840</td><td>ARM M4</td><td>nRF52 SoC, 802.15.4/BLE, 2.4 GHz</td></tr></table>
Table 1: Target devices supported on FlockLab 2.
### 3.4 FlockLab 2 Observer Key Characteristics
We provide a characterization of the FlockLab 2 observer in Tab. 2.
<table><tr><td colspan="2">Target Power Supply</td></tr><tr><td>Voltage range</td><td>1.1 V - 3.6 V</td></tr><tr><td>Voltage resolution</td><td>13.5 mV</td></tr><tr><td>Max. current</td><td>500 mA</td></tr><tr><td colspan="2">Logic Actuation</td></tr><tr><td>Timing accuracy</td><td><100 µs (typ.)</td></tr><tr><td colspan="2">Power Tracing (see [13] for details)</td></tr><tr><td>Max. resolution / sampling rate</td><td>15.625 µs / 64 kHz</td></tr><tr><td>Voltage accuracy</td><td>0.37% + 4 mV</td></tr><tr><td>Current accuracy (low range, 0 - 2 mA)</td><td>0.01% + 60 nA</td></tr><tr><td>Current accuracy (high range, 2 - 500 mA)</td><td>0.02% + 48 µA</td></tr><tr><td colspan="2">Logic Tracing</td></tr><tr><td>Max. resolution / sampling rate</td><td>0.1 µs / 10 MHz</td></tr><tr><td>Max. burst event rate (≤ 2000 edges)</td><td>10 MHz</td></tr><tr><td>Max. continuous event rate (typ.)</td><td>900 kHz</td></tr><tr><td>Timing accuracy (GNSS)</td><td><0.25 µs (typ.)</td></tr><tr><td colspan="2">Serial Tracing</td></tr><tr><td>Max. continuous throughput</td><td>460 kbaud</td></tr><tr><td>Timing accuracy</td><td><10 ms (typ.)</td></tr></table>
Table 2: Characterization of the FlockLab 2 observer.
## 4 USING FLOCKLAB 2 IN PRACTICE
In order to demonstrate the capabilities of FlockLab 2 regarding features and performance, we discuss an example of network flooding based on synchronous transmissions [4] on an FSK/LoRa radio [3]. Building and scaling up a communication protocol based on synchronous transmissions requires very careful arbitration of all radio activities and extensive debugging, as transmissions and corresponding interrupts need to be correctly aligned. It is therefore a suitable example to showcase the capabilities of FlockLab 2. In this example, we use the DPP2 LoRa target which consists of an STM32L433 Cortex-M4 microcontroller and a Semtech SX1262 radio [3] with a 0.28 µA standby current and 7 µs wake-up latency.
### 4.1 Simple Synchronous Transmission Protocol
Gloria is an optimized multi-hop network flooding protocol based on Glossy [4]. As depicted in Fig. 5, all nodes synchronously retransmit a received message a pre-defined number of times (3 times in our example) in subsequent transmission slots. Contrary to Glossy, Gloria nodes listen for a message only once. For the setting used in this example, the relevant values are InitOverhead = 1.783 ms and SlotTime = 4.548 ms (see Fig. 5). Both values are fixed for each specific radio configuration (in this case FSK 250 kbit/s) and have been determined using the datasheet and measurements.

Figure 5: Gloria floods use concurrent re-transmission.
### 4.2 Testing Workflow
4.2.1 Creating a Test. First, the software for the target platform is compiled using the standard toolchain/IDE. Then, an XML test configuration file is created containing the nodes, images and platform to be used, the test duration, and the configuration of the actuation, tracing and debugging services. A minimalist example is shown in Listing 1. This file is then uploaded to the FlockLab 2 server using the web interface or the flocklab-tools. On the server, the test is then scheduled. The server initiates the start of the test at the specified time, distributes the target images and configures all testbed services.
4.2.2 Interaction During Test Execution. Progress is monitored on the web interface, where information on configuration and status is available. If the serial forwarding service is used, it is possible to connect to an individual observer for the duration of the test using a TCP connection, e.g. by using netcat. Likewise, an interactive debug session can be opened from the IDE on the user's computer to a GDB debug server running on the observer.
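For example, the forwarded serial port can also be used programmatically; a minimal sketch in which host name and port are placeholders (the actual values are assigned per test):

```python
import socket

# Interact with a target's forwarded serial port over TCP.
# Host and port below are placeholders, not real testbed endpoints.
with socket.create_connection(("observer-007.flocklab.example", 50007)) as s:
    s.sendall(b"hello\n")          # write to the target's UART
    print(s.recv(4096).decode())   # read console output from the target
```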
4.2.3 Analyzing Test Results. After the test completes, the server fetches all results from the observers and combines them into a single test result archive file that can be used for custom postprocessing or can be visualized using the flocklab-tools (an example is depicted in Fig. 7).
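A sketch of such custom post-processing, assuming the unpacked archive contains per-service CSV files (the file and column names used here are assumptions, not the documented format):

```python
import pandas as pd

# Assumed file names inside the unpacked result archive.
gpio = pd.read_csv("results/gpiotracing.csv")
power = pd.read_csv("results/powerprofiling.csv")

# e.g. count GPIO edges per observer and average the drawn current.
print(gpio.groupby("observer_id").size())
print(power["current"].mean())
```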
---
<testConf xmlns="http://www.flocklab.ethz.ch">
  <generalConf>
    <name>FlockLab XML template</name>
    <schedule><duration>60</duration></schedule>
  </generalConf>
  <targetConf>
    <obsIds>24 6 7 9</obsIds>
    <voltage>3.3</voltage>
    <embeddedImageId>Image_1</embeddedImageId>
  </targetConf>
  <serialConf>
    <obsIds>24 6 7 9</obsIds>
    <baudrate>115200</baudrate>
  </serialConf>
  <powerProfilingConf>
    <obsIds>2 4</obsIds>
    <samplingRate>1000</samplingRate>
  </powerProfilingConf>
</testConf>

Listing 1: FlockLab 2 test configuration example.

---
### 4.3 Debugging and Analysis of the Protocol
4.3.1 Embedded Debugging at Testbed Scale. In this example, 8 nodes perform a Gloria network flood. To validate the correctness of the protocol implementation, we use the debugger functionality, which allows extracting internal variables. Concretely, we want to verify the correct calculation of the time of the next transmission (TxMarker in Fig. 5). In Figure 6, for each node, radio activity is shown in the first row (orange bars) and the radio interrupts in the second row (black bars). For node 4, the power trace (black line) is shown as well. A breakpoint has been set on the first TxDone interrupt of node 9, which can be inspected using a remote debug session to the target (see Sec. 3.1.2). Since the breakpoint halts this specific microprocessor, the captured traces show no more GPIO events after the node reached the breakpoint. The variables inspected at the breakpoint show, e.g., slot_index = 2 and message_size = 30, as expected. The variables reconstructed_marker and current_tx_marker correspond to RefTime and TxMarker in Fig. 5, respectively. The extracted time difference of 10.879 ms matches the expected value of FloodOverhead + 2 · SlotTime. This confirms the correct calculation of TxMarker in the synchronous Gloria flood.
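As a sanity check with the numbers from Sec. 4.1, and assuming FloodOverhead corresponds to the InitOverhead of 1.783 ms (which matches numerically):

$$
1.783\,\mathrm{ms} + 2 \cdot 4.548\,\mathrm{ms} = 10.879\,\mathrm{ms}
$$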
4.3.2 Timing Validation using Logic Tracing. In synchronous protocols, transmissions and corresponding interrupts need to be correctly aligned. This is traditionally done by instrumenting the code and tracing GPIO pins with the logic tracing service (see Sec. 3.1.5). In Fig. 6, radio activity with two interrupts (SyncWordValid and RxDone) corresponds to a message reception, and radio activity with a single interrupt (TxDone) corresponds to a transmission. Re-transmissions are scheduled based on the timing of received messages: the SyncWordValid timestamp is used to calculate individual start times on each node. For this to work, the exact time offset between the start of the transmission and the SyncWordValid interrupt needs to be calibrated. In the example in Fig. 7, this offset is not set correctly and consequently the synchronous transmissions are not aligned. Using logic tracing, this malfunction can be detected (green lower bars) and the correct value can be determined.
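The following sketch illustrates the reconstruction idea; the offset constant and function name are illustrative and not taken from the authors' implementation:

```python
TX_TO_SYNCWORD = 0.000423   # placeholder: calibrated offset between TX start
                            # and the SyncWordValid interrupt for this PHY config
SLOT_TIME = 0.004548        # SlotTime for FSK 250 kbit/s (Sec. 4.1)

def reconstruct_slot_start(t_syncword, slot_index):
    """Estimate the flood's first slot start from a reception in slot_index."""
    t_this_slot = t_syncword - TX_TO_SYNCWORD
    return t_this_slot - slot_index * SLOT_TIME
```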

Figure 6: The embedded debug service allows to extract detailed internal debug information using breakpoints.

Figure 7: Gloria slot timing is skewed due to a wrong timing parameter. The logic tracing service allows detecting and correcting erroneous behavior at interrupt-level granularity.
4.3.3 Optimizing for Low Power Consumption. To maximize the lifetime of a battery-powered cyber-physical system, careful optimization and orchestration of the low-power modes is required. In this example, we validate the low-power behavior of the Gloria flood implementation by using the power tracing service (Sec. 3.1.3) together with the logic tracing capabilities (Sec. 3.1.5). In a simple implementation, communication is executed in a fixed-length active window (see Fig. 8). In the optimized case, a node transitions to low-power sleep mode immediately after completing a required action, e.g. sending or receiving data. The difference between fixed-length and dynamic active window sizes can be seen in Fig. 8, together with the respective radio interrupts.
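Power traces of this kind lend themselves to simple numerical post-processing; for instance, a sketch (with illustrative array names) that estimates the energy of an activity window from sampled current at a fixed supply voltage:

```python
import numpy as np

def window_energy(t, current, voltage, t0, t1):
    """Integrate current over [t0, t1) and convert to energy (joules)."""
    mask = (t >= t0) & (t < t1)
    charge = np.trapz(current[mask], t[mask])  # coulombs
    return charge * voltage                    # assumes a constant supply

# e.g. window_energy(t, i_samples, 3.3, 0.010, 0.025)
```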
## 5 CONCLUSIONS
In this paper, we present the second-generation testbed FlockLab 2, which combines industry-standard debug and trace support with accurate, high-dynamic-range power and timing measurements. Relevant design aspects, including the distributed testbed-wide time synchronization and the in-situ debug and logging capabilities, have been demonstrated with real-world applications. These aspects make the testbed a valuable tool for developing and benchmarking distributed IoT systems.

Figure 8: High-dynamic range power tracing is used to validate and optimize low-power behavior.
## REFERENCES
[1] Cedric Adjih, Emmanuel Baccelli, Eric Fleury, Gaetan Harter, Nathalie Mitton, Thomas Noel, Roger Pissard-Gibollet, Frederic Saint-Marcel, Guillaume Schreiner, et al. 2015. FIT IoT-LAB: A large scale open experimental IoT testbed. In 2015 IEEE 2nd World Forum on Internet of Things (WF-IoT). IEEE, 459-464.
[2] Paramasiven Appavoo, Ebram Kamal William, Mun Choon Chan, and Mobashir Mohammad. 2018. Indriya2: A heterogeneous wireless sensor network (WSN) testbed. In Int'l Conf. Testbeds and Research Infrastructures. Springer, 3-19.
[3] Jan Beutel, Roman Trüb, Reto Da Forno, Markus Wegmann, Tonio Gsell, Romain Jacob, Michael Keller, Felix Sutton, and Lothar Thiele. 2019. The Dual Processor Platform Architecture. In Proc. 18th Int'l Conf. Information Processing in Sensor Networks (IPSN '19). ACM, New York, NY, 335-336.
[4] Federico Ferrari, Marco Zimmerling, Lothar Thiele, and Olga Saukh. 2011. Efficient Network Flooding and Time Synchronization with Glossy. In Proc. 10th ACM/IEEE Int'l Conf. Information Processing in Sensor Networks. IEEE, 73-84.
[5] Kai Geissdoerfer, Mikolaj Chwalisz, and Marco Zimmerling. 2019. Shepherd: a portable testbed for the batteryless IoT. In Proc. 17th Conf. Embedded Networked Sensor Systems. 83-95.
[6] Vlado Handziski, Andreas Köpke, Andreas Willig, and Adam Wolisz. 2006. Twist: a scalable and reconfigurable testbed for wireless indoor experiments with sensor networks. In Proc. 2nd Int'l Workshop on Multi-hop Ad-hoc Networks. 63-70.
[7] Antonio Libri, Andrea Bartolini, Michele Magno, and Luca Benini. 2016. Evaluation of Synchronization Protocols for Fine-grain HPC Sensor Data Timestamping and Collection. In 2016 International Conference on High Performance Computing & Simulation (HPCS). IEEE, 818-825.
[8] Roman Lim, Federico Ferrari, Marco Zimmerling, Christoph Walser, Philipp Sommer, and Jan Beutel. 2013. FlockLab: A Testbed for Distributed, Synchronized Tracing and Profiling of Wireless Embedded Systems. In Proc. 12th Int'l Conf. Information Proc. in Sensor Networks (IPSN '13). ACM, New York, NY, 153-166.
[9] Roman Lim, Balz Maag, Bernhard Dissler, Jan Beutel, and Lothar Thiele. 2015. A testbed for fine-grained tracing of time sensitive behavior in wireless sensor networks. In Proc. IEEE 40th Local Computer Networks Conference. 619-626.
[10] Roman Lim, Balz Maag, and Lothar Thiele. 2016. Time-of-Flight Aware Time Synchronization for Wireless Embedded Systems. In EWSN. 149-158.
[11] Roman Lim, Christoph Walser, Federico Ferrari, Marco Zimmerling, and Jan Beutel. 2012. Distributed and synchronized measurements with FlockLab. In 10th ACM Conf. on Embedded Networked Sensor Systems (SenSys '12). 373-374.
[12] ARM Limited. 2013. CoreSight Technical Introduction. White Paper.
[13] Lukas Sigrist, Andres Gomez, Roman Lim, Stefan Lippuner, Matthias Leubin, and Lothar Thiele. 2017. Measurement and Validation of Energy Harvesting IoT Devices. In Proc. 2017 Design, Automation & Test in Europe Conf. & Exhibition (DATE 2017). Lausanne, Switzerland.
[14] Markus Schuß, Carlo Alberto Boano, Manuel Weber, and Kay Römer. 2017. A Competition to Push the Dependability of Low-Power Wireless Protocols to the Edge. In Proceedings of the 14th International Conference on Embedded Wireless Systems and Networks (EWSN) (Uppsala, Sweden). Junction Publishing, 54-65.
[15] Neal Stollon. 2010. On-Chip Instrumentation: Design and Debug for Systems on Chip. Springer Science & Business Media.
[16] Roman Trüb, Reto Da Forno, Tonio Gsell, Jan Beutel, and Lothar Thiele. 2019. A Testbed for Long-Range LoRa Communication. In Proc. 18th Int'l Conf. Information Processing in Sensor Networks (IPSN '19). ACM, New York, NY, 342-343.
[17] Geoffrey Werner-Allen, Patrick Swieskowski, and Matt Welsh. 2005. Motelab: A wireless sensor network testbed. In Fourth Int'l Symp. Information Processing in Sensor Networks (IPSN), 2005. IEEE, 483-488.
papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/PXwOPetJ2bF/Initial_manuscript_md/Initial_manuscript.md
# Towards an Automated Monitoring of RF Activity in Low-Power Wireless Testbeds
Double-blind submission
## ABSTRACT
To rigorously benchmark the performance of low-power wireless protocols, it is essential to monitor and quantify the RF activity in a given testing environment. Indeed, unwanted radio interference in the surroundings of wireless nodes may worsen their communication performance. Similarly, an inconsistent RF noise across multiple test runs may prevent the ability to fairly compare their results. Unfortunately, to date, this aspect is largely neglected by the community, especially due to the lack of monitoring tools enabling a quantitative assessment of RF activity in large testing facilities. In this paper, we take the first steps towards the creation of a low-cost tool automating the distributed monitoring of RF usage in a low-power wireless testbed. Specifically, we first instrument latest-generation Raspberry Pi devices to sense any ongoing activity on the RF channel, enabling a functionality that is typically not available on off-the-shelf Wi-Fi hardware. We then show that one can synchronize the RF measurements of multiple Raspberry Pis connected to a common Ethernet backbone with an average error below 200 µs. We further devise exemplary strategies to quantify the difference in RF activity across test runs and enable the real-time detection of deviations in the current RF channel usage compared to what was measured in earlier runs. We finally showcase the ability to compare the RF activity during several test runs and detect when additional interference was present in the environment, as well as when diverse interference patterns were artificially generated.
|
| 8 |
+
|
| 9 |
+
Data availability statement. The firmware used for the data collection and the scripts developed to process the raw data and generate the plots will be made publicly available on an institutional repository (link omitted due to double-blind review). The authors commit to keeping the data publicly available for at least two years.
|
| 10 |
+
|
| 11 |
+
## 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
The research community traditionally validates low-power wireless solutions experimentally on real-world testbeds [18]. A large variety of testbeds exist: from small-scale installations used internally by various research groups [12, 25], to large-scale publicly-available facilities such as FIT IoT-LAB [1], Indriya [11], and FlockLab [23].
|
| 14 |
+
|
| 15 |
+
An aspect that is common to most of these low-power wireless testbeds is that they are located in office or university buildings, i.e., the nodes are deployed in open spaces and dynamic environments. As a consequence, these testbeds are often subject to a level of uncontrollable RF activity, e.g., generated by laptops, smartphones, and other devices used by people operating in close proximity.
|
| 16 |
+
|
| 17 |
+
This is especially relevant as low-power wireless nodes are highly susceptible to radio interference [6]. For example, the transmissions of IEEE 802.15.4 and Bluetooth Low Energy devices are highly vulnerable to transmissions of surrounding Wi-Fi devices, which share the same frequencies (the 2.4 GHz ISM band), use a wider channel bandwidth (20 to 40 MHz), and operate at a transmission power that is higher by several orders of magnitude (up to 20 dBm).
|
| 18 |
+
|
| 19 |
+
As a result, when evaluating the performance of low-power wireless protocols and comparing it to the state of the art, it is common to make use of testbeds at night or during weekends [6, 9, 14, 19, 20, 34, 37], i.e., when buildings are at their quietest, so as to minimize the impact of external interference on the experiments.
|
| 20 |
+
|
| 21 |
+
However, some occasional RF activity may still be present in the testbed area, e.g., due to the idle activities of Wi-Fi access points installed nearby, or due to night owls working until late. Such RF activity may be sufficient to bias the experiments and lead to wrong conclusions, for example when comparing the reliability of state-of-the-art protocols, which is nowadays often close to 100% [7].
|
| 22 |
+
|
| 23 |
+
Generalizing, whenever benchmarking protocol performance, it is important to account for the inherent variability of the experimental conditions and to detect any deviations in the RF environment. The same holds true when carrying out experiments involving the generation of artificial radio interference to stress-test protocols (e.g., using tools similar to JamLab-NG [32]): the synthetic interference patterns should remain consistent throughout different runs and no uncontrolled RF noise should be present in the surroundings. Only in this way can one ensure reproducible and comparable results.
|
| 24 |
+
|
| 25 |
+
Ideally, such RF activity monitoring is fully automated and integrated into the experimentation chain, i.e., offered by low-power wireless testbed facilities, as highlighted by Boano et al. in an open manifesto to the community [5]. This way, the testing infrastructure can autonomously invalidate and rerun measurements, for example, when the RF activity deviates largely from that of previous runs.
|
| 26 |
+
|
| 27 |
+
Challenges. However, in order to integrate such functionality into existing testing facilities, several challenges need to be tackled.

Accurate monitoring of RF activity on a large scale. First, one needs the ability to observe the RF spectrum across an entire testbed installation. One approach consists of using low-power wireless nodes (e.g., TelosB nodes and nRF52840 dongles) spread across the testbed to scan the received signal strength in their surroundings. However, besides introducing extra costs, this approach is not optimal due to the limited channel bandwidth of these devices, which makes them unsuitable to accurately detect Wi-Fi activity. Using Wi-Fi devices to fulfil the same task is not feasible, as Wi-Fi hardware does not allow developers to measure RF activity. Therefore, one currently has to resort to spectrum analyzers and software-defined radios [3, 21], which is very expensive and does not scale when testbed installations span several floors or large buildings.
|
| 28 |
+
|
| 29 |
+
Synchronization of distributed RF measurements. A second challenge is to establish a common timebase in order to correlate and fuse the RF measurements of several nodes. Depending on the employed hardware, this may be complex: for example, when connecting RF monitoring devices via USB, the jitter of the FTDI interface makes it hard to accurately timestamp their measurements [30].

Quantitative assessment of RF activity during a run. The ability to monitor the RF spectrum, alone, is insufficient. Without a metric quantifying the RF activity during a test run, only a visual inspection of the RF channel is possible, which is subjective and only allows a qualitative assessment [22, 33]. Instead, to rigorously benchmark protocols and claim reproducibility and comparability, a quantitative assessment is necessary. To this end, one needs to identify which data a device should collect to objectively and unambiguously quantify the amount of RF activity in its surroundings. Moreover, when using this data to derive a metric capturing the amount of RF activity, one should be able to filter the transmissions of the low-power wireless nodes that are part of the testing facility (i.e., the devices running the solution being tested). Without doing so, the computed metric would not only capture the amount of RF noise, but also the "spectrum friendliness" of the tested solution. Ideally, one would have the ability to distinguish between the two.

Comparing RF activity across test runs. Finally, as the ultimate goal is to determine whether different test runs have been executed under similar settings, one needs to weigh the currently-measured RF activity against that of previous runs. Specifically, it should be possible to juxtapose the metrics computed across different runs and return whether there were major deviations in RF activity (e.g., additional RF noise or different interference patterns). One should not only account for temporal deviations, but also for spatial discrepancies, as wireless nodes are typically spread across a large area. This comparison process should ideally require a limited amount of time and not be resource-intensive. This way, right at the end of an experiment, one can deem whether a rerun is necessary.
|
| 30 |
+
|
| 31 |
+
Contributions. In this paper, we tackle these challenges and take the first steps towards the creation of a low-cost tool automating the monitoring of RF activity in low-power wireless testbeds.
|
| 32 |
+
|
| 33 |
+
We first show that it is possible to use the Wi-Fi module embedded on off-the-shelf Raspberry Pi 3B+/4 hardware to monitor RF usage at sufficient granularity to recognize common interference sources in the 2.4 GHz band. This is important, as these devices are often already used as observer nodes in low-power wireless testbeds to orchestrate activities, measure performance, and generate RF noise [26, 30, 32]. We achieve this by using Nexmon, a C-based firmware patching framework for Cypress Wi-Fi chips [28, 29].
|
| 34 |
+
|
| 35 |
+
We then show that one can synchronize the RF measurements of multiple Raspberry Pi 4B (RPi4) connected to a common Ethernet backbone (i.e., in the same way as observer nodes are connected in a testbed facility), with an average error below 200 µs. This enables us to correlate distributed RF measurements and devise exemplary strategies to quantify the difference in RF activity across test runs.
|
| 36 |
+
|
| 37 |
+
Specifically, we use the distribution of the observed power over time at the various nodes and illustrate different techniques enabling the real-time detection of deviations in the RF channel usage compared to what was measured in earlier runs. We further show how this approach allows filtering the activity of a testbed's own nodes, and showcase the ability to detect when additional radio interference was present in the environment, as well as when diverse interference patterns were artificially generated by JamLab-NG.
|
| 38 |
+
|
| 39 |
+
After describing related work in § 2, this paper proceeds as follows:
|
| 40 |
+
|
| 41 |
+
- We instrument RPi4 devices to monitor nearby RF activity (§ 3).
|
| 42 |
+
|
| 43 |
+
- We illustrate how we can observe the same RF activity across multiple RPi4 with low synchronization errors (§ 4).
|
| 44 |
+
|
| 45 |
+
- We describe how to quantify the difference and detect deviations in the measured RF activity across several test runs (§ 5).
|
| 46 |
+
|
| 47 |
+
- We conclude the paper in § 6 with a discussion of future work.
|
| 48 |
+
|
| 49 |
+
## 2 RELATED WORK
|
| 50 |
+
|
| 51 |
+
To account for variations in RF activity and avoid inconsistencies across test runs, researchers often monitor the RF spectrum and determine if its usage is steady. To this end, they use low-cost spectrum analyzers such as the Wi-Spy ${}^{1}$ [21, 22, 33], or low-power wireless nodes to sample the received signal strength [13, 16]. However, this process mostly consists of a visual inspection of the RF channel usage, which is subjective and only allows a qualitative assessment. Instead, we aim to provide a quantitative assessment of RF activity.
|
| 52 |
+
|
| 53 |
+
A few low-power wireless testbed facilities (e.g., FlockLab [36], TWIST [10, 21], and w-iLab.2 [35]) embed software-defined radios, Wi-Spy devices, or high-end spectrum analyzers to allow their users to monitor RF activity. However, they have just one monitoring node across the testbed [3], operate only at sub-GHz frequencies [36], or rely on old Wi-Fi hardware such as the ath9k chips [35]. Moreover, they do not perform any synchronization of the distributed measurements and do not endeavour to compute a metric quantifying the RF activity that would enable a better reproducibility and comparability of results, which is the ultimate goal of our work.
|
| 54 |
+
|
| 55 |
+
A few researchers have analyzed the RF activity on a channel using off-the-shelf hardware, albeit for different purposes. Hermans et al. [17] aim to identify the source of interference in IEEE 802.15.4 networks. Similarly, Grimaldi et al. [15] aim to classify external interference in real time via supervised learning. Noda et al. [27] quantify the quality of the channel to build interference-aware wireless sensor networks. Brown et al. [8] measure the probability distribution function of idle periods to estimate the packet reception rate of an IEEE 802.15.4 network before deployment. All these works monitor the RF channel with the goal of mitigating radio interference. In contrast, in this work, we aim to perform distributed RF monitoring to account for the inherent variability of the conditions in the testing environment and inform the user accordingly.
|
| 56 |
+
|
| 57 |
+
## 3 MONITORING SURROUNDING RF ACTIVITY USING OFF-THE-SHELF WI-FI HARDWARE
|
| 58 |
+
|
| 59 |
+
Our aim is to design a low-cost solution to monitor the RF activity within a low-power wireless testbed. To this end, as discussed in § 2, the use of specialized hardware such as spectrum analyzers and software-defined radios (SDR) is not an option due to their high costs. Indeed, even the cheapest SDR costs over 100 €, and requires a dedicated powerful computer to orchestrate its operations. For this reason, instrumenting a testbed with several of these high-end devices would be both expensive and labor-intensive.
|
| 60 |
+
|
| 61 |
+
Using a fraction of the low-power wireless nodes embedded in the testbed to directly observe the RF spectrum is also not an option. Indeed, tools such as TI's SmartRF Studio ${}^{2}$, Nordic Semiconductor's nRF Connect RSSI viewer ${}^{3}$, and Contiki's RSSI scanner ${}^{4}$ have only a limited channel bandwidth: one would either require several nodes to monitor a single Wi-Fi channel, or let a single node continuously shift frequency at the cost of a lower sampling rate. Moreover, these low-power radios would need to be connected to one of the observer nodes in the testbed for further data processing or storage.
|
| 62 |
+
|
| 63 |
+
---
|
| 64 |
+
|
| 65 |
+
${}^{1}$ https://www.metageek.com/products/hardware/
|
| 66 |
+
|
| 67 |
+
${}^{2}$ http://www.ti.com/tool/SMARTRFTM-STUDIO
|
| 68 |
+
|
| 69 |
+
${}^{3}$ https://github.com/NordicSemiconductor/pc-nrfconnect-rssi
|
| 70 |
+
|
| 71 |
+
${}^{4}$ https://github.com/contiki-os/contiki/tree/master/examples/rssi-scanner
|
| 72 |
+
|
| 73 |
+
---
|
| 74 |
+
|
| 75 |
+

|
| 76 |
+
|
| 77 |
+
Figure 1: Sketch of our RF monitoring functionality on RPi4.
|
| 78 |
+
|
| 79 |
+
The ideal case would be to reuse a testbed's observer nodes for this purpose. For example, many low-power wireless testbeds make use of devices such as the Raspberry Pi as observer nodes to orchestrate activities, measure performance, and generate RF noise [24, 26, 30, 32]. As these devices embed radio modules operating in the 2.4 GHz band, they could also be used to monitor the surrounding RF activity. Unfortunately, although the Raspberry Pi 3B and later revisions embed a Wi-Fi module, they do not allow measuring the strength of the RF signal at an arbitrary point in time - a problem that is common to most off-the-shelf Wi-Fi hardware ${}^{5}$.
|
| 80 |
+
|
| 81 |
+
In this work, we tackle this limitation and enable off-the-shelf observer nodes to monitor the surrounding RF activity. While the Cypress (formerly Broadcom) BCM43455C0 Wi-Fi module found on recent Raspberry Pi devices does not provide a way to instantaneously measure the strength of the RF signal either, one can use reverse engineering and flash patching tools to craft an RF power estimator on these low-cost Wi-Fi modules. To this end, we use Nexmon, a C-based firmware patching framework that has been used in the past to, i.a., enable monitor mode on Cypress Wi-Fi chips [29].
|
| 82 |
+
|
| 83 |
+
Thanks to Nexmon, one can already record each individual Wi-Fi transmission (even those from networks with which a device is not associated) using tools such as Wireshark or tcpdump, and derive a list of sniffed packets in pcap format. However, besides Wi-Fi activity, no other source of RF noise can be currently monitored.
|
| 84 |
+
|
| 85 |
+
Therefore, we extend Nexmon as follows. First, we make use of Ghidra ${}^{6}$, a software reverse engineering suite, to glean insights into the inner workings of the BCM43455C0 firmware and spot leftover functionality that is usually not accessible to end-users (e.g., hidden functions that are only partially implemented, as well as remnants of calibration and compliance testing features). We identify one function (wlc_phy_rx_iq_est_acphy) that fits our purposes exactly: it instructs the RF front-end to compute a power estimate over a given number of samples ($2^{10}$ in our implementation). While the power estimate returned by this function is sufficient to monitor RF activity (we use this value in the remainder of this paper), a manual calibration is needed to express the power estimate in dBm.
|
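As an illustration of how such a calibration could look, the following sketch fits a log-linear mapping from raw IQ power estimates to dBm, assuming reference tones of known power (e.g., from a signal generator) are available; both the log-linear model and the helper names are our assumptions, not part of the paper.

```python
import numpy as np

def fit_dbm_calibration(raw_estimates, reference_dbm):
    # Assume P_dBm ~ a * 10*log10(raw) + b and fit a, b by least squares
    # against tones of known power (hypothetical calibration setup).
    log_raw = 10.0 * np.log10(np.asarray(raw_estimates, dtype=float))
    a, b = np.polyfit(log_raw, np.asarray(reference_dbm, dtype=float), 1)
    return a, b

def raw_to_dbm(raw, a, b):
    # Map a raw power estimate to a calibrated dBm value.
    return a * 10.0 * np.log10(raw) + b
```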
| 86 |
+
|
| 87 |
+
Building upon this function, we create a userland application (RF Measurement App) and a scheduler running within the BCM43455C0 radio firmware (RF Measurement Scheduler) that interact in order to collect a sequence of RF power estimates, as shown in Fig. 1. Specifically, we use Nexmon's nexutil tool to trigger commands for the RF Measurement Scheduler using input/output control (IOCTL) system calls. As the overhead of these system calls is significant, polling the radio for RF power measurements would result in a limited and non-deterministic sampling rate. Therefore, similar to the approach used in JamLab-NG [32], we use the IOCTL interface only to instruct the radio to begin periodic measurements on a specific channel with a given bandwidth.
|
| 88 |
+
|
| 89 |
+

|
| 90 |
+
|
| 91 |
+
Figure 2: Power estimates returned by an RPi4 in the presence of different devices generating RF noise in the 2.4 GHz band.
|
| 92 |
+
|
| 93 |
+
Internally, the RF Measurement Scheduler uses a timer (hw_timer) to periodically call the wlc_phy_rx_iq_est_acphy function. We timestamp the power estimates returned by this function with a µs-precision timer (tsf_timer), and generate a UDP packet that is injected into the wlan0 interface. Each UDP packet contains a single timestamped power estimate and is sent to port 5555, such that it can be captured by the RF Measurement App accordingly.
|
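A minimal sketch of the receiving side of the RF Measurement App is shown below. The port number (5555) comes from the paper; the packet layout (a little-endian 32-bit tsf timestamp followed by a signed 32-bit power estimate) is an assumption made for illustration.

```python
import socket
import struct
import time

RF_PORT = 5555  # port used by the firmware-side scheduler (from the paper)

def receive_power_estimates():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", RF_PORT))
    while True:
        payload, _ = sock.recvfrom(64)
        # Assumed layout: 32-bit tsf timestamp (us) + 32-bit power estimate.
        tsf_us, power = struct.unpack("<Ii", payload[:8])
        # Additionally tag the sample with the NTP-disciplined unix time,
        # as described in Section 4.
        yield time.time(), tsf_us, power
```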
| 94 |
+
|
| 95 |
+
Following this procedure, we can instrument the RPi4 to estimate the surrounding RF activity at 7.8 kHz ${}^{7}$. Fig. 2 shows exemplary power estimates returned by the RF Measurement App in the presence of Wi-Fi and BLE traffic, as well as a microwave oven operating nearby. The Wi-Fi traffic is generated by a second RPi4 placed 2 m away, whereas the BLE traffic is generated by an nRF52840DK node placed 30 cm away. The microwave oven was used to heat up water and was located 1.5 m away from the RPi4 sniffing nearby RF activity. All measurements use a channel bandwidth of 20 MHz.
|
| 96 |
+
|
| 97 |
+
We also place a third RPi4 using Nexmon's monitor mode and tcpdump to capture the duration and strength of the generated Wi-Fi traffic in a pcap file. This third RPi4 is also placed 2 m away from the one generating Wi-Fi traffic, and its measurements are synchronized with those of the RPi4 sniffing RF activity as described in § 4. Fig. 2a shows that the power estimate returned by the sniffing RPi4 (blue line) correctly captures the over-the-air duration of the Wi-Fi packet extracted from tcpdump's pcap file (orange line).
|
| 98 |
+
|
| 99 |
+
## 4 SYNCHRONIZATION OF DISTRIBUTED RF MEASUREMENTS
|
| 100 |
+
|
| 101 |
+
To measure the RF activity across large-scale testbed installations (i.e., to have a good spatial coverage) and to monitor the activities in the entire 2.4 GHz ISM band (i.e., to monitor several Wi-Fi channels at once), several RPi4 devices should perform RF measurements at the same time. Therefore, it is important to synchronize their activities and establish a common timebase.
|
| 102 |
+
|
| 103 |
+
---
|
| 104 |
+
|
| 105 |
+
${}^{5}$ A notable exception is the ath9k series of Wi-Fi cards from Qualcomm. These chips allow fast polling of a binary clear channel assessment (CCA). Although the CCA threshold can be manually configured, the device can only return a true/false assessment, which is insufficient for an accurate analysis of nearby RF activity [2].
|
| 106 |
+
|
| 107 |
+
${}^{6}$ https://ghidra-sre.org/
|
| 108 |
+
|
| 109 |
+
${}^{7}$ Note that one can achieve a higher rate by decreasing the number of samples and by sending several measurements in a single UDP packet. In this work, we focus on a prototypic implementation and leave these optimizations as future work.
|
| 110 |
+
|
| 111 |
+
---
|
| 112 |
+
|
| 113 |
+

|
| 114 |
+
|
| 115 |
+
Figure 3: Power estimates returned by five RPi4, spread over 16 m² in a room, observing the same source of Wi-Fi traffic.
|
| 116 |
+
|
| 117 |
+

|
| 118 |
+
|
| 119 |
+
Figure 4: Synchronization error across five RPi4 monitoring the same RF activity. The boxes and whiskers in Fig. 4b show the median (green center line), the first and third quartile (box body), as well as 1.5× the interquartile range (whiskers).
|
| 120 |
+
|
| 121 |
+
To this end, we have implemented timestamping twice throughout the measurement chain shown in Fig. 1. We first timestamp the collected power estimates in the RF Measurement App. To do this, we employ the RPi4's unix timestamp: as this is the operating system's time, it can be kept in sync across different RPi4 in a testbed using the network time protocol (NTP), as shown in [32].
|
| 122 |
+
|
| 123 |
+
However, as each power estimate is independently passed from the RF front-end to the RF measurement app through the operating system and its network stack, one experiences non-deterministic delays affecting the accuracy of individual samples.
|
| 124 |
+
|
| 125 |
+
Therefore, we add a second timestamp as soon as the power estimates have been sampled by the RF front-end, i.e., right after the wlc_phy_rx_iq_est_acphy function has returned, using the time synchronization function timer (tsf_timer ${}^{8}$). This second timestamp provides a more fine-grained resolution that allows accurately accounting for ephemeral RF events (e.g., a sequence of short BLE beacons). One can hence use this timestamp to correct the unix timestamps added by the RF Measurement App.
|
| 126 |
+
|
| 127 |
+
Following this procedure, we instrument five RPi4 located in the same room and interconnected by an Ethernet backbone to sense the ongoing RF activity on the same Wi-Fi channel. We also place another RPi4 running JamLab-NG in the same room to generate periodic Wi-Fi packets. Fig. 3 shows the power estimates collected from each RPi4 using the unix timestamp. As expected, due to the different locations of the nodes and their distance from the RPi4 generating Wi-Fi traffic, the absolute value of the estimated power differs. However, each spike, which corresponds to the on-air time of a Wi-Fi packet, is well-synced across the five RPi4.
|
| 128 |
+
|
| 129 |
+

|
| 130 |
+
|
| 131 |
+
Figure 5: Probability density function of power estimates computed with $\theta = 300$ s on five RPi4 observing the same Wi-Fi activity. The portion in red marks noise floor samples.
|
| 132 |
+
|
| 133 |
+
To better quantify the synchronization error across the different RPi4, we consider more than 2000 Wi-Fi packets and compare the timestamp of each rising edge in the estimated power (i.e., the beginning of each spike in Fig. 3). Fig. 4 shows the maximum deviation across all five RPi4 as well as the error relative to a specific device (pi5). Regardless of which RPi4 is used as reference, the median synchronization error, including all uncertainties in our measurement chain, does not exceed 22 µs, with 95% of the samples never exceeding an error of 209 µs.
|
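The following sketch reproduces this analysis, assuming the synchronized traces are available as numpy arrays; the threshold separating the noise floor from Wi-Fi spikes is an assumed value.

```python
import numpy as np

def rising_edges(timestamps, power, threshold=20.0):
    # Timestamps at which the power estimate crosses the threshold
    # upwards, i.e., the beginning of each Wi-Fi spike. The threshold is
    # an assumption; in practice it would be derived from the noise floor.
    above = power > threshold
    idx = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return timestamps[idx]

def sync_error(edges_ref, edges_other):
    # Per-packet timestamp deviation between two nodes observing the
    # same spikes (assumes both detected the same packets, in order).
    n = min(len(edges_ref), len(edges_other))
    return np.abs(edges_ref[:n] - edges_other[:n])
```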
| 134 |
+
|
| 135 |
+
As shown in Fig. 3 and 4, the synchronization accuracy of the unix timestamp is quite satisfactory. One can further increase the accuracy by correcting these timestamps using those obtained with the tsf_timer. Moreover, as the unix timestamps are already drift-corrected using NTP, one can use linear regression to calculate a correction factor for the timestamps obtained with the tsf_timer. The latter exhibits a drift that strongly depends on the temperature of the RPi4, which varies as a function of its computational load ${}^{9}$.
|
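A sketch of such a regression-based correction, assuming pairs of unix and tsf timestamps collected for the same samples, could look as follows:

```python
import numpy as np

def tsf_to_unix(unix_ts, tsf_ts):
    # Fit tsf -> unix with a first-order polynomial; the slope absorbs
    # the (temperature-dependent) drift of the tsf_timer and the offset
    # aligns the two timebases.
    slope, offset = np.polyfit(np.asarray(tsf_ts, dtype=float),
                               np.asarray(unix_ts, dtype=float), 1)
    return lambda tsf: slope * np.asarray(tsf, dtype=float) + offset
```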
| 136 |
+
|
| 137 |
+
## 5 QUANTIFYING THE DIFFERENCE IN RF ACTIVITY ACROSS TEST RUNS
|
| 138 |
+
|
| 139 |
+
With the ability to measure the RF activity using off-the-shelf RPi4 outlined in § 3 and § 4, one can manually inspect the measurements and look for outliers. As this results in subjective and qualitative assessments only (see § 2), in this section we derive a metric that allows determining in real-time whether the ongoing RF activities are similar to those recorded in a previous experiment. Note that our aim is not to derive the best metric to compare the RF conditions of different experiments, but rather to showcase the feasibility of such a real-time comparison as a seed for future work in the area.
|
| 140 |
+
|
| 141 |
+
Selecting a metric. In order to enable a real-time detection of deviations in the RF channel usage compared to what was measured in earlier runs, a first necessary step is the selection of a metric condensing the large number of power estimates sampled over time into a compact representation. While one could model or learn different RF interference patterns and compare those with the currently measured power estimates, we choose to make no assumptions about the characteristics of the RF activity and forgo any training. Instead, we derive a probability density function (PDF) of the observed power estimates over a time window $\theta$. This approach is more generic, allows accounting for the strength of the RF signal (e.g., to capture whether sources of interference have moved closer or further away over time), and is more practical than learning in the presence of several simultaneous sources of RF noise.
|
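A minimal sketch of this windowed PDF computation is shown below; the bin count and value range mirror the configuration reported later in this section (128 bins over [-92, 0]), while the function name and array-based interface are illustrative.

```python
import numpy as np

def windowed_pdfs(timestamps, power, theta=1.0, bins=128, rng=(-92, 0)):
    # Split the trace (numpy arrays) into windows of length theta
    # (seconds) and compute the empirical PDF of the power estimates
    # within each window.
    for start in np.arange(timestamps[0], timestamps[-1], theta):
        sel = (timestamps >= start) & (timestamps < start + theta)
        pdf, _ = np.histogram(power[sel], bins=bins, range=rng,
                              density=True)
        yield start, pdf
```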
| 142 |
+
|
| 143 |
+
Fig. 5 shows the PDF computed for five RPi4 deployed in the same configuration used earlier (i.e., in the presence of an additional RPi4 in the same room generating periodic Wi-Fi traffic) over a time window $\theta$ of approximately five minutes. The region marked in red indicates the absence of RF noise (i.e., the noise floor of each RPi4). Although the PDF shown in Fig. 5 captures the ongoing RF activity measured by each RPi4, it does not contain fine-grained information about the power estimates in the time domain. For this reason, RF interference occurring for a short amount of time gets averaged out and cannot be accounted for. To mitigate this problem, one can simply shorten the observation window $\theta$, such that ephemeral RF interference can also be accounted for.
|
| 144 |
+
|
| 145 |
+
---
|
| 146 |
+
|
| 147 |
+
${}^{8}$ The tsf_timer is used by Nexmon's monitor mode to perform timestamping at the MAC layer and to keep the Wi-Fi stations connected to the same access point (AP) synchronized. However, as an RPi4 is not connected to an AP when collecting power estimates, its measurements are not automatically synced to those of nearby nodes.
|
| 148 |
+
|
| 149 |
+
${}^{9}$ We observed clock differences in the range of 5 ms within 10 minutes (i.e., roughly 8 ppm).
|
| 150 |
+
|
| 151 |
+
---
|
| 152 |
+
|
| 153 |
+

|
| 154 |
+
|
| 155 |
+
Figure 6: Deviation between the power estimates obtained in three exemplary runs (see Fig. 7a) using three different methods.
|
| 156 |
+
|
| 157 |
+

|
| 158 |
+
|
| 159 |
+
Figure 7: Power estimates recorded by an RPi4 (a) and PRR of two TelosB nodes (b) in the presence of Wi-Fi interference.
|
| 160 |
+
|
| 161 |
+
Quantifying deviations in RF channel usage. In order to enable an automatic comparison of RF activity between runs, we investigate how to quantitatively compare two PDFs (such as the ones shown in Fig. 5). To this end, we reuse existing methods for comparing histograms included in the popular computer vision suite OpenCV ${}^{10}$. Among others, we make use of the correlation (Eq. 1), the Hellinger distance (Eq. 2), as well as the Kullback-Leibler divergence (Eq. 3), which are defined as follows:
|
| 162 |
+
|
| 163 |
+
$$
|
| 164 |
+
d\left( H_1, H_2 \right) = \frac{\sum_{I} \left( H_1(I) - \bar{H}_1 \right) \left( H_2(I) - \bar{H}_2 \right)}{\sqrt{\sum_{I} \left( H_1(I) - \bar{H}_1 \right)^{2} \sum_{I} \left( H_2(I) - \bar{H}_2 \right)^{2}}} \tag{1}
|
| 165 |
+
$$
|
| 166 |
+
|
| 167 |
+
$$
|
| 168 |
+
d\left( H_1, H_2 \right) = \sqrt{1 - \frac{1}{\sqrt{\bar{H}_1 \bar{H}_2 N^{2}}} \sum_{I} \sqrt{H_1(I) \cdot H_2(I)}} \tag{2}
|
| 169 |
+
$$
|
| 170 |
+
|
| 171 |
+
$$
|
| 172 |
+
d\left( H_1, H_2 \right) = \sum_{I} H_1(I) \log \left( \frac{H_1(I)}{H_2(I)} \right) \tag{3}
|
| 173 |
+
$$
|
| 174 |
+
|
| 175 |
+
where $N$ is the number of histogram bins, $H_k(I)$ denotes the bin of histogram $k$ corresponding to power estimate $I$, $\bar{H}_k = \frac{1}{N}\sum_{J} H_k(J)$, and $d(H_1, H_2)$ is a representation of the "distance" between histograms computed with the three aforementioned techniques.
|
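These three measures correspond to OpenCV's HISTCMP_CORREL, HISTCMP_BHATTACHARYYA (which OpenCV implements as the Hellinger distance), and HISTCMP_KL_DIV methods. A minimal sketch of the comparison is given below, with the histograms assumed to be 1-D arrays as computed above.

```python
import numpy as np
import cv2

def histogram_distances(h1, h2):
    # OpenCV expects float32 histograms shaped as column vectors.
    h1 = np.asarray(h1, dtype=np.float32).reshape(-1, 1)
    h2 = np.asarray(h2, dtype=np.float32).reshape(-1, 1)
    return {
        "correlation": cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL),       # Eq. 1
        "hellinger": cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA),  # Eq. 2
        "kl_divergence": cv2.compareHist(h1, h2, cv2.HISTCMP_KL_DIV),     # Eq. 3
    }
```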
| 176 |
+
|
| 177 |
+
We compare the deviation in RF activity across different runs using these three methods as follows. Using the same setup illustrated previously, we let five RPi4 record power estimates on Wi-Fi channel 7 (2442 MHz) over 5-minute runs. During each run, a nearby Raspberry Pi 3B using JamLab-NG generates a reproducible interference pattern on the same Wi-Fi channel during the first minute and the last three minutes (i.e., no interference is generated from 60 s to 120 s). We also collocate two TelosB nodes running Contiki in the same room. These two nodes periodically exchange 8 packets/s using nullmac, nullrdc, and Rime on IEEE 802.15.4 channel 18 (2440 MHz), logging the packet reception rate (PRR), i.e., the number of correctly received packets over time.
|
| 178 |
+
|
| 179 |
+
Fig. 7 shows the recorded power estimates of an exemplary RPi4, as well as the PRR of the TelosB nodes, for three different runs. Whilst the first and the second run are identical, in the third run we purposely introduce changes in the RF environment: a short burst of strong interference at 90 s lasting five seconds, and a switch to a different (lighter) interference pattern in the last two minutes of the run, i.e., after 180 s. Note that the spikes in Fig. 7a at 120 s and 240 s are due to a periodic self-calibration function of the Wi-Fi module and can be easily filtered out due to their high value (up to 5000).
|
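A one-line threshold filter suffices to remove these self-calibration spikes before the histograms are computed; the cut-off of 1000 below is an assumed value, chosen well above the RF activity discussed here and well below the self-calibration peaks.

```python
import numpy as np

def drop_calibration_spikes(power, cutoff=1000):
    # Self-calibration spikes reach values of up to 5000, far above any
    # genuine RF activity, so a simple threshold removes them.
    power = np.asarray(power)
    return power[power < cutoff]
```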
| 180 |
+
|
| 181 |
+
Fig. 6 shows the deviation between the three runs using the aforementioned histogram comparison methods. We employ $\theta = 1$ s and use histograms of 128 bins in the range [-92, 0]. While the correlation has the advantage of having an upper bound, obtained by comparing the first run (run1) with itself, the Hellinger distance and the Kullback-Leibler divergence start from 0 for identical histograms and increase proportionally to the difference between runs. From these results, we can conclude that the Hellinger distance is especially sensitive to small changes, whereas the correlation does not penalize smaller deviations in the RF activity. Conversely, the Kullback-Leibler divergence can capture and significantly penalize ephemeral changes such as the self-calibration spike at 240 s.
|
| 182 |
+
|
| 183 |
+
All three methods clearly identify the artificial changes in RF usage introduced in the third run and are suitable to detect significant deviations in RF activity. To ultimately assess whether one should invalidate a test run, one can use Fig. 7b to gauge the impact of the changes in RF activity on the PRR between the TelosB nodes. Note that all three methods are computed within a fraction of a second, while the computation of the histogram takes roughly 2 s on a single core of a modern processor. Hence, one can quickly detect deviations in the RF channel usage at the end of a test run, and autonomously decide whether to invalidate the results and re-run the same experiment, as envisioned in [5].
|
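As a toy example of such an autonomous decision, one could flag a run for repetition whenever the per-window distance to a reference run exceeds a tolerance; the tolerance used below is illustrative and would need to be calibrated against runs known to be equivalent.

```python
def needs_rerun(per_window_distances, tolerance=0.2):
    # Flag the run if any window deviates from the reference run by more
    # than the (assumed) tolerance, e.g., using the Hellinger distance.
    return any(d > tolerance for d in per_window_distances)
```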
| 184 |
+
|
| 185 |
+
Filtering the activity of a testbed's own nodes. So far, we have only focused on the detection of surrounding RF activity and ignored the impact of the transmissions of co-located low-power wireless nodes. However, when integrating such a solution into a low-power wireless testbed facility, one should be able to filter the transmissions of low-power wireless nodes that are part of the testing facility (i.e., the devices running the solution being tested).
|
| 186 |
+
|
| 187 |
+
---
|
| 188 |
+
|
| 189 |
+
${}^{10}$ https://docs.opencv.org/3.4/d6/dc7/group__imgproc__hist.html#ga994f53817d621e2e4228fc646342d386
|
| 190 |
+
|
| 191 |
+
---
|
| 192 |
+
|
| 193 |
+

|
| 194 |
+
|
| 195 |
+
Figure 8: Power estimates for five RPi4 spread out across a room, observing a single TelosB generating RF activity.
|
| 196 |
+
|
| 197 |
+
Traditionally, such low-power wireless nodes are directly attached to the observer nodes in the testbed, i.e., they are located in very close proximity. Our experiments have indeed shown that one can easily recognize and filter transmissions of low-power wireless nodes located in very close proximity to the RPi4 due to the high magnitude of the power estimate. Fig. 8 shows the power estimates of five RPi4 devices in the presence of a TelosB node sending packets periodically (a) and emitting a continuous modulated carrier tone (b) as in [4], using a transmission power of 0 dBm. The TelosB node is attached to pi5 and is located about 50 cm away from pi2, 1 m away from pi3, and 2 m away from pi1 and pi4. As Fig. 8 shows, the BCM43455C0 heavily overestimates the narrowband signal, returning values above 400 on pi5, whereas pi2 still reports a power estimate of about 50, which is easily distinguished from surrounding RF interference, as the latter typically yields a lower power estimate. At about 2 m, the signal is indistinguishable from the noise floor and cannot be detected by pi1 and pi4. Therefore, one can, in principle, be agnostic to the transmissions of the nodes attached to an RPi4 acting as observer node in a low-power wireless testbed.
|
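Based on this observation, a simple power threshold can mask the samples dominated by the device under test attached to the observer; the threshold below reflects the values reported above for the TelosB and is deployment-specific.

```python
import numpy as np

def mask_own_node(power, own_node_level=50):
    # Samples at or above this level are attributed to the co-located
    # node under test (its narrowband signal is heavily overestimated);
    # everything below is treated as ambient RF activity.
    power = np.asarray(power)
    return power[power < own_node_level]
```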
| 198 |
+
|
| 199 |
+
## 6 CONCLUSIONS AND FUTURE WORK
|
| 200 |
+
|
| 201 |
+
When benchmarking the performance of low-power wireless systems, it is important to account for the inherent variability of the RF conditions in the testing environment. In this paper, we have laid the basis for the creation of a low-cost tool automating the distributed monitoring of RF activity in low-power wireless testbeds. After instrumenting several Raspberry Pi 4B nodes to monitor the RF activity in their surroundings and synchronizing their measurements, we have showcased the ability to quantitatively compare the RF usage during several test runs and detect critical deviations.
|
| 202 |
+
|
| 203 |
+
Our work represents an important step towards a better reproducibility and comparability of results. However, to ensure that the experimental conditions are exactly the same across multiple runs, it is not sufficient to only check the amount of RF activity in the surroundings of the wireless nodes. For example, monitoring that the link quality between the nodes in the testbed (which may vary if nodes are slightly moved, nearby shelves are relocated, or doors are opened) did not change across multiple runs is an orthogonal effort that goes beyond this paper. Similarly, the exemplary strategies to quantify the difference in RF activity across test runs presented in this paper only allow one to objectively conclude how similar the RF conditions were when running several experiments: determining whether the variability of the RF conditions is sufficient to deem two or more test runs as comparable is not in the scope of this work. In the future, we plan to tackle these issues as well, and to integrate our full-fledged RF monitoring approach into the framework of an existing benchmarking facility (e.g., D-Cube [31]).
|
| 204 |
+
|
| 205 |
+
## REFERENCES
|
| 206 |
+
|
| 207 |
+
[1] C. Adjih et al. 2015. FIT IoT-LAB: A Large Scale Open Experimental IoT Testbed. In Proc. of the 2nd World Forum on the Internet of Things (WF-IoT). IEEE.
|
| 208 |
+
|
| 209 |
+
[2] Alex Bereza et al. 2017. Cross-Technology Communication between BLE and Wi-Fi using Commodity Hardware. In Proc. of the 14th EWSN Conf., demo session.
|
| 210 |
+
|
| 211 |
+
[3] B. Bloessl et al. 2013. A GNU Radio-based IEEE 802.15.4 Testbed. In Proc. of the 12th GI/ITG KuVS Fachgespräch Drahtlose Sensornetze (FGSN).
|
| 212 |
+
|
| 213 |
+
[4] C.A. Boano et al. 2011. JamLab: Augmenting Sensornet Testbeds with Realistic and Controlled Interference Generation. In Proc. of the 10th IPSN Conf. IEEE.
|
| 214 |
+
|
| 215 |
+
[5] C.A. Boano et al. 2018. Towards a Benchmark for Low-power Wireless Networking. In Proc. of the 1st CPSBench Workshop. IEEE.
|
| 216 |
+
|
| 217 |
+
[6] C.A. Boano and K. Römer. 2013. External Radio Interference. In Radio Link Quality Estimation in Low-Power Wireless Networks. Springer International Publishing.
|
| 218 |
+
|
| 219 |
+
[7] C.A. Boano, M. Schuß, and K. Römer. 2017. EWSN Dependability Competition: Experiences and Lessons Learned. IEEE Internet of Things Newsletter (2017).
|
| 220 |
+
|
| 221 |
+
[8] J. Brown et al. 2014. Estimating Packet Reception Rate in Noisy Environments. In Proc. of the 9th SenseApp Workshop. IEEE.
|
| 222 |
+
|
| 223 |
+
[9] Y. Chen and A. Terzis. 2010. On the Mechanisms and Effects of Calibrating RSSI Measurements for 802.15.4 Radios. In Proc. of the 7th EWSN Conference. Springer.
|
| 224 |
+
|
| 225 |
+
[10] CREW Project - Cognitive Radio Experimentation World. [n.d.]. TWIST Testbed. [Online] http://www.crew-project.eu/twist.html - Last accessed: 2020-06-05.
|
| 226 |
+
|
| 227 |
+
[11] M. Doddavenkatappa et al. 2011. Indriya: A Low-Cost, 3D Wireless Sensor Network Testbed. In Proc. of the 7th TridentCom Conference. Springer.
|
| 228 |
+
|
| 229 |
+
[12] S. Duquennoy et al. 2011. Lossy Links, Low Power, High Throughput. In Proc. of the 9th Conference on Embedded Networked Sensor Systems (SenSys). ACM.
|
| 230 |
+
|
| 231 |
+
[13] F. Ferrari et al. 2012. Low-Power Wireless Bus. In Proc. of the 10th International Conference on Embedded Network Sensor Systems (SenSys). ACM.
|
| 232 |
+
|
| 233 |
+
[14] S. Fu et al. 2018. Modeling Packet Loss Rate of IEEE 802.15.4 Links in Diverse Environmental Conditions. In Proc. of the 16th WCNC Conference. IEEE.
|
| 234 |
+
|
| 235 |
+
[15] S. Grimaldi et al. 2017. An SVM-Based Method for Classification of External Interf. in Industrial Wireless Sensor and Actuator Networks. JSAN 6, 2 (2017).
|
| 236 |
+
|
| 237 |
+
[16] J.H. Hauer et al. 2010. Mitigating the Effects of RF Interference through RSSI-Based Error Recovery. In Proc. of the 7th EWSN Conference. Springer.
|
| 238 |
+
|
| 239 |
+
[17] F. Hermans et al. 2013. SoNIC: Classifying Interference in 802.15.4 Sensor Networks. In Proc. of the 12th IPSN Conference. ACM.
|
| 240 |
+
|
| 241 |
+
[18] J. Horneber et al. 2014. A Survey on Testbeds and Experimentation Environments for Wireless Sensor Networks. IEEE Comm. Surveys & Tutorials 16, 4 (2014).
|
| 242 |
+
|
| 243 |
+
[19] T. Istomin et al. 2018. Interference-Resilient Ultra-low Power Aperiodic Data Collection. In Proc. of the 17th IPSN Conference. ACM.
|
| 244 |
+
|
| 245 |
+
[20] S. Kumar et al. 2020. Performant TCP for Low-Power Wireless Networks. In Proc. of the 17th NSDI Symposium. USENIX Association.
|
| 246 |
+
|
| 247 |
+
[21] F. Lemic et al. 2014. Infrastructure for Benchmarking RF-based Indoor Localization under Controlled Interference. In Proc. of the UPINLBS Conference. IEEE.
|
| 248 |
+
|
| 249 |
+
[22] C.J.M. Liang et al. 2010. Surviving Wi-Fi Interference in Low Power ZigBee Networks. In Proc. of the 8th SenSys Conference. ACM.
|
| 250 |
+
|
| 251 |
+
[23] R. Lim et al. 2013. FlockLab: A Testbed for Distributed, Synchronized Tracing and Profiling of Wireless Embedded Systems. In Proc. of the 12th IPSN Conf. IEEE.
|
| 252 |
+
|
| 253 |
+
[24] M. Sha. 2020. Binghamton University Wireless Embedded System Testbed. [Online] http://www.cs.binghamton.edu/~msha/testbed - Last accessed: 2020-06-05.
|
| 254 |
+
|
| 255 |
+
[25] C.B. Margi et al. 2010. Impact of Operating Systems on Wireless Sensor Networks (Security) Applications and Testbeds. In Proc. of the 19th ICCCN Conference. IEEE.
|
| 256 |
+
|
| 257 |
+
[26] J. Munoz et al. 2019. OpenTestBed: Poor Man's IoT Testbed. In Proc. of the CNERT Workshop. IEEE.
|
| 258 |
+
|
| 259 |
+
[27] C. Noda et al. 2011. Quantifying the Channel Quality for Interference-Aware Wireless Sensor Networks. ACM SIGBED Review 8, 4 (2011).
|
| 260 |
+
|
| 261 |
+
[28] M. Schulz et al. 2018. Shadow Wi-Fi: Teaching Smartphones to Transmit Raw Signals and to Extract Channel State Information to Implement Practical Covert Channels over Wi-Fi. In Proc. of the 16th MobiSys Conference. ACM.
|
| 262 |
+
|
| 263 |
+
[29] M. Schulz et al. 2018. The Nexmon Firmware Analysis and Modification Framework: Empowering Researchers to Enhance Wi-Fi Devices. ComCom 129 (2018).
|
| 264 |
+
|
| 265 |
+
[30] M. Schuß et al. 2017. A Competition to Push the Dependability of Low-Power Wireless Protocols to the Edge. In Proc. of the 14th EWSN Conference.
|
| 266 |
+
|
| 267 |
+
[31] M. Schuß et al. 2018. Moving Beyond Competitions: Extending D-Cube to Seamlessly Benchmark Low-Power Wireless Systems. In Proc. of CPSBench Workshop.
|
| 268 |
+
|
| 269 |
+
[32] M. Schuß et al. 2019. JamLab-NG: Benchmarking Low-Power Wireless Protocols under Controllable and Repeatable Wi-Fi Interference. In Proc. of the 16th EWSN Conference. Junction Publishing.
|
| 270 |
+
|
| 271 |
+
[33] M. Sha et al. 2013. Energy-efficient Low Power Listening for Wireless Sensor Networks in Noisy Environments. In Proc. of the 12th IPSN Conference. IEEE.
|
| 272 |
+
|
| 273 |
+
[34] A. Spina et al. 2020. XPC: Fast and Reliable Synchronous Transmission Protocols for 2-Phase Commit and 3-Phase Commit. In Proc. of the 17th EWSN Conference.
|
| 274 |
+
|
| 275 |
+
[35] w-iLab.2 documentation. 2020. Spectrum Sensing. [Online] https://doc.ilabt.imec.be/ilabt/wilab/tutorials/spectrum.html - Last accessed: 2020-06-05.
|
| 276 |
+
|
| 277 |
+
[36] F. Wernli. 2019. Frequency Spectrum Monitoring for FlockLab. Technical Report. ETH Zürich. Semester Thesis.
|
| 278 |
+
|
| 279 |
+
[37] S. Yin et al. 2014. Multi Channel Performance of Dual Band Low Power Wireless Network. In Proc. of the 11th MASS Conference. IEEE.
|
papers/MobiCom/MobiCom 2020/MobiCom 2020 Workshop/MobiCom 2020 Workshop CPS-IoTBench/PXwOPetJ2bF/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,187 @@
|
| 1 |
+
§ TOWARDS AN AUTOMATED MONITORING OF RF ACTIVITY IN LOW-POWER WIRELESS TESTBEDS
|
| 2 |
+
|
| 3 |
+
Double-blind submission
|
| 4 |
+
|
| 5 |
+
§ ABSTRACT
|
| 6 |
+
|
| 7 |
+
To rigorously benchmark the performance of low-power wireless protocols, it is essential to monitor and quantify the RF activity in a given testing environment. Indeed, unwanted radio interference in the surroundings of wireless nodes may worsen their communication performance. Similarly, inconsistent RF noise across multiple test runs may prevent a fair comparison of their results. Unfortunately, to date, this aspect is largely neglected by the community, especially due to the lack of monitoring tools enabling a quantitative assessment of RF activity in large testing facilities. In this paper, we take the first steps towards the creation of a low-cost tool automating the distributed monitoring of RF usage in a low-power wireless testbed. Specifically, we first instrument the latest generation of Raspberry Pi devices to sense any ongoing activity on the RF channel, enabling a functionality that is typically not available on off-the-shelf Wi-Fi hardware. We then show that one can synchronize the RF measurements of multiple Raspberry Pi connected to a common Ethernet backbone with an average error below 200 µs. We further devise exemplary strategies to quantify the difference in RF activity across test runs, and enable the real-time detection of deviations in the current RF channel usage compared to what was measured in earlier runs. We finally showcase the ability to compare the RF activity during several test runs and detect when additional interference was present in the environment, as well as when diverse interference patterns were artificially generated.
|
| 8 |
+
|
| 9 |
+
Data availability statement. The firmware used for the data collection and the scripts developed to process the raw data and generate the plots will be made publicly available on an institutional repository (link omitted due to double-blind review). The authors commit to keep the data publicly available for at least two years.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
The research community traditionally validates low-power wireless solutions experimentally on real-world testbeds [18]. A large variety of testbeds exist: from small-scale installations used internally by various research groups [12, 25], to large-scale publicly-available facilities such as FIT IoT-LAB [1], Indriya [11], and FlockLab [23].
|
| 14 |
+
|
| 15 |
+
An aspect that is common to most of these low-power wireless testbeds is that they are located in office or university buildings, i.e., the nodes are deployed in open spaces and dynamic environments. As a consequence, these testbeds are often subject to a level of uncontrollable RF activity, e.g., generated by laptops, smartphones, and other devices used by people operating in close proximity.
|
| 16 |
+
|
| 17 |
+
This is especially relevant as low-power wireless nodes are highly susceptible to radio interference [6]. For example, the transmissions of IEEE 802.15.4 and Bluetooth Low Energy devices are highly vulnerable to transmissions of surrounding Wi-Fi devices, which share the same frequencies (the 2.4 GHz ISM band), use a wider channel bandwidth (20 to 40 MHz), and operate at a transmission power that is higher by several orders of magnitude (up to 20 dBm).
|
| 18 |
+
|
| 19 |
+
As a result, when evaluating the performance of low-power wireless protocols and comparing it to the state of the art, it is common to make use of testbeds at night or during weekends [6, 9, 14, 19, 20, 34, 37], i.e., when buildings are at their quietest, so as to minimize the impact of external interference on the experiments.
|
| 20 |
+
|
| 21 |
+
However, some occasional RF activity may still be present in the testbed area, e.g., due to the idle activities of Wi-Fi access points installed nearby, or due to night owls working until late. Such RF activity may be sufficient to bias the experiments and lead to wrong conclusions, for example when comparing the reliability of state-of-the-art protocols, which is nowadays often close to 100% [7].
|
| 22 |
+
|
| 23 |
+
Generalizing, whenever benchmarking protocol performance, it is important to account for the inherent variability of the experimental conditions and to detect any deviations in the RF environment. The same holds true when carrying out experiments involving the generation of artificial radio interference to stress-test protocols (e.g., using tools similar to JamLab-NG [32]): the synthetic interference patterns should remain consistent throughout different runs and no uncontrolled RF noise should be present in the surroundings. Only in this way can one ensure reproducible and comparable results.
|
| 24 |
+
|
| 25 |
+
Ideally, such RF activity monitoring is fully automated and integrated into the experimentation chain, i.e., offered by low-power wireless testbed facilities, as highlighted by Boano et al. in an open manifesto to the community [5]. This way, the testing infrastructure can autonomously invalidate and rerun measurements, for example, when the RF activity deviates largely from that of previous runs.
|
| 26 |
+
|
| 27 |
+
Challenges. However, in order to integrate such functionality into existing testing facilities, several challenges need to be tackled.

Accurate monitoring of RF activity on a large scale. First, one needs the ability to observe the RF spectrum across an entire testbed installation. One approach consists of using low-power wireless nodes (e.g., TelosB nodes and nRF52840 dongles) spread across the testbed to scan the received signal strength in their surroundings. However, besides introducing extra costs, this approach is not optimal due to the limited channel bandwidth of these devices, which makes them unsuitable to accurately detect Wi-Fi activity. Using Wi-Fi devices to fulfil the same task is not feasible, as Wi-Fi hardware does not allow developers to measure RF activity. Therefore, one currently has to resort to spectrum analyzers and software-defined radios [3, 21], which is very expensive and does not scale when testbed installations span several floors or large buildings.
|
| 28 |
+
|
| 29 |
+
Synchronization of distributed RF measurements. A second challenge is to establish a common timebase in order to correlate and fuse the RF measurements of several nodes. Depending on the employed hardware, this may be complex: for example, when connecting RF monitoring devices via USB, the jitter of the FTDI interface makes it hard to accurately timestamp their measurements [30].

Quantitative assessment of RF activity during a run. The ability to monitor the RF spectrum, alone, is insufficient. Without a metric quantifying the RF activity during a test run, only a visual inspection of the RF channel is possible, which is subjective and only allows a qualitative assessment [22, 33]. Instead, to rigorously benchmark protocols and claim reproducibility and comparability, a quantitative assessment is necessary. To this end, one needs to identify which data a device should collect to objectively and unambiguously quantify the amount of RF activity in its surroundings. Moreover, when using this data to derive a metric capturing the amount of RF activity, one should be able to filter the transmissions of the low-power wireless nodes that are part of the testing facility (i.e., the devices running the solution being tested). Without doing so, the computed metric would not only capture the amount of RF noise, but also the "spectrum friendliness" of the tested solution. Ideally, one would have the ability to distinguish between the two.

Comparing RF activity across test runs. Finally, as the ultimate goal is to determine whether different test runs have been executed under similar settings, one needs to weigh the currently-measured RF activity against that of previous runs. Specifically, it should be possible to juxtapose the metrics computed across different runs and return whether there were major deviations in RF activity (e.g., additional RF noise or different interference patterns). One should not only account for temporal deviations, but also for spatial discrepancies, as wireless nodes are typically spread across a large area. This comparison process should ideally require a limited amount of time and not be resource-intensive. This way, right at the end of an experiment, one can deem whether a rerun is necessary.
|
| 30 |
+
|
| 31 |
+
Contributions. In this paper, we tackle these challenges and take the first steps towards the creation of a low-cost tool automating the monitoring of RF activity in low-power wireless testbeds.
|
| 32 |
+
|
| 33 |
+
We first show that it is possible to use the Wi-Fi module embedded on off-the-shelf Raspberry Pi 3B+/4 hardware to monitor RF usage at sufficient granularity to recognize common interference sources in the 2.4 GHz band. This is important, as these devices are often already used as observer nodes in low-power wireless testbeds to orchestrate activities, measure performance, and generate RF noise [26, 30, 32]. We achieve this by using Nexmon, a C-based firmware patching framework for Cypress Wi-Fi chips [28, 29].
|
| 34 |
+
|
| 35 |
+
We then show that one can synchronize the RF measurements of multiple Raspberry Pi 4B (RPi4) connected to a common Ethernet backbone (i.e., in the same way as observer nodes are connected in a testbed facility), with an average error below 200 µs. This enables us to correlate distributed RF measurements and devise exemplary strategies to quantify the difference in RF activity across test runs.
|
| 36 |
+
|
| 37 |
+
Specifically, we use the distribution of the observed power over time at the various nodes and illustrate different techniques enabling the real-time detection of deviations in the RF channel usage compared to what was measured in earlier runs. We further show how this approach allows filtering the activity of a testbed's own nodes, and showcase the ability to detect when additional radio interference was present in the environment, as well as when diverse interference patterns were artificially generated by JamLab-NG.
|
| 38 |
+
|
| 39 |
+
After describing related work in § 2, this paper proceeds as follows:
|
| 40 |
+
|
| 41 |
+
* We instrument RPi4 devices to monitor nearby RF activity (§ 3).
|
| 42 |
+
|
| 43 |
+
* We illustrate how we can observe the same RF activity across multiple RPi4 with low synchronization errors (§ 4).
|
| 44 |
+
|
| 45 |
+
* We describe how to quantify the difference and detect deviations in the measured RF activity across several test runs (§ 5).
|
| 46 |
+
|
| 47 |
+
* We conclude the paper in § 6 with a discussion of future work.
|
| 48 |
+
|
| 49 |
+
§ 2 RELATED WORK
|
| 50 |
+
|
| 51 |
+
To account for variations in RF activity and avoid inconsistencies across test runs, researchers often monitor the RF spectrum and determine if its usage is steady. To this end, they use low-cost spectrum analyzers such as the Wi-Spy ${}^{1}$ [21, 22, 33], or low-power wireless nodes to sample the received signal strength [13, 16]. However, this process mostly consists of a visual inspection of the RF channel usage, which is subjective and only allows a qualitative assessment. Instead, we aim to provide a quantitative assessment of RF activity.
|
| 52 |
+
|
| 53 |
+
A few low-power wireless testbed facilities (e.g., FlockLab [36], TWIST [10, 21], and w-iLab.2 [35]) embed software-defined radios, Wi-Spy devices, or high-end spectrum analyzers to allow their users to monitor RF activity. However, they have just one monitoring node across the testbed [3], operate only at sub-GHz frequencies [36], or rely on old Wi-Fi hardware such as the ath9k chips [35]. Moreover, they do not perform any synchronization of the distributed measurements and do not endeavour to compute a metric quantifying the RF activity that would enable a better reproducibility and comparability of results, which is the ultimate goal of our work.
|
| 54 |
+
|
| 55 |
+
A few researchers have analyzed the RF activity on a channel using off-the-shelf hardware, albeit for different purposes. Hermans et al. [17] aim to identify the source of interference in IEEE 802.15.4 networks. Similarly, Grimaldi et al. [15] aim to classify external interference in real time via supervised learning. Noda et al. [27] quantify the quality of the channel to build interference-aware wireless sensor networks. Brown et al. [8] measure the probability distribution function of idle periods to estimate the packet reception rate of an IEEE 802.15.4 network before deployment. All these works monitor the RF channel with the goal of mitigating radio interference. In contrast, in this work, we aim to perform distributed RF monitoring to account for the inherent variability of the conditions in the testing environment and inform the user accordingly.
§ 3 MONITORING SURROUNDING RF ACTIVITY USING OFF-THE-SHELF WI-FI HARDWARE
Our aim is to design a low-cost solution to monitor the RF activity within a low-power wireless testbed. To this end, as discussed in §2, the use of specialized hardware such as spectrum analyzers and software-defined radios (SDR) is not an option due to their high costs. Indeed, even the cheapest SDR costs over 100 €, and requires a dedicated powerful computer to orchestrate its operations. For this reason, instrumenting a testbed with several of these high-end devices would be both expensive and labor-intensive.
Using a fraction of the low-power wireless nodes embedded in the testbed to directly observe the RF spectrum is also not an option. Indeed, tools such as TI's SmartRF Studio$^2$, Nordic Semiconductor's nRF Connect RSSI viewer$^3$, and Contiki's RSSI scanner$^4$ have only a limited channel bandwidth: one would either require several nodes to monitor a single Wi-Fi channel, or let a single node continuously shift frequency at the cost of a lower sampling rate. Moreover, these low-power radios would need to be connected to one of the observer nodes in the testbed for further data processing or storage.
${}^{1}$ https://www.metageek.com/products/hardware/
${}^{2}$ http://www.ti.com/tool/SMARTRFTM-STUDIO
${}^{3}$ https://github.com/NordicSemiconductor/pc-nrfconnect-rssi
${}^{4}$ https://github.com/contiki-os/contiki/tree/master/examples/rssi-scanner
Figure 1: Sketch of our RF monitoring functionality on RPi4.
The ideal case would be to reuse a testbed's observer nodes for this purpose. For example, many low-power wireless testbeds make use of devices such as the Raspberry Pi as observer nodes to orchestrate activities, measure performance, and generate RF noise [24, 26, 30, 32]. As these devices embed radio modules operating in the 2.4 GHz band, they could also be used to monitor the surrounding RF activity. Unfortunately, while the Raspberry Pi 3B and later revisions embed a Wi-Fi module, they do not allow measuring the strength of the RF signal at an arbitrary point in time, a problem that is common to most off-the-shelf Wi-Fi hardware$^5$.
In this work, we tackle this limitation and enable off-the-shelf observer nodes to monitor the surrounding RF activity. While the Cypress (formerly Broadcom) BCM43455C0 Wi-Fi module found on recent Raspberry Pi devices also does not provide a way to instantaneously measure the strength of the RF signal, one can use reverse engineering and flash patching tools to craft an RF power estimator on these low-cost Wi-Fi modules. To this end, we use Nexmon, a C-based firmware patching framework that has been used in the past to, among other things, enable monitor mode on Cypress Wi-Fi chips [29].
Thanks to Nexmon, one can already record each individual Wi-Fi transmission (even those from networks with which a device is not associated) using tools such as Wireshark or tcpdump, and derive a list of sniffed packets in pcap format. However, besides Wi-Fi activity, no other source of RF noise can currently be monitored.
Therefore, we extend Nexmon as follows. First, we make use of Ghidra$^6$, a software reverse engineering suite, to peer into the inner workings of the BCM43455C0 firmware and spot leftover functionality that is usually not accessible to end-users (e.g., hidden functions that are only partially implemented, as well as remnants of calibration and compliance testing features). We identify one function (wlc_phy_rx_iq_est_acphy) that exactly fits our purposes: it instructs the RF front-end to compute the power estimate over a given number of samples ($2^{10}$ in our implementation). While the power estimate returned by this function is sufficient to monitor RF activity (we use this value in the remainder of this paper), a manual calibration is needed to express the power estimate in dBm.
Building upon this function, we create a userland application (RF Measurement App) and a scheduler running within the BCM43455C0 radio firmware (RF Measurement Scheduler) that interact in order to collect a sequence of RF power estimates, as shown in Fig. 1. Specifically, we use Nexmon's nexutil tool to trigger commands for the RF measurement scheduler using input/output control (IOCTL) system calls. As the overhead of the system calls is significant, polling the radio for RF power measurements would result in a limited and non-deterministic sampling rate. Therefore, similar to the approach used in JamLab-NG [32], we make use of the IOCTL interface to only instruct the radio to begin periodic measurements on a specific channel with a given bandwidth.
Figure 2: Power estimates returned by an RPi4 in the presence of different devices generating RF noise in the 2.4 GHz band.
Internally, the RF measurement scheduler uses a timer (hw_timer) to periodically call the wlc_phy_rx_iq_est_acphy function. We timestamp the power estimates returned by this function with a µs-precision timer (tsf_timer), and generate a UDP packet to be injected into the wlan0 interface. Each UDP packet contains a single timestamped power estimate and is sent to port 5555, such that it can be captured by the RF measurement app accordingly.
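
To make the data path concrete, the following minimal sketch shows how a userland process could collect these UDP packets on port 5555. The payload layout assumed here (a little-endian uint64 tsf timestamp in microseconds followed by an int32 power estimate) is an illustrative assumption, not the documented format: the actual layout is whatever the firmware patch defines.

```python
import socket
import struct

# Minimal sketch of an "RF Measurement App" receiver: it listens for the
# UDP packets that the patched firmware injects into wlan0 on port 5555.
PORT = 5555

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))

while True:
    payload, _addr = sock.recvfrom(64)
    # assumed payload: uint64 tsf timestamp (us) + int32 power estimate
    tsf_us, power = struct.unpack("<Qi", payload[:12])
    print(f"tsf={tsf_us} us  power={power}")
```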
Following this procedure, we can instrument the RPi4 to estimate surrounding RF activity at 7.8 kHz$^7$. Fig. 2 shows exemplary power estimates returned by the RF measurement app in the presence of Wi-Fi and BLE traffic, as well as a microwave oven operating nearby. The Wi-Fi traffic is generated by a second RPi4 placed 2 m away, whereas the BLE traffic is generated by an nRF52840DK node placed 30 cm away. The microwave oven was used to heat up water and was located 1.5 m away from the RPi4 sniffing nearby RF activity. All measurements make use of a channel bandwidth of 20 MHz.
We also place a third RPi4 using Nexmon's monitor mode and tcpdump to capture the duration and strength of the generated Wi-Fi traffic in a pcap file. This third RPi4 is also placed 2 m away from the one generating Wi-Fi traffic and its measurements are synchronized with those of the RPi4 sniffing RF activity as described in §4. Fig. 2a shows that the power estimate returned by the sniffing RPi4 (blue line) correctly captures the over-the-air duration of the Wi-Fi packet extracted from tcpdump's pcap file (orange line).
§ 4 SYNCHRONIZATION OF DISTRIBUTED RF MEASUREMENTS
To measure the RF activity across large-scale testbed installations (i.e., to have a good spatial coverage) and to monitor the activities in the entire 2.4 GHz ISM band (i.e., to monitor several Wi-Fi channels at once), several RPi4 devices should be used to perform RF measurements at the same time. Therefore, it is important to synchronize their activities and establish a common time-base.
$^5$ A notable exception is the ath9k series of Wi-Fi cards from Qualcomm. These chips allow fast polling of a binary clear channel assessment (CCA). Although the CCA threshold can be manually configured, the device can only return a true/false assessment, which is insufficient for an accurate analysis of nearby RF activity [2].
${}^{6}$ https://ghidra-sre.org/
${}^{7}$ Note that one can achieve a higher rate by decreasing the number of samples and by sending several measurements in a single UDP packet. In this work, we focus on a prototypic implementation and leave these optimizations as future work.
Figure 3: Power estimates returned by five RPi4, spread over $16\,\mathrm{m}^2$ in a room, observing the same source of Wi-Fi traffic.
Figure 4: Synchronization error across five RPi4 monitoring the same RF activity. The boxes and whiskers in Fig. 4b show the median (green center line), the first and third quartiles (box body), as well as the 1.5 interquartile range (whiskers).
To this end, we have implemented time-stamping at two points in the measurement chain shown in Fig. 1. We first timestamp the collected power estimates using the RF measurement app. To do this, we employ the RPi4's Unix timestamp: as this is the operating system's time, it can be kept in sync across different RPi4 in a testbed using the network time protocol (NTP), as shown in [32].
However, as each power estimate is independently passed from the RF front-end to the RF measurement app through the operating system and its network stack, one experiences non-deterministic delays that affect the accuracy of individual samples.
Therefore, we add a second timestamp as soon as the power estimates have been sampled by the RF front-end, i.e., right after the wlc_phy_rx_iq_est_acphy function has returned, using the time synchronization function timer (tsf_timer$^8$). This second timestamp provides a more fine-grained resolution that allows us to accurately account for ephemeral RF events (e.g., a sequence of short BLE beacons). One can hence use this timestamp to correct the Unix timestamps added by the RF measurement app.
Following this procedure, we instrument five RPi4 located in the same room and interconnected by an Ethernet backbone to sense the ongoing RF activity on the same Wi-Fi channel. We also place in the same room another RPi4 running JamLab-NG to generate periodic Wi-Fi packets. Fig. 3 shows the power estimates collected from each RPi4 using the Unix timestamp. As expected, due to the different locations of the nodes and their distance from the RPi4 generating Wi-Fi traffic, the absolute value of the estimated power differs. However, each spike, which corresponds to the on-air time of a Wi-Fi packet, is well-synced across the five RPi4.
Figure 5: Probability density function of power estimates computed with $\theta = 300$ s on five RPi4 observing the same Wi-Fi activity. The portion in red marks noise floor samples.
To better quantify the synchronization error across the different RPi4, we consider more than 2000 Wi-Fi packets and compare the timestamp of each rising edge in the estimated power (i.e., the beginning of each spike in Fig. 3). Fig. 4 shows the maximum deviation across all five RPi4 and the relative error to a specific device (pi5). Regardless of which RPi4 is used as reference, the median synchronization error, including all uncertainties in our measurement chain, does not exceed $22\,\mu\mathrm{s}$, with 95% of the samples never exceeding an error of $209\,\mu\mathrm{s}$.
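
As a sketch of how such a comparison could be automated, the snippet below extracts rising edges from per-node power traces and matches each reference edge to the nearest edge on every other node. The threshold value and the trace layout are illustrative assumptions, not part of the original toolchain.

```python
import numpy as np

def rising_edges(t, p, threshold):
    """Timestamps where the power estimate crosses the threshold upwards."""
    above = p > threshold
    idx = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return t[idx]

def sync_errors(traces, reference, threshold=50):
    """traces: dict mapping node name -> (timestamps [s], power estimates).
    Returns, per node, the offsets of its edges to the reference node's."""
    ref_edges = rising_edges(*traces[reference], threshold)
    errors = {}
    for name, (t, p) in traces.items():
        if name == reference:
            continue
        edges = rising_edges(t, p, threshold)
        # match each reference edge to the closest edge on the other node
        j = np.clip(np.searchsorted(edges, ref_edges), 1, len(edges) - 1)
        nearest = np.where(
            np.abs(edges[j] - ref_edges) < np.abs(edges[j - 1] - ref_edges),
            edges[j], edges[j - 1])
        errors[name] = nearest - ref_edges
    return errors
```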
As shown in Figs. 3 and 4, the synchronization accuracy of the Unix timestamp is quite satisfactory. One can further increase accuracy by correcting these timestamps using those obtained with the tsf_timer. Moreover, as the Unix timestamps are already drift-corrected using NTP, one could use linear regression to calculate a correction factor for the timestamps obtained with the tsf_timer. The latter exhibits a drift that strongly depends on the temperature of the RPi4, which varies as a function of its computational load$^9$.
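
A minimal sketch of this correction, assuming paired (tsf, Unix) timestamp samples collected on one node, could look as follows; np.polyfit estimates the drift (slope) and offset jointly.

```python
import numpy as np

# Fit tsf_timer values against NTP-disciplined Unix timestamps to estimate
# the tsf clock's offset and drift, then map tsf timestamps onto the common
# Unix time-base. tsf_us and unix_s are paired samples from one RPi4.
def tsf_to_unix(tsf_us, unix_s):
    slope, intercept = np.polyfit(tsf_us, unix_s, 1)  # drift and offset
    return lambda t: slope * t + intercept

# An 8 ppm drift (5 ms over 10 minutes, as observed above) would show up
# as a slope deviating from the nominal 1e-6 s/us by about 8 parts per million.
```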
§ 5 QUANTIFYING THE DIFFERENCE IN RF ACTIVITY ACROSS TEST RUNS
With the ability to measure the RF activity using off-the-shelf RPi4 outlined in §3 and §4, one can manually inspect the measurements and look for outliers. As this results in subjective and qualitative assessments only (see §2), in this section we derive a metric that allows one to determine in real-time whether the ongoing RF activities are similar to those recorded in a previous experiment. Note that our aim is not to derive the best metric to compare the RF conditions in different experiments, but rather to showcase the feasibility of such a real-time comparison as a seed for future work in the area.
Selecting a metric. In order to enable a real-time detection of deviations in the RF channel usage compared to what was measured in earlier runs, a first necessary step is the selection of a metric condensing the large number of power estimates sampled over time into a compact representation. While one could model or learn different RF interference patterns and compare those with the currently measured power estimates, we choose to make no assumptions about the characteristics of the RF activity and forgo any training. Instead, we derive a probability density function (PDF) of the observed power estimates over a time window $\theta$. This approach is more generic: it accounts for the strength of the RF signal (e.g., capturing whether sources of interference have moved closer or further away over time), and is more practical than learning in the presence of several simultaneous sources of RF noise.
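
A sketch of how this PDF could be computed from a stream of timestamped power estimates is shown below; the bin count and range mirror the values used later in this section, and the function name is ours.

```python
import numpy as np

# Collapse the power estimates observed in a window of length theta into a
# probability density over power values. 128 bins over [-92, 0] follow the
# setup used for the run-to-run comparison below.
def power_pdf(timestamps, power, t_start, theta, bins=128, rng=(-92, 0)):
    mask = (timestamps >= t_start) & (timestamps < t_start + theta)
    hist, edges = np.histogram(power[mask], bins=bins, range=rng, density=True)
    return hist, edges
```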
Fig. 5 shows the PDF computed for five RPi4 deployed in the same configuration used earlier (i.e., in the presence of an additional RPi4 in the same room generating periodic Wi-Fi traffic) over a time window $\theta$ of approximately five minutes. The region marked in red indicates the absence of RF noise (i.e., the noise floor of each RPi4). Although the PDF shown in Fig. 5 captures the ongoing RF activity measured by each RPi4, it does not contain fine-grained information about the power estimates in the time domain. For this reason, RF interference occurring for a short amount of time gets averaged out and cannot be accounted for. To mitigate this problem, one can simply shorten the observation window $\theta$, such that ephemeral RF interference can also be accounted for.
$^8$ The tsf_timer is used by Nexmon's monitor mode to perform time-stamping at the MAC layer and keep the Wi-Fi stations connected to the same access point (AP) synchronized. However, as an RPi4 is not connected to an AP when collecting power estimates, its measurements are not automatically synced to those of nearby nodes.
$^9$ We observed clock differences in the range of 5 ms within 10 minutes (i.e., 8 ppm).
Figure 6: Deviation between the power estimates obtained in three exemplary runs (see Fig. 7a) using three different methods.
Figure 7: Power estimates recorded by an RPi4 (a) and PRR of two TelosB nodes (b) in the presence of Wi-Fi interference.
Quantifying deviations in RF channel usage. In order to enable an automatic comparison of RF activity between runs, we investigate how to quantitatively compare two PDFs (such as the one shown in Fig. 5). To this end, we reuse existing methods included in the popular computer vision suite opencv to compare histograms$^{10}$. Among others, we make use of correlation (Eq. 1), the Hellinger distance (Eq. 2), and the Kullback-Leibler divergence (Eq. 3), which are defined as follows:
$$
d\left( H_{1}, H_{2}\right) = \frac{\sum_{I}\left( H_{1}(I) - \bar{H}_{1}\right) \left( H_{2}(I) - \bar{H}_{2}\right)}{\sqrt{\sum_{I}\left( H_{1}(I) - \bar{H}_{1}\right)^{2} \sum_{I}\left( H_{2}(I) - \bar{H}_{2}\right)^{2}}} \tag{1}
$$

$$
d\left( H_{1}, H_{2}\right) = \sqrt{1 - \frac{1}{\sqrt{\bar{H}_{1}\bar{H}_{2}N^{2}}} \sum_{I}\sqrt{H_{1}(I) \cdot H_{2}(I)}} \tag{2}
$$

$$
d\left( H_{1}, H_{2}\right) = \sum_{I} H_{1}(I) \log\left( \frac{H_{1}(I)}{H_{2}(I)} \right) \tag{3}
$$
where $N$ is the number of histogram bins, $H_{k}(I)$ represents the bin of histogram $k$ corresponding to power estimate $I$, $\bar{H}_{k} = \frac{1}{N}\sum_{J} H_{k}(J)$, and $d\left( H_{1}, H_{2}\right)$ is a representation of the "distance" between histograms computed based on the three aforementioned techniques.
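
For illustration, the comparison can be carried out directly with opencv's compareHist, which implements all three metrics (its HISTCMP_BHATTACHARYYA method computes the Hellinger distance of Eq. 2). The wrapper below is a sketch of one plausible usage, not the authors' exact code.

```python
import cv2
import numpy as np

# h_run1/h_run2 stand for the 128-bin histograms of power estimates from
# two runs; compareHist expects float32 input.
def compare_runs(h_run1, h_run2):
    h1 = np.asarray(h_run1, dtype=np.float32)
    h2 = np.asarray(h_run2, dtype=np.float32)
    return {
        "correlation": cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL),        # Eq. 1
        "hellinger": cv2.compareHist(h1, h2, cv2.HISTCMP_BHATTACHARYYA),   # Eq. 2
        "kl_divergence": cv2.compareHist(h1, h2, cv2.HISTCMP_KL_DIV),      # Eq. 3
    }
```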
We compare the deviation in RF activity across different runs using these three methods as follows. Using the same setup illustrated previously, we let five RPi4 record power estimates on Wi-Fi channel 7 (2442 MHz) over 5-minute runs. During each run, a nearby Raspberry Pi 3B using JamLab-NG generates a reproducible interference pattern on the same Wi-Fi channel during the first minute and the last three minutes (i.e., no interference is generated from 60 s to 120 s). We also collocate two TelosB nodes running Contiki in the same room. These two nodes periodically exchange 8 packets/s using nullmac, nullrdc, and Rime on IEEE 802.15.4 channel 18 (2440 MHz), logging the packet reception rate (PRR), i.e., the number of correctly received packets over time.
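
As an aside, the PRR reported in Fig. 7b can be reconstructed from the sequence numbers of correctly received packets; the sketch below assumes one-second windows at the 8 packets/s transmission rate (the window size and function name are ours).

```python
import numpy as np

# Packet reception rate over time from the sequence numbers of correctly
# received packets, assuming the sender transmits 8 packets per second.
def prr_over_time(rx_seqnos, pkts_per_window=8):
    rx = np.zeros(int(max(rx_seqnos)) + 1, dtype=bool)
    rx[np.asarray(rx_seqnos, dtype=int)] = True
    # fraction of packets received in each one-second window
    n = len(rx) // pkts_per_window * pkts_per_window
    return rx[:n].reshape(-1, pkts_per_window).mean(axis=1)
```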
Fig. 7 shows the recorded power estimates of an exemplary RPi4, as well as the PRR of the TelosB nodes for three different runs. While the first and the second run are identical, in the third run we purposely create changes in the RF environment: a short burst of strong interference at 90 s lasting five seconds, and a switch to a different (lighter) interference pattern in the last two minutes of the run, i.e., after 180 s. Note that the spikes in Fig. 7a at 120 s and 240 s are due to a periodic self-calibration function of the Wi-Fi module and can be easily filtered due to their high value (up to 5000).
Fig. 6 shows the deviation between the three runs using the aforementioned histogram comparison methods. We employ $\theta = 1$ s and use histograms of 128 bins in the range $[-92, 0]$. While correlation has the advantage of having an upper bound, obtained by comparing the first run (run1) with itself, the Hellinger distance and the Kullback-Leibler divergence start from 0 for identical histograms and increase proportionally to the difference between runs. From these results, we can conclude that the Hellinger distance is especially sensitive to small changes, whereas correlation does not penalize smaller deviations in the RF activity. Conversely, the Kullback-Leibler divergence can capture and significantly penalize ephemeral changes such as the self-calibration spike at 240 s.
All three methods clearly identify the artificial changes in RF usage introduced in the third run and are suitable to detect significant deviations in RF activity. To ultimately assess whether one should invalidate a test run, one can use Fig. 7b to gauge the impact of the changes in RF activity on the PRR between the TelosB nodes. Note that all three methods are computed within a fraction of a second, while the computation of the histogram takes roughly 2 s on a single core of a modern processor. Hence, one can quickly detect deviations in the RF channel usage at the end of a test run, and autonomously decide whether to refute the results and re-run the same experiment, as envisioned in [5].
Filtering the activity of a testbed's own nodes. So far, we have only focused on the detection of surrounding RF activity and ignored the impact of the transmissions of co-located low-power wireless nodes. However, when integrating such a solution into a low-power wireless testbed facility, one should be able to filter the transmissions of low-power wireless nodes that are part of the testing facility (i.e., the devices running the solution being tested).
$^{10}$ https://docs.opencv.org/3.4/d6/dc7/group__imgproc__hist.html#ga994f53817d621e2e4228fc646342d386
Figure 8: Power estimates for five RPi4 spread out across a room observing a single TelosB generating RF activity.
Traditionally, such low-power wireless nodes are directly attached to the observer nodes in the testbed, i.e., they are located in very close proximity. Our experiments have indeed shown that one can easily recognize and filter transmissions of low-power wireless nodes located in very close proximity to the RPi4 due to the high magnitude of the power estimate. Fig. 8 shows the power estimate of five RPi4 devices in the presence of a TelosB node sending packets periodically (a) and emitting a continuous modulated carrier tone (b) as in [4] using a transmission power of 0 dBm. The TelosB node is attached to pi5 and is located about 50 cm away from pi2, 1 m away from pi3, and 2 m away from pi1 and pi4. As Fig. 8 shows, the BCM43455C0 heavily overestimates the narrowband signal to over 400 on pi5, whereas pi2 still reports a power estimate of about 50, which is easily distinguished from surrounding RF interference that typically returns a lower power estimate. At about 2 m, the signal is indistinguishable from the noise floor and cannot be detected by pi1 and pi4. Therefore, one can, in principle, be agnostic to the transmissions of the nodes attached to an RPi4 acting as observer node in a low-power wireless testbed.
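
A minimal sketch of such filtering, using illustrative thresholds read off Figs. 7a and 8 (the exact values would need per-deployment calibration), could be:

```python
import numpy as np

def filter_power(power, own_node_threshold=200, calib_threshold=1000):
    """Drop samples attributable to the node's own attached device or to the
    Wi-Fi module's periodic self-calibration. Thresholds are illustrative:
    attached TelosB transmissions showed up above ~400, self-calibration
    spikes reached ~5000, while surrounding interference stayed lower."""
    power = np.asarray(power, dtype=float)
    return power[power < min(own_node_threshold, calib_threshold)]
```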
§ 6 CONCLUSIONS AND FUTURE WORK
When benchmarking the performance of low-power wireless systems, it is important to account for the inherent variability of the RF conditions in the testing environment. In this paper, we have laid the basis for the creation of a low-cost tool automating the distributed monitoring of RF activity in low-power wireless testbeds. After instrumenting several Raspberry Pi 4B nodes to monitor the RF activity in their surroundings and synchronizing their measurements, we have showcased the ability to quantitatively compare the RF usage during several test runs and detect critical deviations.
Our work represents an important step towards better reproducibility and comparability of results. However, to ensure that the experimental conditions are exactly the same across multiple runs, it is not sufficient to only check the amount of RF activity in the surroundings of the wireless nodes. For example, the ability to verify that the link quality between the nodes in the testbed (which may vary if nodes are slightly moved, nearby shelves are moved, or doors are opened) did not change across multiple runs is an orthogonal effort that goes beyond this paper. Similarly, the exemplary strategies to quantify the difference in RF activity across test runs presented in this paper only allow one to objectively conclude how similar the RF conditions were when running several experiments: determining whether the variability of the RF conditions is sufficient to deem two or more test runs as comparable is not in the scope of this work. In the future, we plan to also tackle these issues, and to integrate our full-fledged RF monitoring approach into the framework of an existing benchmarking facility (e.g., D-Cube [31]).
papers/NeurIPS/NeurIPS 2019/NeurIPS 2019 Workshop/NeurIPS 2019 Workshop Neuro_AI/B1Mo4XFL8H/Initial_manuscript_md/Initial_manuscript.md
# Differentiating Granger Causal Influence and Stimulus-Related Information Flow
Anonymous Author(s)
Affiliation
email
## Abstract
Information flow is becoming an increasingly popular term in the context of understanding neural circuitry, both in neuroscience and in Artificial Neural Networks. Granger causality has long been the tool of choice in the neuroscience literature for identifying functional connectivity in the brain, i.e., pathways along which information flows. However, there has been relatively little work on providing a fundamental theory for information flow, and as part of that, understanding whether Granger causality captures the intuitive direction of information flow in a computational circuit. Recently, Venkatesh et al. [2019] proposed a theoretical framework for identifying stimulus-related information paths in a computational graph. They also provided a counterexample showing that the direction of greater Granger causal influence can be opposite to that of information flow [Venkatesh and Grover, 2015]. Here, we reexamine and expand on this counterexample. In particular, we find that Granger causal influence can be statistically insignificant in the direction of information flow, while being significant in the opposite direction. By examining the mutual (and conditional mutual) information that each signal shares with the stimulus, we are able to gain a more nuanced understanding of the actual information flows in this system.
## 1 Introduction
Information flow is starting to gain importance in both neuroscience and artificial intelligence for understanding biological and artificial neural networks. For instance, several works have sought to gain an intuition for how deep neural networks operate by examining how information propagates through these networks [Tishby et al., 2000, Goldfeld et al., 2019, Yu et al., 2018, Tax et al., 2017]. At the same time, numerous works seek to understand the brain by understanding how information flows in biological neural circuits [Almeida et al., 2013, Brovelli et al., 2004, Bar et al., 2006, Greenberg et al., 2012, Lalo et al., 2008]. Both these areas have seen an increased use of information-theoretic tools for examining information flow. In the context of the brain, Granger causality and its derivatives, including Transfer Entropy and Directed Information, have been used extensively to understand functional relationships between different areas of the brain. On the other hand, analyses of neural networks have typically entailed the use of mutual information. In what follows, we consider both biological and artificial neural networks to be instances of neural circuits that can be modeled in the form of a graph of interconnected nodes, with transmissions on edges [Venkatesh et al., 2019].
Contrast the following two interpretations of the notion of "information flow", both prevalent in the neuroscience literature: (i) the first refers to "information" in the abstract, and indicates that one part of a neural circuit influences another; (ii) the second refers to some very specific information, for example, information about a stimulus in a neuroscientific experiment, or information about two classes in an ANN. In this paper, we argue that the use of Granger Causality-based tools in neuroscience should be restricted solely to understanding the first variety of "information flow"
mentioned above. In order to make inferences on stimulus-related information flows, neuroscience should take after the field of AI and use information-theoretic tools (or approximations thereof), computed between the stimulus and the neural activity of interest. We make this point by discussing a counterexample based on a feedback communication system, which shows that Granger causal influence can be greater in a direction opposite to that of information flow. While this example has been presented before by Venkatesh and Grover [2015], they leave several questions unanswered: the authors only compare feedforward and feedback Granger-causal influences, and do not provide a statistical or computational analysis to back up their claims. Furthermore, they provide no immediate solution that identifies the correct flows of information in this system. In a later paper [Venkatesh et al., 2019], despite constructing a framework for analyzing information flow, the authors do not define a quantitative measure for information flow, or provide satisfactory resolution to this issue: the notion of "derived information" they define is, at best, cumbersome to apply in this setting. By undertaking a computational and statistical study of this example, we address both these drawbacks of the aforementioned works.
Granger causality, along with its derivatives, is known to have several shortcomings, which have been discussed at length previously. These criticisms have largely been associated with the fact that Granger causality does not capture true causal influence [Pearl, 2009], or that it may provide erroneous results in the presence of hidden nodes [Pearl, 2009, p. 54], measurement noise [Andersson, 2005, Nalatore et al., 2007] or improper preprocessing techniques [Gong et al., 2015]. However, we share the belief opined by Venkatesh et al. [2019] that the inability to interpret Granger causal influence as stimulus-related information flow is a much more fundamental issue, which limits the kinds of inferences one is able to make about the computation being performed by the neural circuit.
## 2 Results
The counterexample demonstrated by Venkatesh and Grover [2015] is based on a feedback communication scheme, which was originally proposed by Schalkwijk and Kailath [1966]. As mentioned before, while the counterexample was examined theoretically in a limited setting, it was never subjected to a computational evaluation or a rigorous statistical analysis. Our main results are two-fold:
1. We perform a rigorous statistical analysis of the feedback-based counterexample given by Venkatesh et al. [2019], and show that the result is much stronger than previously supposed: Granger causal influences can be statistically insignificant in the direction of information flow, while being highly significant in the opposite direction.
2. We also show that one can obtain a better understanding of the system by examining the stimulus-related information flow. In particular, measuring the mutual and conditional mutual information between the signals and the stimulus (here, the message being communicated) allows one to interpret the true direction of information exchange in the system.
### 2.1 The Schalkwijk and Kailath Counterexample
The Schalkwijk and Kailath scheme [1966] is a strategy for efficiently communicating a message from a transmitter to a receiver (here, we refer to them as Alice and Bob respectively) in the presence of a feedback channel (see Fig. 1a). Suppose Alice wishes to send a message $\theta \sim \mathcal{N}(0,1)$ to Bob. The feedforward and feedback channels between Alice and Bob are noisy, with signal-to-noise ratios (SNRs) characterized by the noise variances ${\sigma}_{N}^{2}$ and ${\sigma}_{R}^{2}$ respectively. To start with, we assume that the feedback channel has a higher SNR than the feedforward link$^1$, i.e., ${\sigma}_{R}^{2} < {\sigma}_{N}^{2}$. The scheme proceeds iteratively: Alice starts by communicating the message to Bob, i.e., $X_{1} = \theta$. Bob receives a noisy version, $Y_{1} = X_{1} + Z_{1}$, from which he computes an estimate $\widehat{\theta}_{1}$. He transmits this estimate back to Alice on the feedback channel, and she receives the noisy estimate $\widetilde{\theta}_{1} = \widehat{\theta}_{1} + R_{1}$. Subsequently, Alice transmits the error in Bob's last best estimate, $X_{i} = \theta - \widetilde{\theta}_{i-1}$, while Bob uses these noisy error terms to improve his estimate over time: $\widehat{\theta}_{i} = \widehat{\theta}_{i-1} + Y_{i}/i$. It can be shown that,
---
$^1$ The version of the scheme we present here is simplified from the original Schalkwijk and Kailath scheme, for ease of analysis. Also, this scheme is communication-theoretically optimal when the feedback channel is noiseless; however, it continues to work (if sub-optimally) even when noise is present in the feedback link.
---

Figure 1: (a) A schematic of the counterexample based on the Schalkwijk-Kailath feedback communication scheme; (b) A comparison of Granger Causal influences (GCIs) at different reverse-noise-ratios, ${\sigma}_{R}/{\sigma}_{N}$. The violin plots indicate the null distributions based on the permutation test described in Section 2.1, while the error bars show the mean and standard error of GCI. ${\sigma}_{N} = 0.1$ for this plot; (c) Mutual (and conditional mutual) information between the stimulus and Alice's and Bob's transmissions. $I(\theta; \widehat{\theta}_{i})$ slowly increases with $i$, while $I(\theta; X_{i} \mid \widehat{\theta}_{i-1})$ slowly falls, indicating that Alice is communicating information about the message $\theta$ to Bob.
using this scheme, Bob’s estimate eventually converges to the true value of the message: $\widehat{\theta } \rightarrow \theta$ (see Venkatesh and Grover [2015] for a proof). Given the ubiquity of feedback links in the brain, such counterexamples deserve careful attention.
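
To make the setup concrete, the following is a minimal simulation sketch of the simplified scheme described above (the variable names and the use of numpy are our own); it reproduces the recursions $X_i = \theta - \widetilde{\theta}_{i-1}$ and $\widehat{\theta}_i = \widehat{\theta}_{i-1} + Y_i/i$.

```python
import numpy as np

def simulate_sk(theta, T, sigma_n, sigma_r, rng):
    """One trial of the simplified Schalkwijk-Kailath scheme."""
    x = np.zeros(T)            # Alice's transmissions X_i
    theta_hat = np.zeros(T)    # Bob's estimates theta_hat_i
    est = 0.0                  # Bob's running estimate
    theta_tilde = 0.0          # Alice's noisy copy of Bob's estimate
    for i in range(T):
        x[i] = theta if i == 0 else theta - theta_tilde
        y = x[i] + rng.normal(0.0, sigma_n)           # feedforward channel
        est = est + y / (i + 1)                       # theta_hat update
        theta_hat[i] = est
        theta_tilde = est + rng.normal(0.0, sigma_r)  # feedback channel
    return x, theta_hat

rng = np.random.default_rng(0)
x, th = simulate_sk(theta=rng.normal(), T=100, sigma_n=0.1, sigma_r=0.01, rng=rng)
```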
Suppose we now observe Alice's and Bob's transmissions ($X_{i}$ and $\widehat{\theta}_{i}$), and wish to use a Granger causal analysis to determine how information flows in this setting. Intuitively, Alice's past transmissions do not predict Bob's future transmissions well: the $X_{i}$'s are corrupted by noise and $\widehat{\theta}$ is a poor estimate initially. On the other hand, when the noise in the feedback link is small (${\sigma}_{R}^{2} < {\sigma}_{N}^{2}$), Bob's past transmissions predict Alice's future transmissions: $X_{i} = \theta - \widetilde{\theta}_{i-1} \approx \theta - \widehat{\theta}_{i-1}$. Since the Granger causal influence (GCI) from Alice to Bob effectively measures the extent to which Alice's past transmissions help in predicting Bob's future transmissions, we can conclude that the GCI from Bob to Alice is greater than that from Alice to Bob.
We demonstrate this computationally in Fig. 1b. We simulated the Schalkwijk and Kailath scheme for $T = 100$ time steps and $n = 100$ trials. We computed GCIs by fitting an autoregressive model of order $p = 10$ to the data. Fig. 1b shows the mean GCI over 100 trials (error bars represent the standard error of the mean). We assessed the statistical significance of the result using the method described by Brovelli et al. [2004]: we permuted the trials of Alice's transmissions and Bob's transmissions independently, to disrupt trial-related dependencies while maintaining the original distributions of the individual transmissions. We then computed the GCIs on the permuted trials. We repeated this process $n_{\text{Perm}} = 100$ times and constructed a histogram of mean GCIs under permutation, which became our empirical estimate of the null distribution. We found that, for a certain regime of ${\sigma}_{R}/{\sigma}_{N}$, the actual GCI from Bob to Alice was far outside the empirical null distribution. The $p$-value of 0.01 was effectively the minimum attainable $p$-value, determined by the number of permutations we performed. Fig. 1b shows that GCIs can be statistically insignificant in the direction of information flow, while at the same time being highly significant in the opposite direction.
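
A sketch of this analysis follows, under the assumption that GCI is computed as the log ratio of residual variances between restricted and full AR($p$) models fit by least squares; this is one standard formulation, not necessarily the exact estimator used for Fig. 1b.

```python
import numpy as np

def _residual_var(Y, X):
    # least-squares fit of Y on regressor matrix X, then residual variance
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return np.var(Y - X @ beta)

def gci(src, dst, p=10):
    """GCI from src to dst: log ratio of the residual variance of the
    restricted AR(p) model (dst past only) over that of the full model
    (dst past plus src past)."""
    T = len(dst)
    Y = dst[p:]
    past_dst = np.column_stack([dst[p - k:T - k] for k in range(1, p + 1)])
    past_src = np.column_stack([src[p - k:T - k] for k in range(1, p + 1)])
    v_restricted = _residual_var(Y, past_dst)
    v_full = _residual_var(Y, np.hstack([past_dst, past_src]))
    return np.log(v_restricted / v_full)

def permutation_null(X, TH, n_perm=100, seed=0):
    """Mean GCI (Alice -> Bob) under independent trial permutations.
    X and TH are (n_trials, T) arrays of Alice's and Bob's transmissions."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for r in range(n_perm):
        perm = rng.permutation(len(X))  # shuffle Alice's trials only
        null[r] = np.mean([gci(X[perm[i]], TH[i]) for i in range(len(X))])
    return null
```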
### 2.2 A Resolution through Mutual Information
Granger causality's failure to identify the direction in which the message flows in the above example can be attributed to the fact that Granger causality only examines predictive influence; it does not capture what that influence is about. Granger causality does not intrinsically check for stimulus-dependence in any way. The recent work of Venkatesh et al. [2019], while defining stimulus-related information flow, does not provide a quantitative measure of information flow, and their partial resolution to the counterexample based on derived information is cumbersome and unsatisfactory.
Here, we take a much simpler approach and show that by measuring mutual and conditional mutual information, we can observe how information about the message evolves in Alice's and Bob's transmissions. Since all variables in this example are Gaussian, the mutual information between the message $\theta$ and any transmission $U$ can be written in terms of their correlation: $I(\theta; U) = -\frac{1}{2}\log\left(1 - \rho(\theta, U)^{2}\right)$, where the correlation $\rho(\theta, U)$ is readily estimated. Fig. 1c shows how the mutual (and conditional mutual) information of $X_{i}$ and $\widehat{\theta}_{i}$ evolve over time steps $i$. In particular, observe that $I(\theta; \widehat{\theta}_{i})$ slowly increases over time $i$, while $I(\theta; X_{i})$ is nearly zero. The conditional mutual information $I(\theta; X_{i} \mid \widehat{\theta}_{i-1})$, however, is much larger and slowly decreases over time, indicating the presence of synergistic information about $\theta$ in the forward link, which decays as the estimate $\widehat{\theta}$ improves.
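
Since everything here reduces to (partial) correlations, these quantities are straightforward to estimate. The sketch below computes the Gaussian mutual information from the formula above and, via partial correlation, its conditional counterpart; the helper names are ours.

```python
import numpy as np

def mi_gaussian(u, v):
    """I(u; v) for jointly Gaussian u, v via their correlation."""
    rho = np.corrcoef(u, v)[0, 1]
    return -0.5 * np.log(1 - rho**2)

def cmi_gaussian(u, v, w):
    """I(u; v | w) via the partial correlation of u and v given w."""
    r_uv, r_uw, r_vw = (np.corrcoef(a, b)[0, 1]
                        for a, b in ((u, v), (u, w), (v, w)))
    rho = (r_uv - r_uw * r_vw) / np.sqrt((1 - r_uw**2) * (1 - r_vw**2))
    return -0.5 * np.log(1 - rho**2)
```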
The decrease of stimulus-related information in Alice's transmissions, and the corresponding increase in Bob's transmissions indicates that information about the stimulus is being conveyed from Alice to Bob and not vice versa. This also indicates that caution must be exercised in interpreting Granger causal influences as conveying stimulus-related information.
## 3 Conclusion
We significantly advanced on a previously proposed counterexample, showing that it is possible for Granger causal influence to be statistically insignificant in the direction of stimulus-related information flow, while being highly significant in the opposite direction. We also demonstrated that quantitative information-theoretic measures, which are finding heavy use in the analysis of artificial neural networks, can be particularly useful in enabling the correct interpretation of the direction of information flow.
## References
J. Almeida, A. R. Fintzi, and B. Z. Mahon. Tool manipulation knowledge is retrieved by way of the ventral visual object processing pathway. Cortex, 49(9):2334-2344, 2013.
J. Andersson. Testing for Granger causality in the presence of measurement errors. Economics Bulletin, 2005.
M. Bar, K. S. Kassam, A. S. Ghuman, J. Boshyan, A. M. Schmid, A. M. Dale, M. S. Hämäläinen, K. Marinkovic, D. L. Schacter, B. R. Rosen, and E. Halgren. Top-down facilitation of visual recognition. PNAS, 103(2): 449-454, 2006.
A. Brovelli, M. Ding, A. Ledberg, Y. Chen, R. Nakamura, and S. L. Bressler. Beta oscillations in a large-scale sensorimotor cortical network: Directional influences revealed by Granger causality. PNAS, 101(26): 9849-9854, 2004.
Z. Goldfeld, E. Van Den Berg, K. Greenewald, I. Melnyk, N. Nguyen, B. Kingsbury, and Y. Polyanskiy. Estimating information flow in deep neural networks. In ICML, pages 2299-2308, 2019.
M. Gong, K. Zhang, B. Schölkopf, D. Tao, and P. Geiger. Discovering temporal causal relations from subsampled data. In Proc. ICML, volume 37 of PMLR, pages 1898-1906, Jul 2015.
A. S. Greenberg, T. Verstynen, Y.-C. Chiu, S. Yantis, W. Schneider, and M. Behrmann. Visuotopic cortical connectivity underlying attention revealed with white-matter tractography. J. Neurosci., 32(8):2773-2782, 2012.
E. Lalo, S. Thobois, A. Sharott, G. Polo, P. Mertens, A. Pogosyan, and P. Brown. Patterns of bidirectional communication between cortex and basal ganglia during movement in patients with Parkinson disease. J. Neurosci., 28(12):3008-3016, 2008.
H. Nalatore, M. Ding, and G. Rangarajan. Mitigating the effects of measurement noise on Granger causality. Phys. Rev. E, 75(3):031123, Mar 2007.
J. Pearl. Causality: Models, Reasoning and Inference. Cambridge University Press, 2009.
J. Schalkwijk and T. Kailath. A coding scheme for additive noise channels with feedback-I: No bandwidth constraint. IEEE Trans. Inf. Th., 12(2):172-182, 1966.
T. Tax, P. Mediano, and M. Shanahan. The partial information decomposition of generative neural network models. Entropy, 19(9):474, 2017.
N. Tishby, F. C. Pereira, and W. Bialek. The information bottleneck method. arXiv:physics/0004057, 2000.
P. Venkatesh and P. Grover. Is the direction of greater Granger causal influence the same as the direction of information flow? In Allerton, pages 672-679, Sept 2015.
P. Venkatesh, S. Dutta, and P. Grover. Information flow in computational systems. arXiv:1902.02292, 2019.
S. Yu, K. Wickstrøm, R. Jenssen, and J. C. Principe. Understanding convolutional neural networks with information theory: An initial exploration. arXiv:1804.06537, 2018.