Add files using upload-large-folder tool
- papers/ISCA/ISCA 2022/ISCA 2022 Workshop/ISCA 2022 Workshop MLArchSys/YhqRKykUmm/Initial_manuscript_md/Initial_manuscript.md +143 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/0QA2qomtW3-/Initial_manuscript_md/Initial_manuscript.md +211 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/0QA2qomtW3-/Initial_manuscript_tex/Initial_manuscript.tex +163 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/4zdPNY3SDQk/Initial_manuscript_md/Initial_manuscript.md +197 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/4zdPNY3SDQk/Initial_manuscript_tex/Initial_manuscript.tex +193 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/6d5El_LENnf/Initial_manuscript_md/Initial_manuscript.md +359 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/6d5El_LENnf/Initial_manuscript_tex/Initial_manuscript.tex +317 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/GtyQbLUUagE/Initial_manuscript_md/Initial_manuscript.md +185 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/GtyQbLUUagE/Initial_manuscript_tex/Initial_manuscript.tex +87 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/PibYaG2C7An/Initial_manuscript_md/Initial_manuscript.md +209 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/PibYaG2C7An/Initial_manuscript_tex/Initial_manuscript.tex +191 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/gb6VM_pTd5E/Initial_manuscript_md/Initial_manuscript.md +293 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/gb6VM_pTd5E/Initial_manuscript_tex/Initial_manuscript.tex +260 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/nfmfqzQ4Mwl/Initial_manuscript_md/Initial_manuscript.md +209 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/nfmfqzQ4Mwl/Initial_manuscript_tex/Initial_manuscript.tex +209 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/rqn2v1Ltgn0/Initial_manuscript_md/Initial_manuscript.md +177 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/rqn2v1Ltgn0/Initial_manuscript_tex/Initial_manuscript.tex +216 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/ymfPxccNUZ/Initial_manuscript_md/Initial_manuscript.md +151 -0
- papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/ymfPxccNUZ/Initial_manuscript_tex/Initial_manuscript.tex +127 -0
- papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/4Xo8nv5DNS/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/4Xo8nv5DNS/Initial_manuscript_tex/Initial_manuscript.tex +0 -0
- papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/N6_kWfbABl1/Initial_manuscript_md/Initial_manuscript.md +447 -0
- papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/N6_kWfbABl1/Initial_manuscript_tex/Initial_manuscript.tex +454 -0
- papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/VdWaMgaTKtX/Initial_manuscript_md/Initial_manuscript.md +529 -0
- papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/VdWaMgaTKtX/Initial_manuscript_tex/Initial_manuscript.tex +345 -0
- papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/bXe1agiq9LN/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/bXe1agiq9LN/Initial_manuscript_tex/Initial_manuscript.tex +567 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Aug_Papers/RgeMS1Tf1zs/Initial_manuscript_md/Initial_manuscript.md +245 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Aug_Papers/RgeMS1Tf1zs/Initial_manuscript_tex/Initial_manuscript.tex +250 -0
- papers/JSYS/JSYS 2022/JSYS 2022 May_Papers/EFTlLmTzmVp/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2022/JSYS 2022 May_Papers/EFTlLmTzmVp/Initial_manuscript_tex/Initial_manuscript.tex +584 -0
- papers/JSYS/JSYS 2022/JSYS 2022 May_Papers/HyUoiQKimL2/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2022/JSYS 2022 May_Papers/HyUoiQKimL2/Initial_manuscript_tex/Initial_manuscript.tex +0 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/-0sywUv8ryL/Initial_manuscript_md/Initial_manuscript.md +685 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/-0sywUv8ryL/Initial_manuscript_tex/Initial_manuscript.tex +525 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/8xwPz-Vx8sq/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/8xwPz-Vx8sq/Initial_manuscript_tex/Initial_manuscript.tex +0 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/Ze6AJKHsOP/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/Ze6AJKHsOP/Initial_manuscript_tex/Initial_manuscript.tex +0 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/nJ6e-3M2rx/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/nJ6e-3M2rx/Initial_manuscript_tex/Initial_manuscript.tex +371 -0
- papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/T-eV4T4h_dc/Initial_manuscript_md/Initial_manuscript.md +495 -0
- papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/T-eV4T4h_dc/Initial_manuscript_tex/Initial_manuscript.tex +422 -0
- papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/hj77eOQNIrx/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/hj77eOQNIrx/Initial_manuscript_tex/Initial_manuscript.tex +0 -0
- papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/s-78X2Y9sm/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/s-78X2Y9sm/Initial_manuscript_tex/Initial_manuscript.tex +0 -0
- papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/sR7rA8txBZF/Initial_manuscript_md/Initial_manuscript.md +0 -0
- papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/sR7rA8txBZF/Initial_manuscript_tex/Initial_manuscript.tex +477 -0
- papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/yuY5n8gMn-s/Initial_manuscript_md/Initial_manuscript.md +0 -0
papers/ISCA/ISCA 2022/ISCA 2022 Workshop/ISCA 2022 Workshop MLArchSys/YhqRKykUmm/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,143 @@
# Unveiling Source of Performance Variance on Searching-based Compiler Optimization

MLArchSys 2022 Submission #10 - Confidential Draft - Do NOT Distribute!!

*Abstract*—In order to squeeze out performance in the post-Moore era, the compiler auto-tuning approach has been widely studied and productized. Despite its superior efficiency on compiler optimization problems, performance variance in the final tuning output has long been an issue for searching-based auto-tuning methods. It poses a challenge to research reproducibility and production stability. In general, the causes of such performance variance span many aspects across different system layers. In addition to these generic causes, we observe that auto-tuners add unique sources of variance, including the use of different search methods and cost models.

In this work, we focus specifically on the performance variance originating from the nature of auto-tuning. Based on our observations, we set three major hypotheses on the search method, the cost model, and hardware characteristics. We then validate our hypotheses through experiments with a production auto-tuner and a representative set of machine learning workloads. Our results suggest impactful factors to consider in future auto-tuner design.
## I. INTRODUCTION

As the performance gain from hardware innovations stagnates due to the end of Moore's Law [10], leveraging latent opportunities in the compiler optimization space is becoming more critical. However, the ever-increasing complexity of hardware and software stacks means that even the most advanced compilers fail, from time to time, to deliver the best optimization settings for individual workloads. This strongly motivates recent efforts on the auto-tuning approach in both industry and academia [1], [6], [8], [9], [11], [12], [14]-[16].

Most auto-tuning techniques are essentially an iterative feedback-directed search over the optimization space: promising candidates are generated, evaluated, and reflected upon to generate even more promising candidates for future tuning trials. Evaluation is usually conducted by actual compilation and run-time measurement for the sake of high accuracy. However, due to its expensive cost, recent studies introduce various cost models that can select superior candidates without the actual measurement [2], [5], [13]. One popular approach is to train a learning-based cost model on-the-fly using the run-time samples collected during the tuning process. Auto-tuning methods can then use this cost model to cheaply filter out non-promising candidates and evaluate only the promising ones with actual measurement. In addition, to effectively explore the extremely large and complicated optimization search space, these auto-tuning methods adopt intelligent search methods to make the best use of the feedback from the evaluation. Evolutionary search [16], multi-armed bandit [14], ensemble search [1], and reinforcement learning [6], [9] are representative examples.
Overall, the auto-tuning approach has successfully demonstrated its strong performance and is widely deployed in industry [6], [11], [12], [15], [16]. However, performance variance in the final tuning outcome has been problematic and remains one of the major challenges to the reproducibility and production stability of auto-tuning methods. The sources of variance are diverse and lie across multiple system layers, ranging from hardware to software stacks (e.g., non-deterministic behavior of hardware, interrupts from the operating system, etc.). Auto-tuners also add their own sources of variance. For example, since most search methods are randomization-based, the search pattern may differ across tuning runs.

To address this problem, we investigate the representative root causes of the performance variance. Specifically, this work focuses on the unique sources of variance that originate from the nature of the auto-tuning approach. Based on our experience with production auto-tuners, we set three hypotheses on the search method, the cost model, and hardware properties. We then conduct experiments to verify each of the hypotheses. Given the importance of the problem, we believe this is a meaningful step toward attacking a long-standing concern in auto-tuners.

The contributions of this paper are as follows:
- We discuss the unique sources of performance variance in the popular auto-tuning approach and set three hypotheses to assess their influence.

- We analyze our hypotheses with a production auto-tuner and representative machine learning workloads on AWS Cloud instances.

- We outline our next steps, as future work, to address this enormous challenge in the auto-tuning approach.
## II. POTENTIAL SOURCES OF PERFORMANCE VARIANCE IN AUTO-TUNERS

Figure 1 illustrates the overarching workflow of the popular auto-tuning approach with a learning-based cost model [2], [5], [16]. Once users provide the tuning budget (e.g., number of candidate trials), auto-tuners repeatedly perform four steps using three major components: the search method, the cost model, and the measurer. First, the search method, such as evolutionary search [7], generates a set of promising candidates and queries the cost model to predict the relative quality of the candidates and filter out non-promising ones. Then, the measurer takes the filtered candidates and evaluates them on the actual hardware. This involves the actual compilation that applies the optimization setting in each candidate and the run-time measurement of the compiled executable. Once the run-time performance is collected, it is provided as feedback to both the search method and the cost model. The search method uses the run-time performance as feedback to generate more promising candidates in the next iteration (e.g., mutation and cross-over in evolutionary search [7]). The cost model, on the other hand, uses the run-time performance for its training process. Furthermore, recent work [17] has shown that past performance tuning datasets can be leveraged to pretrain the cost model.
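To make the four steps concrete, the following is a minimal Python sketch of the feedback loop described above. All names (`SearchMethod`-style `search`, `cost_model`, `measure`, `top_k`) are hypothetical placeholders for illustration, not the actual MetaSchedule API:

```python
# Hypothetical sketch of the auto-tuning loop in Figure 1.
def auto_tune(workload, budget, search, cost_model, measure, top_k=64):
    best = None
    for _ in range(budget // top_k):
        # Step 1: the search method proposes raw candidates.
        candidates = search.generate(workload)
        # Step 2: the cost model ranks them; keep only the predicted top-k.
        scores = cost_model.predict(candidates)
        promising = [c for _, c in sorted(zip(scores, candidates),
                                          key=lambda p: p[0],
                                          reverse=True)][:top_k]
        # Step 3: the measurer compiles and runs the filtered candidates.
        results = [(c, measure(workload, c)) for c in promising]
        # Step 4: feed run-time results back to both components.
        search.update(results)
        cost_model.update(results)
        # Track the best candidate by measured throughput (higher is better).
        best = max(results + ([best] if best else []), key=lambda r: r[1])
    return best
```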

|
| 32 |
+
|
| 33 |
+
Fig. 1: Representative auto-tuning workflow with three major components: search method, cost model and measurer. Within the predefined tuning budget (e.g., number of candidate trials, wall-clock time), these components in the auto-tuner repeat the four steps by working together closely.
|
| 34 |
+
|
| 35 |
+
Because every component interacts very closely with each other, a slight difference in the behavior of one component may lead to quite different outcomes across tuning runs. For example, popular search methods [1], [6], [7] leverage random search to certain extent, especially at the beginning of the tuning process. Unfortunately, some source of non-determinism is inevitable - it is impossible to completely eliminate run-time noise during the measurement. Also, search methods require some randomness for their statistical efficiency. Therefore, in this work, we examine three aspects that we may be able to improve in the future auto-tuner design.
|
| 36 |
+
|
| 37 |
+
## A. Hypothesis 1: the search method may generate imbalanced training data for the cost model.

We observe a potential dilemma between the search method and the cost model. In order to find the best candidates, the search method exploits the previous feedback to effectively cut down the search space and focus on the narrowed promising space. Ironically, this may imply the generation of biased training data for the cost model.

## B. Hypothesis 2: instability of cost model accuracy may affect final performance variance.

Since the cost model is constructed online, it is not clear whether the training data collected during the tuning process is enough to make the cost model's accuracy mature and stable. Also, Hypothesis 1 adds another source of variance in its accuracy across tuning runs.

## C. Hypothesis 3: certain hardware and its target-specific optimizations may inherently have higher run-time variance than others.

Non-determinism in hardware behavior incurs run-time noise in the measurer and results in noisy feedback that may guide the search in a biased direction. If certain hardware is intrinsically noisier, the auto-tuner may need to account for this factor in its strategy of utilizing feedback.
## III. EVALUATION

## A. Experiment Setup

1) Auto-tuner: For our experiments, we investigate a new auto-tuning technology based on TVM [4] called MetaSchedule. MetaSchedule is a tuning framework that follows the auto-tuning workflow in Figure 1.
The search space of a tensor program is defined as the different ways to lower and execute the model without affecting its results, for example, how to tile the loops and whether to vectorize the innermost loop in a workload. The tuner generates the potential tuning search space for a given workload, and all candidates are selected from that search space. We defined the search space using the default optimization rules for CPU and GPU in TVM MetaSchedule, including multi-level tiling, loop unrolling, vectorization, etc.

The search space grows exponentially as the number of loops increases, so a search strategy is defined and applied to generate measurement candidates from the search space. For example, the random search strategy explores the search space via randomization, whereas the evolutionary search strategy generates new measurement candidates based on the results from previous rounds of auto-tuning.
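As an illustration of how an evolutionary search strategy derives new candidates from previous results, here is a minimal sketch operating on tiling configurations represented as tuples of tile sizes. The representation and operators are simplified assumptions for illustration, not MetaSchedule's actual mutators:

```python
import random

def mutate(config, choices=(1, 2, 4, 8, 16, 32)):
    """Randomly perturb one tile size in a candidate configuration."""
    cfg = list(config)
    cfg[random.randrange(len(cfg))] = random.choice(choices)
    return tuple(cfg)

def crossover(a, b):
    """Mix two parent configurations dimension by dimension."""
    return tuple(random.choice(pair) for pair in zip(a, b))

def next_generation(population_with_perf, size=128, elites=16):
    """Breed a new generation, biased toward well-performing parents."""
    parents = [cfg for cfg, _ in sorted(population_with_perf,
                                        key=lambda p: p[1],
                                        reverse=True)[:elites]]
    return [mutate(crossover(*random.sample(parents, 2)))
            for _ in range(size)]
```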
To maximize auto-tuning efficiency, we utilize the cost model to extract features, predict the candidates' performance, and select the candidates with the most potential for measurement. Selected candidates are then compiled and measured by the evaluator. The measurement results are used for the later end-to-end compilation, the cost model update, and the search strategy update.
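The cost-model step can be sketched with a gradient-boosted regressor such as XGBoost [3], the model family used in our Hypothesis 2 experiments. The feature extraction function below is a placeholder assumption; real tensor-program features are far richer:

```python
import numpy as np
import xgboost as xgb

def featurize(candidate):
    # Placeholder: real feature extraction encodes loop structure,
    # memory access patterns, arithmetic intensity, etc.
    return np.asarray(candidate, dtype=np.float32)

class XGBCostModel:
    def __init__(self):
        self.booster = None

    def update(self, results):
        """Retrain on (candidate, measured_throughput) pairs collected so far."""
        X = np.stack([featurize(c) for c, _ in results])
        y = np.array([perf for _, perf in results])
        self.booster = xgb.train({"objective": "reg:squarederror"},
                                 xgb.DMatrix(X, label=y),
                                 num_boost_round=50)

    def predict(self, candidates):
        if self.booster is None:  # cold start: no preference yet
            return np.zeros(len(candidates))
        X = np.stack([featurize(c) for c in candidates])
        return self.booster.predict(xgb.DMatrix(X))
```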
2) Workloads: We select five representative tensor programs for all experiments, covering the most compute-intensive subgraphs in end-to-end model tuning. The workloads are as follows:

- C2D: 2-D Convolution, NHWC layout;

- C3D: 3-D Convolution, NHWC layout;

- GMM: Batch Matrix Multiplication;

- T2D: Transposed 2-D Convolution, NHWC layout;

- TBG: Transposed Batch Matrix Multiplication.
3) Hardware: Our experiments are all based on AWS EC2 machines. CPU experiments are conducted on a c5.9xlarge instance with 36 Intel Xeon Platinum 8223CL vCPUs @ 3.00GHz. GPU experiments are conducted on a p3.2xlarge instance with an NVIDIA V100 GPU.

To separate auto-tuning from measurement, we used another AWS Cloud machine (c5.12xlarge) for tuning and conducted measurement via TVM's RPC module. TVM's TimeEvaluator module is explicitly used for workload running time measurements.

![bo_d4i3r27aumss73b0k72g_1_178_1306_666_424_0.jpg](images/bo_d4i3r27aumss73b0k72g_1_178_1306_666_424_0.jpg)

(c) Examining Hypothesis 3: Comparison of the variance between CPU and GPU experiments. In every workload, CPU showed more stable performance despite its lower throughput.

Fig. 2: Experiments to validate our hypotheses. We plot the tuning progress by targeting five representative deep learning workloads and collecting the best performance in GFLOPS, together with latency, at each trial. AWS instances of c5.9xlarge (36 Intel Xeon Platinum 8223CL vCPUs) and p3.2xlarge (NVIDIA V100 GPU) are used for CPU and GPU experiments, respectively. Each experiment included five tuning runs, each consuming 2,000 trials. The shaded area shows the performance variance at each trial. In general, we observe that the performance variance alleviates as tuning proceeds.
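For reference, a typical TVM RPC measurement flow looks roughly like the sketch below. The host, port, and library name are placeholders; the `tvm.rpc` and `time_evaluator` calls follow TVM's public API, though exact arguments may differ across TVM versions:

```python
from tvm import rpc

# Connect from the tuning machine (c5.12xlarge) to the measurement target.
remote = rpc.connect("measurement-host.example.com", 9090)  # placeholder address
remote.upload("compiled_workload.tar")                      # ship the compiled artifact
lib = remote.load_module("compiled_workload.tar")

dev = remote.cpu(0)  # or remote.cuda(0) for the GPU instance
# time_evaluator wraps the function with repeated timed invocations.
timer = lib.time_evaluator(lib.entry_name, dev, number=10, repeat=3)
# mean_latency = timer(*input_tensors).mean
```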
## B. Result

Figure 2 presents our experimental results. Overall, tuning jobs experience high variance during the early phases and stabilize their performance as the tuning proceeds. However, for some workloads, such as T2D, the variance grows larger. Our future study will examine this further (e.g., by assigning more trials).
- Hypothesis 1: the search method may generate imbalanced training data for the cost model.

To validate this hypothesis, we compare the tuning progress between evolutionary search (EvoSearch) and random search. Since random search explores the search space without any feedback, it should generate the most balanced training data that can be collected during the tuning process. Figure 2a presents the experimental result. Since random search does not use any intelligence during candidate generation, it generally shows worse performance than evolutionary search. However, it consistently demonstrates stability by exhibiting less performance variance across workloads. In particular, random search shows significantly better stability in C3D, T2D, and TBG.

- Hypothesis 2: instability of cost model accuracy may affect final performance variance.

For comparison, we conduct tuning runs with a cost model (XGBoost) and without one. Without a cost model, all promising candidates generated by evolutionary search are evaluated with the measurer (see Figure 1). Note that these two experiments had an equal number of measurements during each tuning instance (i.e., 1 trial = 1 measurement), so the overall tuning time stays similar, as overhead in the measurer usually dominates the overall tuning time. Figure 2b presents the result.

Overall, we could stabilize the performance variance by disabling the cost model. With the cost model, the variance on most workloads (i.e., all but GMM) does not improve and often gets worse. We suspect this may be attributed to unstable cost model accuracy under the online training approach. Future work may assign more trials to see if the variance improves with more training samples for the cost model.
- Hypothesis 3: certain hardware and its target-specific optimizations may inherently have higher run-time variance than others.

Figure 2c shows default MetaSchedule tuning (i.e., evolutionary search with the cost model) on two separate hardware architectures: CPU (Intel Xeon Platinum 8223CL) and GPU (NVIDIA V100) machines. Although the GPU shows higher performance throughput on every workload, it generally presents worse variance. In particular, the variance in T2D is significant.

Such differences in hardware performance variance could be attributed to differences during search space construction: some optimization rules are hardware-specific, especially layout-related rules. The use of the cost model further amplified this difference. In addition, the different memory architectures of CPU and GPU result in distinct performance sensitivity during tensor program compilation. Minor changes in GPU memory access patterns may bring an enormous performance leap compared to the CPU. Future work can set up the same optimization search space on different hardware and further explore their characteristics.
## IV. Future Direction and Conclusion

This work investigates the performance variance problem in the popular compiler auto-tuning approach. By focusing on the causes of such variance that originate from the nature of auto-tuning, we suggest three factors to consider for alleviating performance variance in future auto-tuner designs. First, the search method may need to consider both the search process itself and the balance of training data for the cost model; future studies may examine the ratio of randomly generated candidates and its impact on the balance of training data. Second, we may need to take a deeper look at the training progress of the cost model; if a standard tuning budget is insufficient to reach stable accuracy, we may consider offline tuning or transfer learning. Last but not least, we show that certain hardware can be inherently noisier than others; our future investigation may examine a statistical approach to cope with this issue gracefully.
## REFERENCES

[1] J. Ansel, S. Kamil, K. Veeramachaneni, J. Ragan-Kelley, J. Bosboom, U.-M. O'Reilly, and S. Amarasinghe, "OpenTuner: An extensible framework for program autotuning," in Proceedings of the 23rd International Conference on Parallel Architectures and Compilation, 2014, pp. 303-316.

[2] R. Baghdadi, M. Merouani, M.-H. Leghettas, K. Abdous, T. Arbaoui, K. Benatchba et al., "A deep learning based cost model for automatic code optimization," Proceedings of Machine Learning and Systems, vol. 3, pp. 181-193, 2021.

[3] T. Chen, T. He, M. Benesty, V. Khotilovich, Y. Tang, H. Cho, K. Chen et al., "XGBoost: extreme gradient boosting," R package version 0.4-2, vol. 1, no. 4, pp. 1-4, 2015.

[4] T. Chen, T. Moreau, Z. Jiang, L. Zheng, E. Yan, H. Shen, M. Cowan, L. Wang, Y. Hu, L. Ceze, C. Guestrin, and A. Krishnamurthy, "TVM: An automated end-to-end optimizing compiler for deep learning," in 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18). Carlsbad, CA: USENIX Association, Oct. 2018, pp. 578-594. [Online]. Available: https://www.usenix.org/conference/osdi18/presentation/chen

[5] T. Chen, L. Zheng, E. Yan, Z. Jiang, T. Moreau, L. Ceze, C. Guestrin, and A. Krishnamurthy, "Learning to optimize tensor programs," Advances in Neural Information Processing Systems, vol. 31, 2018.

[6] C. Cummins, B. Wasti, J. Guo, B. Cui, J. Ansel, S. Gomez, S. Jain, J. Liu, O. Teytaud, B. Steiner et al., "CompilerGym: Robust, performant compiler optimization environments for AI research," in 2022 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2022, pp. 92-105.

[7] F.-A. Fortin, F.-M. De Rainville, M.-A. G. Gardner, M. Parizeau, and C. Gagné, "DEAP: Evolutionary algorithms made easy," The Journal of Machine Learning Research, vol. 13, no. 1, pp. 2171-2175, 2012.

[8] G. Fursin, Y. Kashnikov, A. W. Memon, Z. Chamski, O. Temam, M. Namolaru, E. Yom-Tov, B. Mendelson, A. Zaks, E. Courtois et al., "Milepost GCC: Machine learning enabled self-tuning compiler," International Journal of Parallel Programming, vol. 39, no. 3, pp. 296-327, 2011.

[9] A. Haj-Ali, N. K. Ahmed, T. Willke, Y. S. Shao, K. Asanovic, and I. Stoica, "NeuroVectorizer: End-to-end vectorization with deep reinforcement learning," in Proceedings of the 18th ACM/IEEE International Symposium on Code Generation and Optimization, 2020, pp. 242-255.

[10] J. Hennessy and D. Patterson, "A new golden age for computer architecture: Domain-specific hardware/software co-design, enhanced."

[11] B. Jeon, S. Park, P. Liao, S. Xu, T. Chen, and Z. Jia, "Collage: Automated integration of deep learning backends," arXiv preprint arXiv:2111.00655, 2021.

[12] Z. Jia, O. Padon, J. Thomas, T. Warszawski, M. Zaharia, and A. Aiken, "TASO: optimizing deep learning computation with automatic generation of graph substitutions," in Proceedings of the 27th ACM Symposium on Operating Systems Principles, 2019, pp. 47-62.

[13] S. Kaufman, P. Phothilimthana, Y. Zhou, C. Mendis, S. Roy, A. Sabne, and M. Burrows, "A learned performance model for tensor processing units," Proceedings of Machine Learning and Systems, vol. 3, pp. 387-400, 2021.

[14] S. Park, S. Latifi, Y. Park, A. Behroozi, B. Jeon, and S. Mahlke, "SRTuner: Effective compiler optimization customization by exposing synergistic relations," in 2022 IEEE/ACM International Symposium on Code Generation and Optimization (CGO). IEEE, 2022, pp. 118-130.

[15] P. M. Phothilimthana, A. Sabne, N. Sarda, K. S. Murthy, Y. Zhou, C. Angermueller, M. Burrows, S. Roy, K. Mandke, R. Farahani et al., "A flexible approach to autotuning multi-pass machine learning compilers," in 2021 30th International Conference on Parallel Architectures and Compilation Techniques (PACT). IEEE, 2021, pp. 1-16.

[16] L. Zheng, C. Jia, M. Sun, Z. Wu, C. H. Yu, A. Haj-Ali, Y. Wang, J. Yang, D. Zhuo, K. Sen et al., "Ansor: Generating high-performance tensor programs for deep learning," in 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), 2020, pp. 863-879.

[17] L. Zheng, R. Liu, J. Shao, T. Chen, J. E. Gonzalez, I. Stoica, and A. H. Ali, "TenSet: A large-scale program performance dataset for learned tensor compilers," in Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021.
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/0QA2qomtW3-/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,211 @@
# Towards Efficient Multi-Agent Learning Systems

MLArchSys 2023 Submission #NaN - Confidential Draft - Do NOT Distribute!!

*Abstract*—Multi-Agent Reinforcement Learning (MARL) is an increasingly important research field that can model and control multiple large-scale autonomous systems. Despite its achievements, existing multi-agent learning methods typically involve expensive computation, in terms of training time and power, arising from the large observation-action space and the huge number of training steps. A key challenge is therefore understanding and characterizing the computationally intensive functions in several popular classes of MARL algorithms during their training phases. Our preliminary experiments reveal new insights into the key modules of MARL algorithms that limit the adoption of MARL in real-world systems. We explore a neighbor sampling strategy to improve cache locality and observe performance improvement ranging from 26.66% (3 agents) to 27.39% (12 agents) during the computationally intensive mini-batch sampling phase. Additionally, we demonstrate that improving locality leads to an end-to-end training time reduction of 10.2% (for 12 agents) compared to existing multi-agent algorithms, without significant degradation in the mean reward.

*Index Terms*—Multi-Agent Systems, Performance Analysis, Reinforcement Learning, Performance Optimization
## I. INTRODUCTION

Reinforcement Learning (RL) has recently made exciting progress in many applications, including Atari games [1], aviation systems [2], and robotics [3]. Specifically, RL frameworks fit problems that involve sequential decision making, where the agent needs to take actions in an environment to maximize the cumulative rewards. In RL, the quality of state-action pairs is evaluated using a reward function, and the transition to a new state depends on the current state and action [4]. The function that determines the action from the state is known as a policy. The function representing the reward estimates is known as the value function.

Multi-agent systems [4] have shown excellent performance in various multi-player games [5], where there is significant sharing of observations between the agents during training, and the joint actions among these agents can affect the environment dynamically. In MARL, several agents simultaneously explore a common environment and perform competitive (e.g., predator-prey) or cooperative (e.g., cooperative navigation) tasks [6]. In the cooperative setting, all observations are shared and training is performed centrally; in a competitive setting, each agent aims to outperform its adversaries. As a result, MARL training involves several computationally challenging tasks that deal with dynamically changing environments.

In this paper, we perform a workload characterization study to understand the performance-limiting functions in well-known model-free MARL frameworks [6], [7] implemented using actor-critic methods, whose state spaces are usually very large. We analyze the different MARL training phases, where the actor and critic networks are responsible for the policy and value functions, respectively. The critic tries to learn a value function given the policy from the actor, while the actor estimates the policy gradient based on the approximate value function that the critic provides.

|
| 18 |
+
|
| 19 |
+
Fig. 1: Overview of our multi-agent decentralized actor, centralized critic approach (Competitive environment).
|
| 20 |
+
|
| 21 |
+
As shown in Figure 1, each agent in the environment has its own actor network, which outputs the action of an agent given its observation (Action selection). During the mini-batch sampling phase, each agent $i$ collects the historical transition data of all other agents stored in the Experience Replay Buffer. This sampling approach enables the algorithm to reuse the transition data for updating the current policy. Each agent has a centralized critic, which outputs the Q-value using the joint observation-action space of all other agents. During the Update all trainers phase, both the actor and critic networks are updated after the target $Q$ calculation and sampling phases. Finally, we explore the neighbor sampling strategy for better cache locality, which leads to a performance improvement ranging from 26.66% (3 agents) to 27.39% (12 agents) (Figure 5).
The main contributions of our paper are the following:

- We systematically perform a hardware-software performance analysis of the training phases of multi-agent systems. Further, we present key insights into the performance bottlenecks confronting several key MARL algorithms from a systems perspective.

- We explore a neighbor sampling strategy to improve the locality of data access within the mini-batch sampling phase. Our preliminary experiments provide performance improvement ranging from 26.66% (3 agents) to 27.39% (12 agents) in the sampling-phase training run-time. Additionally, we achieve a 10.2% (12 agents) end-to-end training time reduction compared to state-of-the-art multi-agent algorithms.

|
| 30 |
+
|
| 31 |
+
Fig. 2: Training time breakdown on Ampere Architecture RTX 3090 for the MARL workloads (MADDPG & MASAC) with 3 to 48 agents. The environment is Predator-Prey.
|
| 32 |
+
|
| 33 |
+

|
| 34 |
+
|
| 35 |
+
Fig. 3: Computation time growth in MARL modules averaged across the two MARL frameworks.
|
| 36 |
+
|
| 37 |
+
## II. MOTIVATION

In multi-agent systems, the training phase is performance-intensive, as the agents must collaborate and coordinate to maximize a shared return. Many real-world applications, such as robot fleet coordination [8] and traffic light control [9], are naturally modeled as multi-agent problems, but they become intractable as the number of agents grows due to the expensive computation required to estimate other agents' policies at each state and the huge number of neural network parameters. This computational expense limits adoption in real-world systems and restricts these methods to only a few agents [10], [11]. Figure 2 shows the run-time breakdown of the training phase. We omit the agent interactions phase since it primarily depends on environment complexity. Update all trainers contributes $\approx 35\%$ to $\approx 85\%$ of the training time as the number of MARL agents grows from 3 to 48. This is mainly due to two reasons: ① In MARL, each agent has its own actor and critic networks, since agents may have different rewards. Each agent must randomly collect a batch of transitions from all other agents to update its critic and actor networks. ② The memory requirements of the observation and action spaces grow quadratically, because each agent must coordinate with all other agents by sharing their observations and actions. The Action selection phase occupies a small portion and scales linearly with the number of agents (Figure 3), because in Action selection agents consider individual policies to obtain local actions. Other segments include experience collection, reward collection, and policy initialization, and they add negligible overhead.
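To see why the joint observation-action space grows quadratically, consider a quick back-of-the-envelope calculation. Assuming, purely for illustration, that each agent has an observation of dimension $d$ and an action of dimension $a$, the joint input to one centralized critic is $N(d + a)$; since each of the $N$ agents owns its own critic, the total critic input across all agents is $N^2(d + a)$, i.e., quadratic in $N$:

```python
# Illustrative only: d and a are assumed per-agent observation/action sizes.
d, a = 16, 5
for N in (3, 6, 12, 24, 48):
    per_critic_input = N * (d + a)              # joint observation-action vector
    total_across_agents = N * per_critic_input  # N critics, one per agent
    print(f"N={N:2d}: critic input={per_critic_input:5d}, "
          f"total={total_across_agents:7d}")
```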
## III. BACKGROUND

A MARL setting with $N$ agents is typically defined by a set of states $S = S_1 \times \ldots \times S_N$ and a set of actions $A = A_1 \times \ldots \times A_N$. Each agent selects its action using a policy $\pi_{\theta_i} : O_i \times A_i \rightarrow [0, 1]$. The state transition function $T : S \times A_1 \times A_2 \times \ldots \times A_N \rightarrow S$ produces the next state $S'$, given the current state and the actions of each agent. The reward $R_i : S \times A_i \rightarrow \mathbb{R}$ for each agent is a function of the global state and the actions of all other agents, and each agent aims to maximize its own expected return $R_i = \sum_{t=0}^{T} \gamma^t r_i^t$, where $\gamma$ denotes the discount factor and $T$ is the time horizon. For this, we use actor-critic methods such as MADDPG [6] and MASAC [7].
MADDPG [6] is a centralized-training, decentralized-execution (CTDE) algorithm mainly designed for mixed environments. Each agent learns an individual policy that maps its observation to its action so as to maximize the expected return, which is approximated by the critic. MADDPG trains the critic of agent $i$ by minimizing the loss against the target $Q$-value $y_i$:

$$\mathcal{L}(\theta_i) = \mathbb{E}_{D}\left[ \left( Q_i(S, A_1, \ldots, A_n) - y_i \right)^2 \right], \qquad y_i = r_i + \gamma \, \bar{Q}_i(S', A'_1, \ldots, A'_n)\big|_{a'_j = \bar{\pi}(o'_j)},$$

where $S$ and $A_1, \ldots, A_n$ represent the joint observations and actions, respectively. $D$ is the experience replay buffer that stores the observation, action, reward, and new-observation samples of all agents obtained after the training episodes. The critic networks are augmented with the states and actions of all agents to reduce the variance of the policy gradients and improve performance. The MARL framework has four networks: actor, critic, target actor, and target critic. $\bar{Q}_i$ and $\bar{\pi}(o'_j)$ are the target networks used for stable learning of the critic ($Q_i$) and actor networks. The target actor estimates the next action from the policy using the state output by the actor network. The target critic aggregates the output from the target actor to compute the target Q-values, which helps update the critic network and assess the quality of actions taken by agents. The target networks exist to achieve training stability. Note that the updating sequence of the networks in the back-propagation phase is critics, then actors, then the target networks.
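A minimal NumPy sketch of this critic update follows; `critic`, `target_critic`, and `target_actors` are assumed callable placeholders for the networks above, not the paper's implementation:

```python
import numpy as np

def critic_loss(batch, critic, target_critic, target_actors, gamma=0.95):
    """MSE loss between Q_i(S, A_1..A_n) and the target y_i (equation above)."""
    obs, act, rew, next_obs = batch  # lists of per-agent arrays
    # Target actors pick each agent's next action from its next observation.
    next_act = [pi(o) for pi, o in zip(target_actors, next_obs)]
    # y_i = r_i + gamma * Qbar_i(S', a'_1, ..., a'_n)
    y = rew + gamma * target_critic(np.concatenate(next_obs + next_act, axis=-1))
    # Q_i over the joint observation-action space of all agents.
    q = critic(np.concatenate(obs + act, axis=-1))
    return np.mean((q - y) ** 2)
```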
Similar to MADDPG, a centralized critic is introduced in the Soft Actor-Critic (SAC [7]) algorithm. MASAC uses maximum-entropy RL, in which the agents are encouraged to maximize exploration within the policy. MASAC assigns equal probability to nearly-optimal actions that have similar state-action values and avoids repeatedly selecting the same action. This learning trick increases stability, policy exploration, and sample efficiency [7], [12].
## IV. Evaluation Setup

Benchmark. Table I describes the behavior of the selected Multi-agent Particle Environments (MPE [6]). We profile and characterize two state-of-the-art MARL algorithms, MADDPG and MASAC. A two-layer ReLU MLP with 64 units per layer parameterizes the actor and critic networks, and the mini-batch size for sampling transitions is 1024. In our experiments, we use the Adam optimizer [13] with a learning rate of 0.01, a maximum episode length of 25 (the maximum number of steps to reach the terminal state), and $\tau = 0.01$ for updating the target networks. The discount factor $\gamma$ is set to 0.95. The size of the replay buffer is $10^6$, and the entropy coefficient for MASAC is 0.05. The network parameters are updated after every 100 samples added to the replay buffer.
TABLE I: Multi-agent Particle environments.

| Environment | Details |
| --- | --- |
| Cooperative navigation | $N$ agents move in a cooperative manner to reach $L$ landmarks; the reward encourages the agents to get closer to the landmarks. |
| Predator-Prey | $N$ predators work cooperatively to block the way of $M$ fast-paced prey agents. The prey agents are environment-controlled and try to avoid collisions with the predators. |
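The stated hyper-parameters can be collected into a single configuration block; the sketch below is only an illustrative summary (the key names are ours, not the authors' code):

```python
# Hyper-parameters as reported in Section IV (key names are illustrative).
CONFIG = {
    "hidden_layers": [64, 64],      # two-layer ReLU MLP for actor and critic
    "batch_size": 1024,             # mini-batch size for sampled transitions
    "optimizer": "adam",
    "learning_rate": 0.01,
    "max_episode_len": 25,
    "tau": 0.01,                    # target-network soft-update coefficient
    "gamma": 0.95,                  # discount factor
    "replay_buffer_size": 10**6,
    "entropy_coeff": 0.05,          # MASAC only
    "update_every_samples": 100,    # update after 100 new samples in buffer
    "episodes": 60_000,
}
```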
Profiling Platform. The MARL algorithms are implemented with CPU-GPU compatible TensorFlow-GPU (v2.11.0). The server runs the Ubuntu Linux 20.04.5 LTS operating system with CUDA 9.0, cuDNN 7.6.5, and PCI Express® v4.0, with the NCCL v2.8.4 communication library. The machine supports Python 3.7.15, TensorFlow-Slim (v1.1.0), and OpenAI Gym (v0.10.5). All workloads are profiled on a single NVIDIA GeForce RTX 3090 (Ampere architecture) with Perf [14] and nvprof to collect hardware performance counters for performance analysis. Finally, we trained for 60K episodes using the default hyper-parameters recommended by the algorithms.

## V. Preliminary Evaluation
This section is organized as follows. First, we present an overview of our profiling results. Then, we divide the computationally dominant functions in Update all trainers into three modules: Mini-batch sampling, Target $Q$ calculation, and $Q$ loss & $P$ loss. We present our results in the competitive setting (Predator-Prey) to understand the key factors limiting MARL in large-scale systems.

Overview of Profile. Figure 4 shows the breakdown across the modules Mini-batch sampling, Target $Q$ calculation, $Q$ loss, and $P$ loss, which contribute 61%, 21%, 10%, and 8% of the computation time, respectively, averaged across the workloads.
## A. Mini-batch sampling

Our experimental results in Figure 4 show that mini-batch sampling is the most time-consuming phase within the Update all trainers module. This behavior is also consistent with the scaling of other critical hardware performance metrics: dTLB load misses grow by 3.9× (from 3 to 6 agents) and cache misses by 3.9× (from 3 to 6 agents).
The mini-batch sampling phase is dominated by the collection of random samples from all other agents' replay buffers to update the parameters of each agent's actor and critic networks. Note that the agents' replay buffers are kept separate from each other to capture their past transitions. At each time step, agent $i$ draws a random index set $\{L_1, L_2, \ldots, L_K\}$ ($K$ is the mini-batch size) and, starting with $L_1$, performs a memory lookup in the experience replay buffer to retrieve the corresponding transition and store it in the individual agent buffer. This operation grows as a function of the number of agents $N$, as it is repeated for all $N$ agents. The sampling stage exhibits random memory access patterns and cannot exploit cache reuse, because the indices for each agent change randomly between iterations, resulting in increased cache misses. The computation is also burdened by accessing data from dispersed memory locations; therefore, algorithms with large buffer sizes and batch sizes incur more cache misses. In cooperative navigation (simple spread [6]), we observe similar bottlenecks, since all the agents are trained together to reach the landmarks while avoiding collisions with each other.
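The access pattern that drives these misses can be seen in a minimal sketch of conventional random mini-batch sampling (a simplified stand-in for the framework's sampler, not the authors' code):

```python
import numpy as np

def random_sample(buffer, batch_size=1024):
    """Conventional sampling: every lookup hits a random buffer location."""
    idx = np.random.randint(0, len(buffer), size=batch_size)
    # Each buffer[i] access lands on an unrelated cache line / page,
    # which is what drives the dTLB and cache miss growth in Figure 4.
    transitions = [buffer[i] for i in idx]
    obs, act, next_obs, rew, dones = map(np.asarray, zip(*transitions))
    return obs, act, next_obs, rew, dones
```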
## B. Target $Q$ calculation

The Target $Q$ calculation phase is the second most time-consuming phase within Update all trainers (Figure 4). In this function, each agent computes the next action, the target $Q$ next, and the target $Q$ values as a function of all other agents' joint observation-action space. To calculate the next action, agent $i$ uses its policy network to determine the next action $a'$ from the next state $S'$. In this phase, each agent's policy network involves multiplications with the input-weight matrix and additions, which impacts performance. The obtained $a'$ and $S'$ data are aggregated and concatenated into a single vector in order to compute the target $Q$ next among the cooperating agents. The input space (dimension) of the $Q$-function increases quadratically with the number of agents [15]. The target critic values for each agent $i$ are computed using the target $Q$ next values from the target actor network. We note that each agent has to read the other agents' policy values; as such, for $N$ agents, there are $N \times (N - 1)$ memory lookup operations corresponding to the next action $a'$.
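The $N \times (N - 1)$ count follows directly from the update structure: every one of the $N$ agents gathers the next action of every other agent. A schematic sketch (with placeholder names, for illustration only):

```python
# Each of the N agents reads the next action of the other N-1 agents,
# so one update round performs N * (N - 1) policy-value lookups.
def gather_next_actions(target_actors, next_obs):
    n = len(target_actors)
    lookups = 0
    joint_next_actions = []
    for i in range(n):                      # updating agent i ...
        acts = []
        for j in range(n):                  # ... reads every agent j's action
            acts.append(target_actors[j](next_obs[j]))
            if j != i:
                lookups += 1
        joint_next_actions.append(acts)
    assert lookups == n * (n - 1)
    return joint_next_actions
```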

|
| 76 |
+
|
| 77 |
+
Fig. 4: Training time breakdown on Ampere Architecture RTX 3090 within Update all trainers on two different MARL workloads (MADDPG & MASAC) with 3 to 48 agents under Predator-Prey environment.
|
| 78 |
+
|
| 79 |
+
## C. Back-propagation - $Q$ loss & $P$ loss

The back-propagation stage is dominated by the execution of two networks: ① the critic network computes the mean-squared-error loss between the target critic and critic networks, and ② the actor network is updated by minimizing the policy loss derived from the $Q$ values (computed by the critic network). The total training time increases with the number of agents, as shown in Figure 4, because the number of trainable parameters grows: $N$ policy and $N$ critic networks are built for the $N$ agents, which incurs extra time to update the weights of each agent. For each update, within each agent $i$ we sample a random mini-batch of transitions (of size 1024 in our studies) from the replay buffers of all the other agents and then perform gradient descent on the critic and actor networks.
## VI. NEIGHBOR SAMPLING STRATEGY

From our analysis so far, we conclude that the mini-batch sampling phase dominates the Update all trainers phase as the number of agents scales, because each agent samples all other agents' transition data. Moreover, fetching transition data from faraway memory locations significantly affects the overall training time as the complexity of the problem grows. Among all the hardware metrics, cache misses suffer the worst scaling factor (3.9× or higher). Therefore, with the support of loop-level optimization, we explore optimizations that can improve locality and overall MARL performance. In this paper, we propose a loop-level approach to optimize data access in the mini-batch sampling phase.
Algorithm 1 Neighbor Sampling Strategy

---

Require: List of random location indices; replay buffer $\mathcal{D}$ with size $d$; neighbors $n$; batch size $b$
Ensure: Experience tuples from neighbor indices
1: for $i$ in indices do
2: &nbsp;&nbsp;if $i$ within the bounds of $\mathcal{D}$ then
3: &nbsp;&nbsp;&nbsp;&nbsp;$\alpha \leftarrow$ sample all transitions in range $[i - n, i + n]$
4: &nbsp;&nbsp;&nbsp;&nbsp;if $\alpha \subset \mathcal{D}$ then
5: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;for $j$ in $\alpha$ do
6: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;$\delta \leftarrow \mathcal{D}[j]$
7: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;end for $\vartriangleright$ Extract all the transitions from the replay buffer $\mathcal{D}$ using the subset of indices $\alpha$
8: &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;obs, act, next_obs, rew, dones $\leftarrow$ unpack($\delta$)
9: &nbsp;&nbsp;&nbsp;&nbsp;end if
10: &nbsp;&nbsp;end if
11: &nbsp;&nbsp;if len(obs) $\geq b$ then
12: &nbsp;&nbsp;&nbsp;&nbsp;break
13: &nbsp;&nbsp;&nbsp;&nbsp;return obs, act, next_obs, rew, dones $\vartriangleright$ Sampled transitions
14: &nbsp;&nbsp;end if
15: end for

---
The idea of this approach is to eliminate the computational issues arising from fetching data at faraway memory locations based on random indices. We investigate the neighbor sampling optimization in MADDPG, where we collectively capture the neighbor transitions of an index $i$ to enable faster data access on a given hardware. Intuitively, at each index $i$, we group the neighbor indices into a single micro-batch and extract the data in a locality-aware memory access order to efficiently sample the transitions.

Neighbor Sampling Strategy. Algorithm 1 shows how the mini-batch sampling phase selects the neighboring transitions for a random index $i$. We initialize the replay buffer $\mathcal{D}$, the number of neighbors $n$, and the batch size $b$. The loop in line 1 maintains the random accesses of the indices; note that the original random-sampling loop has been expanded into two loops. The first index $i$ is accessed and checked against the bounds of $\mathcal{D}$; if it is valid, a micro-batch is created with all the neighbors based on $n$, returning a list of neighbor indices $\alpha$ (line 3). The second loop extracts the neighbor transitions from $\mathcal{D}$ and stores them in $\delta$ (line 6). In line 8, the output vectors are unpacked and stored as individual vectors in the experience replay tuple consisting of observations, actions, next observations, rewards, and dones. Finally, all the transitions are individually accumulated as NumPy vectors. Line 11 checks whether the set of observations has become full (equal to the batch size $b$), and if so, line 13 returns the batch of accumulated transitions as vectors.
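A runnable NumPy sketch of Algorithm 1 (a simplified interpretation, not the authors' implementation) makes the locality gain explicit: each random index now pulls in a contiguous run of up to $2n + 1$ transitions, so most lookups hit already-cached lines:

```python
import numpy as np

def neighbor_sample(buffer, batch_size=1024, n=3):
    """Sample a mini-batch by expanding each random index into its 2n+1
    contiguous neighbors, trading index randomness for cache locality."""
    obs, act, next_obs, rew, dones = [], [], [], [], []
    indices = np.random.randint(0, len(buffer), size=batch_size)
    for i in indices:
        lo, hi = max(0, i - n), min(len(buffer) - 1, i + n)
        for j in range(lo, hi + 1):        # contiguous, cache-friendly reads
            o, a, no, r, d = buffer[j]
            obs.append(o); act.append(a); next_obs.append(no)
            rew.append(r); dones.append(d)
        if len(obs) >= batch_size:         # batch is full: stop sampling
            break
    return tuple(np.asarray(x[:batch_size])
                 for x in (obs, act, next_obs, rew, dones))
```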

|
| 138 |
+
|
| 139 |
+
Fig. 5: (a) Percentage reduction in training time for the mini-batch sampling phase for 3, 6 and 12 agents (MADDPG). (b) Percentage reduction in the total training time when the number of agents are scaled by $2 \times$ for MADDPG. The environment test-bed is Predator-Prey and Neighbors $= 3$
|
| 140 |
+
|
| 141 |
+
Overall, our optimization improves cache locality and achieves a performance improvement ranging from 26.66% (3 agents) to 27.39% (12 agents) during the computationally intensive mini-batch sampling (Figure 5). While studying this optimization, we ensured that there is no significant degradation in the mean episode reward.
## VII. DISCUSSION AND RELATED WORK

Hardware-software acceleration techniques for RL have been the subject of research in recent years [16]-[19]. For example, to accelerate RL training from the software standpoint, prior works have shown that half-precision (FP16) quantization can yield significant performance benefits and improve hardware efficiency while achieving adequate convergence [20]. Other relevant approaches include QuaRL [21], where quantization is applied to speed up the RL training and inference phases. The authors experimentally demonstrated that quantizing policies to $\leq 8$ bits led to substantial speedups in training time compared to full-precision training. Our work differs from QuaRL in that we focus on multi-agent learning frameworks, where the agents operate in a common environment. Further, we characterize the computational issues of MARL and improve performance by implementing the neighbor sampling optimization to improve the efficiency of mini-batch sampling.

![bo_d4i3qtraumss73b0k6t0_3_167_1194_671_256_0.jpg](images/bo_d4i3qtraumss73b0k6t0_3_167_1194_671_256_0.jpg)

Fig. 6: (a) Average of the mean episode rewards of all the agents trained for 60,000 episodes for MADDPG. (b) Average of the mean episode rewards of all the agents trained for 60,000 episodes after the neighbor sampling optimization for MADDPG. The environment test-bed is Predator-Prey and Neighbors = 3.

Prior studies, like FA3C, have focused on hardware acceleration in multiple-parallel-worker scenarios, where each agent is controlled independently within its environment using single-agent RL algorithms [17]. In contrast, we seek to systematically understand the performance-limiting functions in multi-agent systems, where the agents collaborate in a single shared environment. Agents in such MARL settings usually have high visibility of one another (leading to large state and action spaces).

In MARL settings where each agent needs to interact with its neighbor agents, especially in complex environments with many observations and huge action spaces, computational bottlenecks may be alleviated using architectural primitives implementing selective attention [12], [22], [23]. As the number of agents increases, hardware techniques such as near-memory computing could help perform mini-batch sampling efficiently. For the input to the critic networks, multi-level data compression techniques [24]-[26] may be applied to targeted groups of agents based on their importance in the environment. Also, the cache misses during mini-batch sampling are indicative of competition for the last-level cache (LLC), which may be addressed through smart cache allocation strategies. Other modules, such as the next action calculation, environment interactions, and action selection phases, may also benefit from custom acceleration of key modules.
## VIII. CONCLUSION AND FUTURE WORK

In this work, we present an end-to-end characterization of several popular Multi-Agent Reinforcement Learning algorithms and, in particular, explore a locality-aware neighbor indexing optimization. We find that Update all trainers dominates the training process of MARL algorithms and scales super-linearly with the number of agents. Our experimental analysis presents key insights into the modules that are the driving factors behind the computational bottlenecks. We also propose a loop-level approach to optimize data access in the mini-batch sampling phase. The proposal achieves a performance improvement from 26.66% (3 agents) to 27.39% (12 agents) in the mini-batch sampling phase. Future research includes investigating various pseudo-random sampling strategies and designing a hardware-friendly architecture to efficiently fetch the transitions in large-scale MARL.
## REFERENCES
|
| 160 |
+
|
| 161 |
+
[1] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, "Playing Atari with deep reinforcement learning," arXiv preprint arXiv:1312.5602, 2013.

[2] P. Razzaghi, A. Tabrizian, W. Guo, S. Chen, A. Taye, E. Thompson, A. Bregeon, A. Baheri, and P. Wei, "A survey on reinforcement learning in aviation applications," arXiv preprint arXiv:2211.02147, 2022.

[3] D. Wang, R. Walters, X. Zhu, and R. Platt, "Equivariant $Q$ learning in spatial action spaces," in Conference on Robot Learning. PMLR, 2022, pp. 1713-1723.

[4] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 2018.

[5] K. Zhang, Z. Yang, and T. Başar, "Multi-agent reinforcement learning: A selective overview of theories and algorithms," Handbook of Reinforcement Learning and Control, pp. 321-384, 2021.

[6] R. Lowe, Y. I. Wu, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch, "Multi-agent actor-critic for mixed cooperative-competitive environments," Advances in Neural Information Processing Systems, vol. 30, 2017.

[7] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in International Conference on Machine Learning. PMLR, 2018, pp. 1861-1870.

[8] G. Swamy, S. Reddy, S. Levine, and A. D. Dragan, "Scaled autonomy: Enabling human operators to control robot fleets," in 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2020, pp. 5942-5948.

[9] A. L. Bazzan, "Opportunities for multiagent systems and multiagent reinforcement learning in traffic control," Autonomous Agents and Multi-Agent Systems, vol. 18, pp. 342-375, 2009.

[10] M. Zhou, Y. Chen, Y. Wen, Y. Yang, Y. Su, W. Zhang, D. Zhang, and J. Wang, "Factorized Q-learning for large-scale multi-agent systems," in Proceedings of the First International Conference on Distributed Artificial Intelligence, 2019, pp. 1-7.

[11] Y. Liu, W. Wang, Y. Hu, J. Hao, X. Chen, and Y. Gao, "Multi-agent game abstraction via graph attention neural network," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 05, 2020, pp. 7211-7218.

[12] S. Iqbal and F. Sha, "Actor-attention-critic for multi-agent reinforcement learning," in International Conference on Machine Learning. PMLR, 2019, pp. 2961-2970.

[13] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.

[14] V. Ramos, "Performance counters API for Python," https://pypi.org/project/performance-features/, May 2019.

[15] H. U. Sheikh and L. Bölöni, "Multi-agent reinforcement learning for problems with combined individual and team reward," in 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020, pp. 1-8.

[16] M. Babaeizadeh, I. Frosio, S. Tyree, J. Clemons, and J. Kautz, "GA3C: GPU-based A3C for deep reinforcement learning," CoRR abs/1611.06256, 2016.

[17] H. Cho, P. Oh, J. Park, W. Jung, and J. Lee, "FA3C: FPGA-accelerated deep reinforcement learning," in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, 2019, pp. 499-513.

[18] Y. Li, I.-J. Liu, Y. Yuan, D. Chen, A. Schwing, and J. Huang, "Accelerating distributed reinforcement learning with in-switch computing," in Proceedings of the 46th International Symposium on Computer Architecture, 2019, pp. 279-291.

[19] A. Stooke and P. Abbeel, "Accelerated methods for deep reinforcement learning," arXiv preprint arXiv:1803.02811, 2018.

[20] J. Björck, X. Chen, C. De Sa, C. P. Gomes, and K. Weinberger, "Low-precision reinforcement learning: Running soft actor-critic in half precision," in International Conference on Machine Learning. PMLR, 2021, pp. 980-991.

[21] S. Krishnan, M. Lam, S. Chitlangia, Z. Wan, G. Barth-Maron, A. Faust, and V. J. Reddi, "QuaRL: Quantization for fast and environmentally sustainable reinforcement learning," 2022.

[22] A. Mahajan, M. Samvelyan, L. Mao, V. Makoviychuk, A. Garg, J. Kossaifi, S. Whiteson, Y. Zhu, and A. Anandkumar, "Tesseract: Tensorised actors for multi-agent reinforcement learning," in International Conference on Machine Learning. PMLR, 2021, pp. 7301-7312.

[23] T. J. Ham, S. J. Jung, S. Kim, Y. H. Oh, Y. Park, Y. Song, J.-H. Park, S. Lee, K. Park, J. W. Lee et al., "A$^3$: Accelerating attention mechanisms in neural networks with approximation," in 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020, pp. 328-341.

[24] A. Jain, A. Phanishayee, J. Mars, L. Tang, and G. Pekhimenko, "Gist: Efficient data encoding for deep neural network training," in 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2018, pp. 776-789.

[25] S. Q. Zhang, B. McDanel, and H. Kung, "FAST: DNN training under variable precision block floating point with stochastic rounding," in 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2022, pp. 846-860.

[26] S. Venkataramani, V. Srinivasan, W. Wang, S. Sen, J. Zhang, A. Agrawal, M. Kar, S. Jain, A. Mannari, H. Tran et al., "RaPiD: AI accelerator for ultra-low precision training and inference," in 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2021, pp. 153-166.

papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/0QA2qomtW3-/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,163 @@

§ TOWARDS EFFICIENT MULTI-AGENT LEARNING SYSTEMS

MLArchSys 2023 Submission #NaN - Confidential Draft - Do NOT Distribute!!

Abstract-Multi-Agent Reinforcement Learning (MARL) is an increasingly important research field that can model and control multiple large-scale autonomous systems. Despite its achievements, existing multi-agent learning methods typically involve expensive computations in terms of training time and power, arising from the large observation-action space and the huge number of training steps. Therefore, a key challenge is understanding and characterizing the computationally intensive functions in several popular classes of MARL algorithms during their training phases. Our preliminary experiments reveal new insights into the key modules of MARL algorithms that limit their adoption in real-world systems. We explore a neighbor sampling strategy to improve cache locality and observe performance improvements ranging from 26.66% (3 agents) to 27.39% (12 agents) during the computationally intensive mini-batch sampling phase. Additionally, we demonstrate that improving locality leads to an end-to-end training time reduction of 10.2% (for 12 agents) compared to existing multi-agent algorithms, without significant degradation in the mean reward.

Index Terms-Multi-Agent Systems, Performance Analysis, Reinforcement Learning, Performance Optimization

§ I. INTRODUCTION

Reinforcement Learning (RL) has recently made exciting progress in many applications, including Atari games [1], aviation systems [2], and robotics [3]. Specifically, RL frameworks fit problems that involve sequential decision-making, where the agent takes actions in an environment to maximize the cumulative reward. In RL, the quality of state-action pairs is evaluated using a reward function, and the transition to a new state depends on the current state and action [4]. The function that determines the action from the state is known as a policy, and the function representing the reward estimates is known as the value function.

Multi-agent systems [4] have shown excellent performance in various multi-player games [5], where there is significant sharing of observations between the agents during training and the joint actions of the agents dynamically affect the environment. In MARL, several agents simultaneously explore a common environment and perform competitive (e.g., Predator-Prey) or cooperative (e.g., Cooperative navigation) tasks [6]. In the cooperative setting, all observations are shared and training is performed centrally; in a competitive setting, each agent aims to outperform its adversaries. As a result, MARL training involves several computationally challenging tasks that deal with dynamically changing environments.

In this paper, we perform a workload characterization study to understand the performance-limiting functions of well-known model-free MARL frameworks [6], [7] implemented using actor-critic methods, whose state spaces are usually very large. We analyze the different MARL training phases, where the actor and critic networks are responsible for the policy and value functions, respectively. The critic learns a value function given the policy from the actor, while the actor estimates the policy gradient based on the approximate value function the critic provides.

Fig. 1: Overview of our multi-agent decentralized actor, centralized critic approach (Competitive environment).

As shown in Figure 1, each agent in the environment has its own actor network, which outputs the agent's action given its observation (Action selection). During the mini-batch sampling phase, each agent $i$ collects the historical transition data of all other agents stored in the Experience Replay Buffer. This sampling enables the algorithm to reuse transition data when updating the current policy. Each agent has a centralized critic, which outputs the Q-value using the joint observation-action space of all other agents. During the Update all trainers phase, both the actor and critic networks are updated after the target $Q$ calculation and sampling phases. Finally, we explore a neighbor sampling strategy for better cache locality, which leads to a performance improvement ranging from 26.66% (3 agents) to 27.39% (12 agents) (Figure 5).

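To make the interplay of these phases concrete, the sketch below lays out one plausible shape of such a training loop. It is a minimal illustration, not the paper's implementation: `env`, `agents`, and `buffers` and all of their method names are our own hypothetical stand-ins.

```python
def train_marl(env, agents, buffers, episodes, batch_size=1024, update_every=100):
    """Minimal sketch of the MARL training-loop phases described above."""
    steps = 0
    for _ in range(episodes):
        obs = env.reset()
        done = False
        while not done:
            # Action selection: each agent acts from its own local observation.
            actions = [ag.act(o) for ag, o in zip(agents, obs)]
            next_obs, rewards, done, _ = env.step(actions)
            # Experience collection: store each agent's transition in its buffer.
            for i, buf in enumerate(buffers):
                buf.add(obs[i], actions[i], rewards[i], next_obs[i], done)
            obs = next_obs
            steps += 1
            if steps % update_every == 0:
                # Update all trainers: every agent samples from all agents'
                # buffers, then runs target-Q calculation, Q loss, and P loss.
                for ag in agents:
                    batch = [b.sample(batch_size) for b in buffers]
                    ag.update(batch)
```
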
The main contributions of our paper are the following:

* We systematically perform a hardware-software performance analysis of the training phases of multi-agent systems. Further, we present key insights into the performance bottlenecks confronting several key MARL algorithms from a systems perspective.

* We explore a neighbor sampling strategy to improve the locality of data access within the mini-batch sampling phase. Our preliminary experiments show a performance improvement ranging from 26.66% (3 agents) to 27.39% (12 agents) in the sampling phase training run-time. Additionally, we achieve a 10.2% (12 agents) end-to-end training time reduction compared to SOTA multi-agent algorithms.

Fig. 2: Training time breakdown on Ampere Architecture RTX 3090 for the MARL workloads (MADDPG & MASAC) with 3 to 48 agents. The environment is Predator-Prey.

Fig. 3: Computation time growth in MARL modules averaged across the two MARL frameworks.

§ II. MOTIVATION

In multi-agent systems, the training phase is performance-intensive, as the agents must collaborate and coordinate to maximize a shared return. Many real-world applications, such as robot fleet coordination [8] and traffic light control [9], are naturally modeled as multi-agent problems, but they become intractable as the number of agents grows, due to the expensive computation required to estimate other agents' policies at each state and the huge number of neural network parameters. This computational expense limits adoption in real-world systems and restricts existing methods to only a few agents [10], [11]. Figure 2 shows the runtime breakdown of the training phase; we omit the agent interaction phase since it primarily depends on environment complexity. Update all trainers contributes ≈35% to ≈85% of the training time as the number of MARL agents grows from 3 to 48, mainly for two reasons: ① In MARL, each agent has its own actor and critic networks, since agents may have different rewards, and each agent must randomly collect a batch of transitions from all other agents to update its critic and actor networks. ② The dynamic memory requirements of the observation and action spaces grow quadratically, because each agent has to coordinate with all other agents by sharing observations and actions. The Action selection phase occupies a small portion and scales linearly with the number of agents (Figure 3), because in Action selection agents consider only their individual policies to obtain local actions. Other segments, including experience collection, reward collection, and policy initialization, add negligible overhead.

§ III. BACKGROUND

Typically, a MARL setting with $N$ agents is defined by a set of states $S = S_1 \times \ldots \times S_N$ and a set of actions $A = A_1 \times \ldots \times A_N$. Each agent selects its action using a policy $\pi_{\theta_i}: O_i \times A_i \rightarrow [0,1]$. The state transition function $T: S \times A_1 \times A_2 \times \ldots \times A_N \rightarrow S$ produces the next state $S'$ given the current state and the actions of all agents. The reward $R_i: S \times A_i \rightarrow \mathbb{R}$ for each agent is a function of the global state and the actions of all other agents, and each agent aims to maximize its own expected return $R_i = \sum_{t=0}^{T} \gamma^t r_i^t$, where $\gamma$ denotes the discount factor and $T$ is the time horizon. For this, we use actor-critic methods such as MADDPG [6] and MASAC [7].

MADDPG [6] is a centralized training, decentralized execution (CTDE) algorithm designed mainly for mixed environments. Each agent learns an individual policy that maps its observation to an action so as to maximize the expected return, which is approximated by the critic. MADDPG trains the critic of agent $i$ by minimizing the loss against the target $Q$-value $y_i$, using $\mathcal{L}(\theta_i) = \mathbb{E}_{D}\big[(Q_i(S, A_1, \ldots, A_n) - y_i)^2\big]$ and $y_i = r_i + \gamma \bar{Q}_i(S', A_1', \ldots, A_n')\big|_{a_j' = \bar{\pi}(o_j')}$, where $S$ and $A_1, \ldots, A_n$ represent the joint observations and actions, respectively. $D$ is the experience replay buffer that stores the observation, action, reward, and next-observation samples of all agents collected over the training episodes. The critic networks are augmented with the states and actions of all agents to reduce the variance of the policy gradients and improve performance. The framework has four networks: actor, critic, target actor, and target critic. $\bar{Q}_i$ and $\bar{\pi}(o_j')$ are the target networks used for stable learning of the critic $(Q_i)$ and actor. The target actor estimates the next action from the policy given the next state, and the target critic aggregates the target actor's output to compute the target Q-values, which are used to update the critic network and assess the quality of the agents' actions. The target networks exist to stabilize training. Note that the update order in the back-propagation phase is critics, then actors, then the target networks.

Similar to MADDPG, a centralized critic is introduced into the Soft Actor-Critic (SAC [7]) algorithm. MASAC uses maximum-entropy RL, in which agents are encouraged to maximize exploration within the policy. MASAC assigns equal probability to near-optimal actions that have similar state-action values, avoiding repeatedly selecting the same action. This increases stability, policy exploration, and sample efficiency [7], [12].

§ IV. EVALUATION SETUP

Benchmark. Table I describes the behavior of the selected Multi-agent Particle Environments (MPE [6]). We profile and characterize two state-of-the-art MARL algorithms, MADDPG and MASAC. The actor and critic networks are parameterized by a two-layer ReLU MLP with 64 units per layer, and the mini-batch size for sampling transitions is 1024. In our experiments, we use the Adam optimizer [13] with a learning rate of 0.01, a maximum episode length of 25 (the maximum number of steps to reach the terminal state), and $\tau = 0.01$ for updating the target networks. The discount factor $\gamma$ is set to 0.95. The replay buffer size is $10^6$, and the entropy coefficient for MASAC is 0.05. The network parameters are updated after every 100 samples added to the replay buffer.

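For reference, the hyper-parameters above can be collected into a single configuration block; the sketch below uses our own key names for the values stated in the text.

```python
# Hyper-parameters from the evaluation setup (the dict keys are our own naming).
CONFIG = {
    "hidden_layers": (64, 64),     # two-layer ReLU MLP for actor and critic
    "batch_size": 1024,            # mini-batch size for sampling transitions
    "optimizer": "adam",
    "learning_rate": 0.01,
    "max_episode_len": 25,         # max steps to reach the terminal state
    "tau": 0.01,                   # soft-update rate for target networks
    "gamma": 0.95,                 # discount factor
    "replay_buffer_size": 10**6,
    "entropy_coef_masac": 0.05,
    "update_every": 100,           # update after every 100 new samples
}
```
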
TABLE I: Multi-agent Particle environment.

| Environment | Details |
| --- | --- |
| Cooperative navigation | $N$ agents move in a cooperative manner to reach $L$ landmarks, and the reward encourages the agents to get closer to the landmarks. |
| Predator-Prey | $N$ predators work cooperatively to block the way of $M$ fast-paced prey agents. The prey agents are environment-controlled, and they try to avoid collisions with the predators. |

Profiling Platform. The MARL algorithms are implemented with state-of-the-art CPU-GPU compatible TensorFlow-GPU (v2.11.0). The server runs Ubuntu Linux 20.04.5 LTS with CUDA 9.0, cuDNN 7.6.5, PCIe v4.0, and the NCCL v2.8.4 communication library. The machine supports Python 3.7.15, TensorFlow-Slim (v1.1.0), and OpenAI Gym (v0.10.5). All workloads are profiled on a single Nvidia GeForce RTX 3090 (Ampere architecture) with Perf [14] and NVProf to collect hardware performance counters for the analysis. Finally, we train for 60K episodes using the default hyper-parameters recommended by the algorithms.

§ V. PRELIMINARY EVALUATION

This section is organized as follows. First, we present an overview of our profiling results. Then, we divide the computationally dominant functions in Update all trainers into multiple modules (Mini-batch sampling, Target $Q$ calculation, and $Q$ loss & $P$ loss) and present our results in the competitive setting (Predator-Prey) to understand the key factors limiting MARL in large-scale systems.

Overview of Profile. Figure 4 shows the breakdown between the modules Mini-batch sampling, Target $Q$ calculation, $Q$ loss, and $P$ loss, which contribute 61%, 21%, 10%, and 8% of the computation time, respectively, averaged across the different workloads.

§ A. MINI-BATCH SAMPLING

Our experimental results in Figure 4 show that mini-batch sampling is the most time-consuming phase within the Update all trainers module. This behavior is consistent with the scaling of other critical hardware performance metrics: dTLB load misses and cache misses both grow by 3.9x as the agent count increases from 3 to 6.

The mini-batch sampling phase is dominated by collecting random samples from all other agents' replay buffers to update the parameters of each agent's actor and critic networks. Note that the agents' replay buffers are kept separate to capture their respective past transitions. At each time-step, agent $i$ draws a random index set $\{L_1, L_2, \ldots, L_K\}$ ($K$ is the mini-batch size) and, starting with $L_1$, performs a memory lookup in the experience replay buffer to retrieve the corresponding transition and store it in the individual agent buffer. This operation grows with the number of agents $N$, as it is repeated for all $N$ agents. The sampling stage exhibits random memory access patterns and cannot exploit cache reuse, because the indices differ for each agent between iterations, resulting in increased cache misses. The computation is also burdened by accessing data from dispersed memory locations; consequently, configurations with large buffer and batch sizes incur more cache misses. In cooperative navigation (simple spread [6]), we observe similar bottlenecks, since all the agents are trained together to reach the landmarks while avoiding collisions with each other.

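A minimal sketch of this baseline sampling pattern is shown below (our own simplification; whether the index set is shared across agents or redrawn per agent is an implementation detail). The per-index gathers land on dispersed buffer locations, which is the access pattern behind the reported cache and dTLB miss growth.

```python
import numpy as np

def random_minibatch(buffers, batch_size=1024):
    """Baseline sampling sketch: draw a random index set and gather
    transitions from every agent's replay buffer (a list of tuples)."""
    idx = np.random.randint(0, len(buffers[0]), size=batch_size)
    # Each lookup touches a dispersed location in every agent's buffer,
    # so there is little cache reuse across indices or iterations.
    return [[buf[i] for i in idx] for buf in buffers]
```
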
§ B. TARGET $Q$ CALCULATION

The Target $Q$ calculation phase is the second most time-consuming phase within Update all trainers (Figure 4). In this function, each agent computes the next action, target $Q$ next, and target $Q$ values as a function of all other agents' joint observation-action space. To calculate the next action, agent $i$ uses its policy network to determine the next action $a'$ from the next state $S'$. Each agent's policy network involves input-weight matrix multiplications and additions, which impact performance. The obtained $a'$ and $S'$ data are aggregated and concatenated into a single vector to compute target $Q$ next among the cooperating agents; the input dimension of the $Q$-function thus increases quadratically with the number of agents [15]. The target critic values for each agent $i$ are computed using the target $Q$ next values from the target actor network. We note that each agent has to read the other agents' policy values; as such, for $N$ agents there are $N \times (N - 1)$ memory lookup operations corresponding to the next action $a'$.

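The computation can be summarized by the following sketch for a single agent $i$ (function and variable names are ours, and the per-agent batching is elided):

```python
import numpy as np

def target_q_i(rewards_i, next_obs, target_actors, target_critic_i, gamma=0.95):
    """Target-Q sketch for agent i: each target actor maps o' -> a'
    (the per-agent policy lookups that total N*(N-1) across agents),
    the joint (S', a'_1..a'_N) is concatenated into one vector, and the
    target critic's output is discounted and added to the reward."""
    next_actions = [pi(o) for pi, o in zip(target_actors, next_obs)]
    joint = np.concatenate(list(next_obs) + next_actions, axis=-1)
    return rewards_i + gamma * target_critic_i(joint)   # y_i
```
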
Fig. 4: Training time breakdown on Ampere Architecture RTX 3090 within Update all trainers on two MARL workloads (MADDPG & MASAC) with 3 to 48 agents under the Predator-Prey environment.

§ C. BACK-PROPAGATION - $Q$ LOSS & $P$ LOSS

The back-propagation stage is dominated by the execution of two networks: ① the critic network computes the mean-squared error loss between the target critic and critic outputs, and ② the actor network is updated by minimizing the policy loss computed from the $Q$ values produced by the critic network. The total training time increases with the number of agents, as shown in Figure 4: as agents are added, the number of trainable parameters grows, and $N$ policy and $N$ critic networks are built for all $N$ agents, incurring extra time to update the weights of each agent. For each update, we sample a random mini-batch of transitions (1024 in our studies) from the replay buffers of all agents and then perform gradient descent on the critic and actor networks.

§ VI. NEIGHBOR SAMPLING STRATEGY

From our analysis so far, we conclude that the mini-batch sampling phase dominates the Update all trainers phase as the number of agents scales, because each agent samples all other agents' transition data. Moreover, fetching transition data from faraway memory locations significantly affects the overall training time as problem complexity grows. Among all the hardware metrics, cache misses suffer the worst scaling factor (3.9x or higher). Therefore, we explore loop-level optimizations that can improve locality and overall MARL performance. In this paper, we propose a loop-level approach to optimize data access in the mini-batch sampling phase.

Algorithm 1 Neighbor Sampling Strategy

Require: List of random location indices, replay buffer $\mathcal{D}$ with a size $d$, neighbors $n$, batch size $b$
Ensure: Experience tuples from neighbor indices
 1: for $i$ in indices do
 2:   if $i$ in enumerate($\mathcal{D}$) then
 3:     $\alpha \leftarrow$ sample all transitions in range $[i - n, i + n]$
 4:     if $\alpha \subset \mathcal{D}$ then
 5:       for $j$ in $\alpha$ do
 6:         $\delta \leftarrow \mathcal{D}[j]$
 7:       end for  ▷ Extract all the transitions from the replay buffer $\mathcal{D}$ using the subset of indices $\alpha$
 8:       obs, act, next_obs, rew, dones $\leftarrow$ unpack($\delta$)
 9:     end if
10:   end if
11:   if len(obs) $\geq b$ then
12:     break
13:     return obs, act, next_obs, rew, dones  ▷ Sampled transitions
14:   end if
15: end for

The idea of this approach is to eliminate the computational issues arising from fetching data at faraway memory locations based on random indices. We investigate the neighbor sampling optimization in MADDPG, where we collectively capture the neighbor transitions of an index $i$ to enable faster data access on a given hardware platform. Intuitively, at each index $i$, we group the neighbor indices into a single micro-batch and extract the data in a locality-aware memory access order to efficiently sample the transitions.

Neighbor Sampling Strategy. Algorithm 1 shows how the mini-batch sampling phase selects the neighboring transitions for a random index $i$. We initialize the replay buffer $\mathcal{D}$, neighbors $n$, and batch size $b$. The loop in line 1 maintains the random accesses of the indices; note that the original random-sampling loop has been expanded into two loops. The index $i$ is accessed and checked against the limits of $\mathcal{D}$; if valid, a micro-batch with all the neighbors based on $n$ is created, returning a list of neighbor indices $\alpha$ (line 3). The second loop extracts the neighbor transitions from $\mathcal{D}$ and stores them in $\delta$ (line 6). In line 8, the output vectors are unpacked and stored as individual vectors in the experience replay tuple consisting of observations, actions, next observations, rewards, and dones. Finally, all the transitions are accumulated as NumPy vectors. Line 11 checks whether the batch of observations is full (equal to the batch size $b$); if so, line 13 returns the batch of accumulated transitions as vectors.

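A compact Python rendering of Algorithm 1 is sketched below. It is our own implementation under the stated assumptions; in particular, the boundary handling at the ends of the buffer is one possible choice.

```python
import numpy as np

def neighbor_minibatch(buffer, indices, n, batch_size):
    """Sketch of Algorithm 1: for each random index i, the transitions in
    [i-n, i+n] are read as one contiguous micro-batch, trading index
    randomness for locality-friendly sequential access."""
    obs, act, next_obs, rew, dones = [], [], [], [], []
    for i in indices:
        if 0 <= i < len(buffer):
            lo, hi = max(0, i - n), min(len(buffer) - 1, i + n)
            for j in range(lo, hi + 1):        # contiguous, cache-friendly reads
                o, a, no, r, d = buffer[j]
                obs.append(o); act.append(a); next_obs.append(no)
                rew.append(r); dones.append(d)
        if len(obs) >= batch_size:             # stop once the batch is full
            break
    return tuple(np.asarray(col[:batch_size])
                 for col in (obs, act, next_obs, rew, dones))
```
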
Fig. 5: (a) Percentage reduction in training time for the mini-batch sampling phase for 3, 6, and 12 agents (MADDPG). (b) Percentage reduction in the total training time when the number of agents is scaled by 2x for MADDPG. The environment test-bed is Predator-Prey and Neighbors = 3.

Overall, our optimization improves cache locality and achieves a performance improvement ranging from 26.66% (3 agents) to 27.39% (12 agents) during the computationally intensive mini-batch sampling phase (Figure 5). While studying this optimization, we verified that there is no significant degradation in the mean episode reward.

§ VII. DISCUSSION AND RELATED WORK

Hardware-software acceleration techniques for RL have been the subject of research in recent years [16]-[19]. For example, to accelerate RL training from the software standpoint, prior works have shown that half-precision (FP16) quantization can yield significant performance benefits and improve hardware efficiency while still achieving adequate convergence [20]. Other relevant approaches include QuaRL [21], where quantization is applied to speed up the RL training and inference phases; the authors experimentally demonstrated that quantizing the policies to $\leq 8$ bits led to substantial speedups in training time compared to full-precision training. Our work differs from QuaRL in that we focus on multi-agent learning frameworks, where the agents operate in a common environment. Further, we characterize the computational issues of MARL and implement a neighbor sampling optimization that improves the efficiency of mini-batch sampling.

Fig. 6: (a) Average of mean episode rewards of all the agents trained for 60,000 episodes for MADDPG. (b) Average of mean episode rewards of all the agents trained for 60,000 episodes after the neighbor sampling optimization for MADDPG. The environment test-bed is Predator-Prey and Neighbors = 3.

Prior studies, like FA3C, have focused on hardware acceleration in multiple parallel-worker scenarios, where each agent is controlled independently within its own environment using single-agent RL algorithms [17]. In contrast, we seek to systematically understand the performance-limiting functions in multi-agent systems, where the agents collaborate in a single shared environment. Agents in such MARL settings usually have high visibility of one another, leading to large state and action spaces.

In MARL settings where each agent needs to interact with its neighbor agents, especially in complex environments with many observations and huge action spaces, computational bottlenecks may be alleviated using architectural primitives implementing selective attention [12], [22], [23]. As the number of agents increases, hardware techniques such as near-memory computing could help perform mini-batch sampling efficiently. For the input to the critic networks, multilevel data compression techniques [24]-[26] may be applied to targeted groups of agents based on their importance in the environment. Also, the cache misses during mini-batch sampling are indicative of competition for the LLC, which may be addressed through smart cache allocation strategies. Other modules, such as next action calculation, environment interactions, and action selection, may also benefit from custom acceleration of their key kernels.

§ VIII. CONCLUSION AND FUTURE WORK

In this work, we present an end-to-end characterization of several popular Multi-Agent Reinforcement Learning algorithms and, in particular, explore a locality-aware neighbor indexing optimization. We find that Update all trainers dominates the training process of MARL algorithms and scales super-linearly with the number of agents. Our experimental analysis presents key insights into the modules that drive the computational bottlenecks. We also propose a loop-level approach to optimize data access in the mini-batch sampling phase, which achieves a performance improvement from 26.66% (3 agents) to 27.39% (12 agents) in that phase. Future research includes investigating various pseudo-random sampling strategies and designing a hardware-friendly architecture to efficiently fetch transitions in large-scale MARL.

papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/4zdPNY3SDQk/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,197 @@

# Online Learning for Right-Sizing Serverless Function Invocations

ISCA 2023 Submission #13 - Confidential Draft - Do NOT Distribute!!

Abstract-Serverless computing relieves developers from the burden of allocating and managing resources for their cloud applications, providing ease-of-use to users and the opportunity to optimize resource utilization to providers. However, the lack of visibility into user functions limits providers' ability to right-size the functions. Thus, providers resort to simplifying assumptions, ignoring input variability and coupling different resource types (CPU, memory, network), resulting in widely varying function performance and resource efficiency. To provide users with predictable performance and costs for their function executions, we need to understand how these factors contribute to function performance and resource usage.

In this paper, we first conduct a deep study of commonly deployed serverless functions on an open-source serverless computing framework. Our analysis provides key insights to guide the design of a resource allocation framework for serverless systems, including the need to provision resources per invocation, account for function semantics, and decouple resources. We then present Lachesis, a resource allocation framework that builds on these insights and leverages online learning to right-size function invocations. Our experiments show that Lachesis achieves up to a 2.6x speedup while decreasing idle cores by 82% compared to static allocation decisions made by users.

## I. INTRODUCTION

A key benefit of serverless computing for users is that they get to focus on their application logic and leave the details of resource provisioning and management to the cloud providers. However, this results in an opaque interface between users and providers that adversely impacts both. For users with performance-critical applications, such as timely detection of videos with indecent content uploaded to YouTube, or cost-minded applications, such as personal photo organization, unknown resource management policies that they cannot control are a problem [12], [19]. Meanwhile, providers lack visibility into user functions, limiting their ability to make cost-performance trade-offs on behalf of the users.

Existing serverless systems either completely hide the resource allocation policies they use [16] or provide a single knob, the memory size of the container, that the user can set [5], [10]. This parameter is intended to give users control over resource management and providers visibility into the resource requirements of user functions. However, even with this additional input, serverless systems are incapable of providing performance- and cost-aware function execution to users. We argue that, to fix this issue, we need to first understand which factors impact function performance and how. We then need to study how current resource allocation frameworks take these factors into account. Finally, we need to see the combined impact of existing policies and these factors on function performance and resource efficiency. We review the following policies and assumptions made by existing resource allocation frameworks for serverless systems and motivate the need for our characterization work.





(a) Slowdown w.r.t. the best runtime across memory sizes for 100 invocations of a video transcoding function. (b) Maximum memory utilized vs. allocated across 100 runs of the video transcoding function (from Fig. 1a).

Fig. 1: Characterizing functions with respect to the resources allocated, utilized, and performance observed.

1. Static and input-agnostic allocation by providers: Providers statically allocate resources to functions using the user-specified memory size. However, this approach ignores the fact that different inputs submitted to the same function might have different resource needs (as demonstrated by the spread in duration in Figure 1a). This precludes optimizations such as using smaller containers for smaller inputs, which might bring significant cost benefits. For instance, if the static function-level allocator sized a container with 3GB of memory and most of the invocations used only 1GB, the incurred costs are 2x higher due to allocated but unused resources. So, we need to understand the impact of inputs and function semantics on function performance and resource utilization.

2. Coupled allocation of different resource types by providers: The specified memory size for a function dictates the number of CPU cores, thus tightly coupling the two types of resources together. There are two main limitations to this approach: (a) Although users now only need to tune the memory knob, setting it correctly can be difficult for workloads that are not memory-intensive but are limited by other resources; for instance, video transcoding and compression are CPU-intensive workloads, so users might have to profile their functions carefully, adding significant cost. (b) The tight coupling of resources can lead to suboptimal allocation decisions for certain kinds of workloads; for example, CPU-intensive workloads might end up being allocated large amounts of memory that go unused (Figure 1b).

3. Over-provisioning by users: The importance of the memory size parameter [6] and its opaque coupling to other resources forces users to profile their functions to find the right setting. But, as the performance and resource usage of a function can depend significantly on its inputs, users must profile on diverse inputs to ensure adequate resource availability in all cases, and the cost of doing so is prohibitive. Prior work on reducing this profiling cost either assumes knowledge of the workload, which is unavailable [4], [18], or ignores the input itself, which can have a large impact on many functions [20]. Thus, users often overprovision and choose the largest memory size available (10GB for AWS Lambda, for instance), raising their costs significantly and leaving underutilized resources for the provider [2]. So, we need to understand how inputs affect a function's resource demands.

<table><tr><td>Function</td><td>Input Type</td><td>#Runs</td><td>#Sizes</td><td>Size Range</td></tr><tr><td>matmult</td><td>square matrix</td><td>540</td><td>9</td><td>500 - 80000</td></tr><tr><td>linpack</td><td>square matrix</td><td>660</td><td>11</td><td>500 - 8000</td></tr><tr><td>image-process</td><td>image</td><td>840</td><td>14</td><td>12K - 4.6M</td></tr><tr><td>video-process</td><td>video</td><td>645</td><td>5</td><td>2.2M - 6.1M</td></tr><tr><td>encrypt</td><td>string</td><td>420</td><td>7</td><td>500 - 50000</td></tr><tr><td>mobilenet</td><td>image</td><td>840</td><td>14</td><td>12K - 4.6M</td></tr><tr><td>sentiment</td><td>batch of strings</td><td>716</td><td>12</td><td>50 - 3000</td></tr><tr><td>speech2text</td><td>audio</td><td>471</td><td>8</td><td>48K - 12M</td></tr><tr><td>qr</td><td>url</td><td>660</td><td>11</td><td>25 - 480</td></tr><tr><td>lr-train</td><td>training set</td><td>160</td><td>4</td><td>10M - 100M</td></tr><tr><td>compress</td><td>file</td><td>434</td><td>7</td><td>64M - 2G</td></tr><tr><td>resnet-50 (inf)</td><td>image</td><td>574</td><td>9</td><td>184K - 4.6M</td></tr></table>

TABLE I: Summary of 12 serverless functions studied.

In this paper, we extensively study the impact of function inputs and resource coupling on several serverless functions covering a wide range of application types. Building on the insights we found, we introduce Lachesis, an online-learning-based resource allocation framework that (1) allocates resources to each function invocation based on characteristics of the input and function semantics, and (2) decouples different resource types. Lachesis employs an online learning agent that uses cost-sensitive multi-class classification to predict the minimum number of cores required to satisfy a given invocation's service level objective (SLO). It removes the need for users to specify memory limits, and in doing so achieves better resource utilization while simplifying the serverless user interface.

## II. Existing Resource Allocation Mechanisms

Several cloud providers [5], [10] and open-source communities [17] expose a common interface to their serverless platforms: users specify a memory limit for their functions at creation time. The platforms then allocate a proportional amount of CPU based on the memory limit. Thus, all invocations of a function have the same container size, regardless of their actual resource needs. Apache OpenWhisk's [17] CPU allocation is a soft limit, as invocations can burst if there are available CPUs on the server. Cypress [7] creates containers with a high CPU count and memory capacity per function to consolidate multiple concurrent function invocations within one container and avoid wasting resources. Bilal et al. [8] propose to decouple memory and CPU to create a trade-off between performance and cost. ReSC [11] divides functions into resource components (i.e., compute or memory) and allocates resources per component.



Fig. 2: Execution time as a function of data size for three serverless functions. The CPU and memory limit is fixed across sizes.



Fig. 3: video-process's (a) CPU and (b) memory utilization as a function of video size. The CPU limit is fixed at 80 cores.

## III. What Affects Function Performance?

We study the impact of input properties (i.e., size, type), resource availability, and coupling of resource types on the performance and resource utilization of serverless functions.

Experimental Setup: Our study observes functions on OpenWhisk [17]. We make two changes to OpenWhisk: (1) we force all CPU limits to be hard limits, and (2) we decouple CPU and memory to explore configurations beyond the fixed pairings OpenWhisk provides. We deploy OpenWhisk on two bare-metal nodes in TACC's Chameleon cluster [14]. Each node contains 2 AMD EPYC 7763 CPUs operating at 2.45 GHz [1] and 251GB of memory. For performance predictability, we disable hyperthreading, as done in [13], resulting in 128 online cores per machine. We install Ubuntu 18.04 LTS. One machine hosts the OpenWhisk Controller and CouchDB, while the other hosts the Invoker that runs functions.

Workloads: We study 12 functions (see Table I) from the literature and benchmark suites [7], [9], [15], covering scientific applications, data processing, and ML inference and training. We collect the execution time and memory/CPU utilization for several combinations of functions, input sizes, and CPU limits. We run each combination 8-10 times, for a total of ~8K runs.

## A. Impact of Function Inputs

We study two questions: (1) What impact does input size have on function performance? (2) Do input properties other than size affect function performance and resource utilization?

Observations: Figure 2 presents input size vs. execution time for three functions (we omit the others due to space constraints). We note that regardless of input type (matrix, files, text) or function semantics, function performance is correlated with input size and depends on whether the function is single- or multi-threaded.

Figure 3a compares the number of cores used by video-process on two input sets of different videos. We see that two inputs of the same size may vastly differ in the number of cores used. We also notice that while set-1 has an unpredictable relationship between input size and cores used, set-2 exhibits constant utilization regardless of video size.



Fig. 4: Execution time as a function of CPU limit for three of our serverless functions. The input size is fixed at max value.



Fig. 5: CPU utilization compared to allocation for three of our serverless functions. The input size is fixed at max value.

To understand these differences, we compare video properties beyond just size: frame rate per second, video length, bit rate, and video resolution. We find that the resolution is the key property affecting resource utilization. While the resolution is constant in set-2 $(1280 \times 720)$, it widely varies between the different video sizes in set-1. Inputs with higher resolutions $(1280 \times 720)$ have lower CPU and higher memory utilization, whereas the inverse is seen for lower-resolution inputs.

Insights: Function semantics and input properties (not just limited to size) affect performance and resource utilization. Existing resource allocators that ignore input properties beyond size are thus suboptimal. Instead, functions can benefit from allocators that account for inputs and function semantics.

## B. Impact of Added Resources

We now evaluate the effect of adding resources to a function.

Observations: Figures 4a and 4c show that lr-train and resnet-50 can benefit from more cores (execution time decreases); matmult, compress, and linpack also exhibit these trends. However, lr-train shows that the gains of increasing CPU saturate: execution time improvements flatten at 8 cores. In fact, Figure 5a shows that utilization never surpasses 5 cores. lr-train uses scikit-learn's `LogisticRegressionCV()` with `n_jobs=-1` to implement training, which specifies to use as many cores as possible. Since lr-train does not specify the number of cross-validation folds, the default of 5 folds is used in the training loop, and thus at most 5 cores are fully utilized.

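The saturation is easy to reproduce in isolation; in the sketch below (our own, with a synthetic dataset), `n_jobs=-1` requests all cores, but the default 5-fold cross-validation bounds the useful parallelism, matching the ~5-core ceiling observed for lr-train.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# n_jobs=-1 asks for all available cores, but with the default cv setting
# (5-fold) there are only 5 independent fold fits to parallelize, so
# utilization plateaus around 5 cores regardless of the CPU limit.
clf = LogisticRegressionCV(n_jobs=-1, max_iter=1000).fit(X, y)
```
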
Meanwhile, Figure 4b shows that image-process does not benefit from more cores, even though its performance is input-dependent as explained in § III-A. Figure 5b shows that regardless of the CPU allocation, utilization always hovers around 1 core. In fact, several of our functions are single-threaded: mobilenet, sentiment, encrypt, and speech2text.

Insights: Serverless platforms see a mix of single- and multi-threaded functions with potentially bounded parallelism. Adding resources may not always help. Hence, resource allocators should tailor their policies to suit the type of function.



Fig. 6: (a) Current serverless platforms vs (b) Lachesis.

## C. Impact of Coupled Resource Types

Existing allocation policies scale CPU in proportion to the user-specified memory size [5]. However, this assumes functions are both CPU- and memory-intensive. Here, we evaluate the accuracy of this assumption.

Observations: Figure 3 shows that video-process uses up to 50 cores, but its memory utilization is at most 41% (0.8GB). Thus, video-process (like matmult, linpack, and lr-train) is compute-intensive. Conversely, we found sentiment to be memory-bound (100% memory utilization while using at most 1 core). Thus, different functions may utilize resource types in different proportions, and cloud providers can experience severe underutilization due to resource coupling. For example, providing enough memory to sentiment would lead to 50% underutilization of allocated vCPUs, while allocating 50 vCPUs to video-process would require an 88GB memory allocation, resulting in ~99% memory underutilization.

Insights: It is imperative that allocators decouple resources to improve utilization while meeting resource demands.

## IV. LACHESIS DESIGN AND IMPLEMENTATION

We now present Lachesis, a system that makes fine-grained and decoupled resource allocations per invocation using an online learning agent. Figure 6a shows a simplified architecture of existing serverless systems, and Figure 6b shows the changes we make to the existing workflow of serverless frameworks: Lachesis simplifies the user interface by removing the need for users to specify a static memory limit during function submission; instead, users simply provide an SLO per invocation. Given a function, input, and SLO, Lachesis aims to right-size invocations by dynamically allocating the minimum amount of resources to meet the SLO. Algorithm 1 summarizes Lachesis' logic. For an invocation, the online learner predicts the minimum number of cores to allocate; Lachesis uses a default value if the learner has not yet seen enough invocations of the given function. It then launches the invocation with the determined CPU limit and collects utilization and duration metrics as feedback. Finally, it uses the observed data to compute costs and update the online learner. Next, we describe the formulation of our online learning agent.

Prediction Target: As our goal is to meet user-specified SLOs with efficient use of resources, a natural prediction target is the minimum number of cores a function invocation needs for a target execution time.

Model Inputs: Our model's inputs are the serverless function, user inputs, and an SLO. We extract features from functions (number of function calls, libraries used, loop sizes) and inputs (image size and resolution, matrix size and density) to construct a vector for model updating and prediction.

Algorithm 1 Lachesis' logic using online learning

---
Input: fxn, in, slo
Determine cpu_lim: default or ModelPredict(in, slo)
Launch fxn with given in and determined cpu_lim
Observe fxn's max_cpu and exec_time during runtime
Use max_cpu and exec_time to ComputeCosts()
Update online model: ModelUpdate(fxn, in, slo, costs)
---

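The following sketch shows one way a shim layer could realize Algorithm 1 in Python; `featurize`, `compute_costs`, and the `learner`/`platform` objects are hypothetical stand-ins for illustration, not Lachesis' actual interfaces.

```python
def handle_invocation(fxn, inp, slo, learner, platform, default_cores=8):
    """Sketch of Algorithm 1 as a shim layer (names and default are ours)."""
    feats = featurize(fxn, inp, slo)            # function + input features
    if learner.ready(fxn):                      # enough observed invocations?
        cpu_lim = learner.predict(feats)        # min cores predicted to meet SLO
    else:
        cpu_lim = default_cores                 # cold-start default
    result = platform.invoke(fxn, inp, cpu_lim=cpu_lim)
    # Feedback: observed utilization and duration drive the online update.
    costs = compute_costs(result.max_cpu, result.exec_time, slo)
    learner.update(feats, costs)
    return result
```
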
Feedback: On each worker machine, we deploy a daemon that captures CPU utilization over the invocation's runtime. These collected metrics are then used by our cost function to update our model's weights online.

Learning Algorithm: We approach predicting the core count as a supervised learning problem, which can be solved with regression or classification. We opt not to use regressors because of the difficulty of formulating a cost function that differentiates between underpredictions and overpredictions upon an SLO violation. Instead, we use cost-sensitive multi-class classification: each class (core count) has its own linear regressor that predicts the class's cost for an invocation, and we select the class with the lowest cost as the allocation. This lets us differentiate costs for different classes without worrying about the relationship between them.

Cost Function: Our cost function is intuitive. First, we determine which class receives the lowest cost of one. There are three cases. (1) If an invocation's SLO is met, the max_cpu class (i.e., the maximum number of cores used by the invocation) is given the lowest cost. (2) If an invocation's SLO is met and all assigned cores are used, we may assign the lowest cost to a class lower than max_cpu (depending on the slack between the invocation's execution time and its SLO), informing the learner that fewer cores may also satisfy this invocation's SLO. (3) Upon an SLO violation, we assign the lowest cost to a class greater than max_cpu (at most +10) in an attempt to meet the SLO on the next invocation; the difference between execution time and SLO determines this class. The costs of the remaining classes grow linearly, with underpredictions penalized further by a hyperparameter.

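A sketch of this cost assignment is below. It is our own simplification: cases (1) and (2) are folded into one slack computation, and the class count and underprediction penalty are assumed hyperparameters, not the paper's values.

```python
def compute_costs(max_cpu, exec_time, slo, num_classes=64, under_penalty=2.0):
    """Assign a cost to each core-count class per the three cases above."""
    if exec_time <= slo:
        # SLO met: favor max_cpu, or a smaller class when there is slack.
        slack = (slo - exec_time) / slo
        best = max(1, int(max_cpu * (1.0 - slack)))
    else:
        # SLO violated: aim above max_cpu (bounded at +10) for next time,
        # scaled by how far execution time overshot the SLO.
        overshoot = min(10, max(1, int((exec_time / slo - 1.0) * 10)))
        best = min(num_classes, max_cpu + overshoot)
    # Lowest cost (1.0) at `best`; costs grow linearly with distance,
    # with underpredictions penalized more heavily.
    return {c: 1.0 + abs(c - best) * (under_penalty if c < best else 1.0)
            for c in range(1, num_classes + 1)}
```
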
Implementation: We implement Algorithm 1 as a shim layer that can run on top of any serverless platform; this layer runs on the same node as our dispatcher. We use Apache OpenWhisk [17] as our base serverless platform and implement our online learning agent using Vowpal Wabbit [3], a library with an efficient online implementation of the cost-sensitive multi-class classification algorithm. On each Invoker, we launch a metric aggregation daemon that collects utilization and runtime metrics per container and persists the data in a MetaData store for the shim layer to use when updating its models.

Why Online Learning: The fundamental limitation of existing public serverless platforms [5], [10] is their inability to right-size containers dynamically based on inputs. Meanwhile, for Cypress to achieve high utilization, arrival patterns need to be frequent enough to pack invocations into one container within the window of an SLO [7]; hence, Cypress is susceptible to severe resource underutilization under sparse arrival patterns. Finally, as shown in § III, it is infeasible to use heuristics to predict optimal resource allocations, because function behavior varies with function semantics and input types/properties. This prompts our use of online learning, enabling Lachesis to dynamically right-size containers and adapt to changes in the function and input distribution over time.

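For illustration, a minimal Vowpal Wabbit cost-sensitive example might look like the sketch below; the exact Python API names vary across vowpalwabbit versions, and the feature names and costs here are invented, not Lachesis' actual encoding.

```python
# pip install vowpalwabbit -- a sketch, not Lachesis' exact code; API names
# follow recent vowpalwabbit releases and may differ in older versions.
from vowpalwabbit import Workspace

vw = Workspace("--csoaa 64 --quiet")  # cost-sensitive one-against-all, 64 core-count classes

# Train on one observed invocation: "class:cost ..." labels, then features.
vw.learn("6:1.0 5:2.0 7:2.0 | input_size:4.6 resolution:720 slo:2.0")

# Predict the lowest-cost core count for a new invocation's features.
cores = vw.predict("| input_size:2.2 resolution:480 slo:1.5")
```
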


Fig. 7: Difference in (a) SLO violation percentage and (b) idle cores between Lachesis and our baselines for three functions.

## V. EVALUATION

We aim to show Lachesis' efficacy in allocating resources per invocation. Specifically, we evaluate the impact of per-invocation allocations on the number of SLO violations, resource utilization, and user cost.

## A. Methodology

Baselines: We compare Lachesis to three baselines on OpenWhisk (ow) that correspond to choices users might make when specifying resource needs on existing serverless platforms: users may ask for the maximum, median, or minimum amount of resources for all their invocations. These correspond to our ow-large (64 cores), ow-medium (32 cores), and ow-small (1 core) baselines.

Workloads: We evaluate Lachesis with three serverless functions from Table I: image-process, matmult, and resnet-50. While image-process is single-threaded, both matmult and resnet-50 are multi-threaded, showing the robustness of our system to both types of functions. For each function, we run over 100 invocations with over 60 different inputs for image-process and 20 for both resnet-50 and matmult. The trace of invocations is the same on Lachesis and our three baselines.
Evaluation Metrics: Lachesis aims to meet an invocation's SLO using the minimum number of cores it can. Hence, we are interested in two metrics: a function's SLO violation ratio and CPU utilization. (1) Each function input has its own SLO (a maximum execution time). We determined this value by profiling each input with different allocations and extracting the best execution time we could achieve; we then increased this value by 10% and considered it the input's SLO. A function's SLO violation ratio is the ratio of the number of SLO violations to the number of invocations. (2) We report CPU utilization as the number of idle allocated cores, because 50% underutilization when using 1 of 2 allocated cores is not as severe as using only 16 of 32.
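The two metrics can be computed as in the sketch below; the helper names are ours, and the 10% headroom follows the profiling methodology described above.

```python
# Sketch of the two evaluation metrics described in the text.
def derive_slo(profiled_times_ms):
    # Best achievable execution time across allocations, plus 10% headroom.
    return min(profiled_times_ms) * 1.10

def violation_ratio(exec_times_ms, slos_ms):
    violations = sum(t > s for t, s in zip(exec_times_ms, slos_ms))
    return violations / len(exec_times_ms)

def idle_cores(allocated, used):
    # Reported per invocation; 1 idle of 2 is less severe than 16 idle of 32.
    return allocated - used
```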
## B. Results
We compare the SLO violations (Figure 7a) and CPU utilization (Figure 7b) of Lachesis and the three baselines. Our baselines display an inherent tradeoff between meeting invocation SLOs and achieving optimal CPU utilization. While ow-large meets all SLOs, its resource utilization is poor, as most inputs do not require an allocation of 64 cores. Meanwhile, ow-small is unable to meet any of the SLOs (100% violation) for our multithreaded functions (matmult, resnet-50), but achieves perfect CPU utilization because every function uses at least 1 core. The ow-medium baseline allocates 32 cores to all invocations; while 32 cores are enough for many invocations, there are still plenty of inputs that require more than 32 cores to meet the SLO. Lachesis dynamically learns the minimum core count required to meet the SLO, thereby reducing the number of idle allocated cores while decreasing the number of SLO violations compared to ow-medium. This translates into a significant impact on user cost: for resnet-50 alone, Lachesis reduces cost by 63% over 100 invocations.

Fig. 8: A timeline view of Lachesis' number of unused cores (blue) and SLO violations (green) over 100 invocations of resnet-50.
Figure 8 shows Lachesis' number of idle cores and SLO violations over the course of 100 invocations of resnet-50 with various inputs. It takes 28 invocations for Lachesis to stabilize and learn the minimum number of cores required for different inputs. For the remaining invocations, the number of idle cores is less than 8, except for one spike at invocation 47. Interestingly, throughout the course of the 100 invocations, there continue to be periodic SLO violations. We noticed that these violations are all for the same input, which had an unrealistic SLO: for each invocation of this input, Lachesis would allocate more cores in an attempt to meet the SLO, but even with the maximum 64 cores, the SLO was never met.
## VI. CONCLUSION
For the ease-of-use and resource efficiency of serverless platforms, our analysis motivates resource allocation that is fine-grained per invocation and per resource type, to account for varying input properties. We present Lachesis, which uses an online learner to predict the number of cores required to meet an invocation's SLO, and show its efficacy in improving performance, resource utilization, and user cost.
Next steps: Currently, Lachesis creates one online agent per function due to the variable number of features extracted from different function input types (e.g., video, audio). We plan to standardize features to enable one agent to make allocation decisions for all function invocations. We also need to design a scheduler that closely interacts with the allocator to right-size containers: it must reason about resource availability and the trade-off between suffering potential performance degradation or resource underutilization and tolerating cold starts, which dynamic allocation inherently increases.
[1] "Introducing compute optimized vms powered by amd epyc processors," https://cloud.google.com/blog/products/compute/introducing-compute-optimized-vms-on-amd-epyc-milan.
|
| 160 |
+
|
| 161 |
+
[2] "The state of kubernetes report: Overprovisioning in real-life containerized applications," https://cast.ai/the-state-of-kubernetes-overprovisioning/.
|
| 162 |
+
|
| 163 |
+
[3] "Vowpal wabbit," https://vowpalwabbit.org/index.html.
|
| 164 |
+
|
| 165 |
+
[4] O. Alipourfard, H. H. Liu, J. Chen, S. Venkataraman, M. Yu, and M. Zhang, "Cherrypick: Adaptively unearthing the best cloud configurations for big data analytics," in 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17). Boston, MA: USENIX Association, Mar. 2017, pp. 469-482. [Online]. Available: https://www.usenix.org/conference/nsdi17/technical-sessions/ presentation/alipourfard
|
| 166 |
+
|
| 167 |
+
[5] "AWS Lambda," https://aws.amazon.com/lambda/.
|
| 168 |
+
|
| 169 |
+
[6] "Serverless applications lens: Aws well-architected framework," https://d1.awsstatic.com/whitepapers/architecture/AWS-Serverless-Applications-Lens.pdf.
|
| 170 |
+
|
| 171 |
+
[7] V. M. Bhasi, J. R. Gunasekaran, A. Sharma, M. T. Kandemir, and C. Das, "Cypress: Input size-sensitive container provisioning and request scheduling for serverless platforms," in Proceedings of the 13th Symposium on Cloud Computing, ser. SoCC '22. New York, NY, USA: Association for Computing Machinery, 2022, p. 257-272. [Online]. Available: https://doi.org/10.1145/3542929.3563464
|
| 172 |
+
|
| 173 |
+
[8] M. Bilal, M. Canini, R. Fonseca, and R. Rodrigues, "With great freedom comes great opportunity: Rethinking resource allocation for serverless functions," 2021. [Online]. Available: https://arxiv.org/abs/2105.14845
|
| 174 |
+
|
| 175 |
+
[9] M. Copik, G. Kwasniewski, M. Besta, M. Podstawski, and T. Hoefler, "Sebs: A serverless benchmark suite for function-as-a-service computing," in Proceedings of the 22nd International Middleware Conference, ser. Middleware '21. New York, NY, USA: Association for Computing Machinery, 2021, p. 64-78. [Online]. Available: https://doi.org/10.1145/3464298.3476133
|
| 176 |
+
|
| 177 |
+
[10] "Google cloud functions," https://cloud.google.com/functions/.
|
| 178 |
+
|
| 179 |
+
[11] Z. Guo, Z. Blanco, M. Shahrad, Z. Wei, B. Dong, J. Li, I. Pota, H. Xu, and Y. Zhang, "Decomposing and executing serverless applications as resource graphs," 2022. [Online]. Available: https://arxiv.org/abs/2206.13444
|
| 180 |
+
|
| 181 |
+
[12] E. Jonas, J. Schleier-Smith, V. Sreekanti, C.-C. Tsai, A. Khandelwal, Q. Pu, V. Shankar, J. Menezes Carreira, K. Krauth, N. Yadwadkar, J. Gonzalez, R. A. Popa, I. Stoica, and D. A. Patterson, "Cloud programming simplified: A berkeley view on serverless computing," EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2019-3, Feb 2019. [Online]. Available: http://www2.eecs.berkeley.edu/Pubs/TechRpts/2019/EECS-2019-3.html
|
| 182 |
+
|
| 183 |
+
[13] K. Kaffes, N. J. Yadwadkar, and C. Kozyrakis, "Practical scheduling for real-world serverless computing," November 2021.
|
| 184 |
+
|
| 185 |
+
[14] K. Keahey, J. Anderson, Z. Zhen, P. Riteau, P. Ruth, D. Stanzione, M. Cevik, J. Colleran, H. S. Gunawi, C. Hammock, J. Mambretti, A. Barnes, F. Halbach, A. Rocha, and J. Stubbs, "Lessons learned from the chameleon testbed," in Proceedings of the 2020 USENIX Annual Technical Conference (USENIX ATC '20). USENIX Association, July 2020.
|
| 186 |
+
|
| 187 |
+
[15] J. Kim and K. Lee, "Functionbench: A suite of workloads for serverless cloud function service," in 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), 2019, pp. 502-504.
|
| 188 |
+
|
| 189 |
+
[16] "Microsoft Azure Functions," https://azure.microsoft.com/en-us/services/functions/.
|
| 190 |
+
|
| 191 |
+
[17] "Apache OpenWhisk," https://openwhisk.apache.org/.
|
| 192 |
+
|
| 193 |
+
[18] S. Venkataraman, Z. Yang, M. Franklin, B. Recht, and I. Stoica, "Ernest: Efficient performance prediction for large-scale advanced analytics," in 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16). Santa Clara, CA: USENIX Association, 2016, pp. 363-378. [Online]. Available: https://www.usenix.org/ conference/nsdi16/technical-sessions/presentation/venkataraman
|
| 194 |
+
|
| 195 |
+
[19] L. Wang, M. Li, Y. Zhang, T. Ristenpart, and M. Swift, "Peeking behind the curtains of serverless platforms," in 2018 USENIX Annual Technical Conference (USENIX ATC 18). Boston, MA: USENIX Association, Jul. 2018, pp. 133-146. [Online]. Available: https://www.usenix.org/conference/atc18/presentation/wang-liang
|
| 196 |
+
|
| 197 |
+
[20] N. J. Yadwadkar, B. Hariharan, J. E. Gonzalez, B. Smith, and R. H. Katz, "Selecting the best vm across multiple public clouds: A data-driven performance modeling approach," in Proceedings of the 2017 Symposium on Cloud Computing, ser. SoCC '17. New York, NY, USA: ACM, 2017, pp. 452-465. [Online]. Available: http://doi.acm.org/10.1145/3127479.3131614
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/4zdPNY3SDQk/Initial_manuscript_tex/Initial_manuscript.tex
§ ONLINE LEARNING FOR RIGHT-SIZING SERVERLESS FUNCTION INVOCATIONS
ISCA 2023 Submission #13 - Confidential Draft - Do NOT Distribute!!
Abstract-Serverless computing relieves developers from the burden of allocating and managing resources for their cloud applications, providing ease-of-use to the users and the opportunity to optimize resource utilization to the providers. However, the lack of visibility into user functions limits providers' ability to right-size the functions. Thus, providers resort to simplifying assumptions, ignoring input variability, and coupling different resource types (CPU, memory, network), resulting in widely varying function performance and resource efficiency. To provide users with predictable performance and costs for their function executions, we need to understand how these factors contribute to function performance and resource usage.
In this paper, we first conduct a deep study of commonly deployed serverless functions on an open-source serverless computing framework. Our analysis provides key insights to guide the design of a resource allocation framework for serverless systems, including the need to provision resources per invocation, account for function semantics, and decouple resources. We then present Lachesis, a resource allocation framework that builds on the insights we found and leverages online learning to right-size a function invocation. Our experiments show that Lachesis can increase speedup by $2.6\times$ while decreasing idle cores by $82\%$ compared to static allocation decisions made by users.
§ I. INTRODUCTION
A key benefit of serverless computing for users is that they get to focus on their application logic and leave the details of resource provisioning and management to the cloud providers. However, this results in an opaque interface between users and providers that adversely impacts both. For users with performance-critical applications, such as timely detection of videos with indecent content uploaded to YouTube, or cost-minded applications, such as personal photo organization, unknown resource management policies that they cannot control are a problem [12], [19]. Meanwhile, providers lack visibility into user functions, limiting their ability to make cost-performance trade-offs on behalf of the users.
Existing serverless systems either completely hide the resource allocation policies they use [16], or provide a single knob, the memory size of the container, that the user can set [5], [10]. This parameter is intended to give users control over resource management and providers visibility into the resource requirements of user functions. However, even with this additional input, serverless systems are incapable of providing performance- and cost-aware function execution to users. We argue that, to fix this issue, we need to first understand which factors impact function performance and how. We then need to study how the current resource allocation frameworks take these factors into account. Finally, we need to see the combined impact of existing policies and these factors on function performance and resource efficiency. We review the following policies and assumptions made by existing resource allocation frameworks for serverless systems and motivate the need for our characterization work.
(a) Slowdown w.r.t. the best runtime across memory sizes for 100 invocations of a video transcoding function. (b) Maximum memory utilized vs. allocated across 100 runs of the video transcoding function (from Fig. 1a).
Fig. 1: Characterizing functions with respect to the resources allocated, utilized, and performance observed.
1. Static and input-agnostic allocation by providers: Providers statically allocate resources to functions using the user-specified memory size. However, this approach ignores the fact that different inputs submitted to the same function might have different resource needs (as demonstrated by the spread in duration in Figure 1a). This precludes optimizations such as using smaller containers for smaller inputs, which might bring significant cost benefits. For instance, if the static function-level allocator sized a container with 3GB memory, and most of the invocations used only 1GB, the costs incurred are $2\times$ higher for allocated but unused resources. So, we need to understand the impact of inputs and function semantics on function performance and resource utilization.
2. Coupled allocation of different resource types by providers: The specified memory size for a function dictates the number of CPU cores, thus tightly coupling the two resource types. There are two main limitations with this approach: (a) Although users now need to tune only the memory knob, setting this knob correctly might be difficult for workloads that are not memory-intensive but are limited by other resources; for instance, video transcoding and compression are CPU-intensive workloads. Users might have to profile their functions carefully, adding significant cost. (b) The tight coupling of resources might lead to suboptimal resource allocation decisions for certain kinds of workloads. For example, CPU-intensive workloads might end up being allocated large amounts of memory that are not used (Figure 1b).
3. Over-provisioning by users: The importance of the memory size parameter [6] and its opaque coupling to other resources forces users to profile their function to find the right setting. But, as the performance and resource usage of a function can depend significantly on inputs, users must profile on diverse inputs to ensure adequate resource availability in all cases. The cost of doing so is prohibitive. Prior work on reducing this profiling cost either assumes knowledge of the workload which is unavailable [4], [18], or ignores the input itself, which can have a large impact on many functions [20]. Thus, users often overprovision and choose the largest memory size available (10GB for AWS Lambda, for instance), raising their costs significantly and leading to underutilized resources for the provider [2]. So, we need to understand how inputs affect a function's resource demands.
| Function | Input Type | #Runs | #Sizes | Size Range |
| --- | --- | --- | --- | --- |
| matmult | square matrix | 540 | 9 | 500 - 80000 |
| linpack | square matrix | 660 | 11 | 500 - 8000 |
| image-process | image | 840 | 14 | 12K - 4.6M |
| video-process | video | 645 | 5 | 2.2M - 6.1M |
| encrypt | string | 420 | 7 | 500 - 50000 |
| mobilenet | image | 840 | 14 | 12K - 4.6M |
| sentiment | batch of strings | 716 | 12 | 50 - 3000 |
| speech2text | audio | 471 | 8 | 48K - 12M |
| qr | url | 660 | 11 | 25 - 480 |
| lr-train | training set | 160 | 4 | 10M - 100M |
| compress | file | 434 | 7 | 64M - 2G |
| resnet-50 (inf) | image | 574 | 9 | 184K - 4.6M |
TABLE I: Summary of 12 serverless functions studied.
In this paper, we extensively study the impact of function inputs and resource coupling on several serverless functions covering a wide range of application types. Building on the insights we found, we introduce Lachesis, an online-learning-based resource allocation framework that (1) allocates resources to each function invocation based on characteristics of the input and function semantics, and (2) decouples different resource types. Lachesis employs an online learning agent that uses cost-sensitive multi-class classification to predict the minimum number of cores required to satisfy a given invocation's service level objective (SLO). It removes the need for users to specify memory limits, and in doing so, Lachesis achieves better resource utilization while simplifying the serverless user interface.
§ II. EXISTING RESOURCE ALLOCATION MECHANISMS
Several cloud providers [5], [10] and open-source communities [17] expose a common interface to their serverless platforms: users specify a memory limit for their functions at creation time. The platforms then allocate a proportional amount of CPU based on the memory limit. Thus, all invocations of a function have the same container size, regardless of their actual resource needs. Apache OpenWhisk's [17] CPU allocation is a soft limit, as invocations can burst if there are available CPUs in the server. Cypress [7] creates containers with a high CPU count and memory capacity per function to consolidate multiple concurrent function invocations within one container and avoid wasting resources. Bilal et al. [8] propose to decouple memory and CPU to create a trade-off between performance and cost. ReSC [11] divides functions into resource components (i.e., compute or memory) and allocates resources per component.
Fig. 2: Execution time as a function of data size for three serverless functions. The CPU and memory limit is fixed across sizes.
Fig. 3: video-process's (a) CPU and (b) memory utilization as a function of video size. The CPU limit is fixed at 80 cores.
§ III. WHAT AFFECTS FUNCTION PERFORMANCE?
We study the impact of input properties (i.e., size, type), resource availability, and coupling of resource types, on the performance and resource utilization of serverless functions.
Experimental Setup: Our study observes functions on OpenWhisk [17]. We make two changes to OpenWhisk: (1) we force all CPU limits to be hard limits, and (2) we decouple CPU and memory to explore configurations other than the fixed pairings provided by OpenWhisk. We deploy OpenWhisk on two bare-metal nodes in TACC's Chameleon cluster [14]. Each node contains 2 AMD EPYC 7763 CPUs operating at 2.45 GHz [1] and 251GB of memory. For performance predictability, we disable hyperthreading, as done in [13], resulting in 128 online cores per machine. We install Ubuntu LTS 18.04. One machine hosts the OpenWhisk Controller and CouchDB while the other hosts the Invoker to run functions.

Workloads: We study 12 functions (see Table I) from the literature and benchmark suites [7], [9], [15], covering scientific applications, data processing, and ML inference and training. We collect the execution time and memory/CPU utilization for several combinations of functions, input sizes, and CPU limits. We run each combination 8-10 times, for a total of ~8K runs.
§ A. IMPACT OF FUNCTION INPUTS
We study two questions: (1) What impact does input size have on function performance? (2) Do input properties other than size affect function performance and resource utilization?

Observations: Figure 2 presents input size vs. execution time for three functions (we omit the others due to space constraints). We note that regardless of input type (matrix, files, text) or function semantics (i.e., single- vs. multi-threaded), function performance is correlated with input size and depends on whether the function is single- or multi-threaded.
Figure 3a compares the number of cores used by video-process on two input sets of different videos. We see that two inputs of the same size may vastly differ in the number of cores used. We also notice that while set-1 has an unpredictable relationship between input size and cores used, set-2 exhibits constant utilization regardless of video size.
Fig. 4: Execution time as a function of CPU limit for three of our serverless functions. The input size is fixed at max value.
Fig. 5: CPU utilization compared to allocation for three of our serverless functions. The input size is fixed at max value.
To understand these differences, we compare video properties beyond just size: frame rate, video length, bit rate, and video resolution. We find that resolution is the key property affecting resource utilization. While the resolution is constant in set-2 (1280×720), it varies widely across the different video sizes in set-1. Inputs with higher resolutions (e.g., 1280×720) have lower CPU and higher memory utilization, whereas the inverse is seen for lower-resolution inputs.
Insights: Function semantics and input properties (not just limited to size) affect performance and resource utilization. Existing resource allocators that ignore input properties beyond size are thus suboptimal. Instead, functions can benefit from allocators that account for inputs and function semantics.
§ B. IMPACT OF ADDED RESOURCES
We now evaluate the effect of adding resources to a function.

Observations: Figures 4a and 4c show that lr-train and resnet-50 can benefit from more cores (execution time decreases); matmult, compress, and linpack also exhibit these trends. However, lr-train shows that the gains of increasing CPU saturate: execution time improvements stop at 8 cores. In fact, Figure 5a shows that utilization never surpasses 5 cores. lr-train uses scikit-learn's LogisticRegressionCV() with n_jobs=-1 to implement training, a setting that requests as many cores as possible. Since lr-train does not specify the number of cross-validation folds, 5 folds (the default) are in the training loop, and thus at most 5 cores are fully utilized.
Meanwhile, Figure 4b shows that image-process does not benefit from more cores, even though its performance is input-dependent as explained in §III-A. Figure 5b shows that regardless of CPU allocation, utilization always hovers around 1 core. In fact, several of our functions are single-threaded: mobilenet, sentiment, encrypt, and speech2text.
Insights: Serverless platforms see a mix of single- and multi-threaded functions with potentially bounded parallelism. Adding resources may not always help. Hence, resource allocators should tailor their policies to suit the type of function.
Fig. 6: (a) Current serverless platforms vs (b) Lachesis.
§ C. IMPACT OF COUPLED RESOURCE TYPES
Existing allocation policies scale CPU in proportion to the user-specified memory size [5]. However, this assumes functions are both CPU- and memory-intensive. Here, we evaluate the accuracy of this assumption.
Observations: Figure 3 shows that video-process uses up to 50 cores, but its memory utilization is at most 41% (0.8GB). Thus, video-process (like matmult, linpack, and lr-train) is compute-intensive. Conversely, we found sentiment to be memory-bound (100% memory utilization while using at most 1 core). Thus, different functions may utilize resource types in different proportions, and cloud providers can experience severe underutilization due to resource coupling. For example, providing enough memory to sentiment would lead to 50% underutilization of the allocated vCPUs. Meanwhile, allocating 50 vCPUs to video-process would require an 88GB memory allocation, resulting in ~99% memory underutilization.
Insights: It is imperative that allocators decouple resources to improve utilization while meeting resource demands.
§ IV. LACHESIS DESIGN AND IMPLEMENTATION
We now present Lachesis, a system that makes fine-grained and decoupled resource allocations per invocation using an online learning agent. Figure 6a shows a simplified architecture of existing serverless systems. Figure 6b shows the changes we make to this workflow: Lachesis simplifies the user interface by removing the need for users to specify a static memory limit during function submission; instead, users simply provide an SLO per invocation. Given a function, input, and SLO, Lachesis aims to right-size invocations by dynamically allocating the minimum amount of resources to meet the SLO. Algorithm 1 summarizes Lachesis' logic. For an invocation, the online learner predicts the minimum number of cores to allocate; Lachesis falls back to a default value if the learner has not seen enough invocations of the given function. It then launches the invocation with the determined CPU limit and collects utilization and duration metrics as feedback. Finally, it uses the observed data to compute costs and update the online learner. Next, we describe the formulation of our online learning agent.
Prediction Target: As our goal is to meet user-specified SLOs with efficient use of resources, a natural prediction target is the minimum number of cores a function invocation needs for a target execution time.
Model Inputs: Our model's inputs are the serverless function, user inputs, and an SLO. We extract features from functions (number of function calls, libraries used, loop sizes) and inputs (image size and resolution, matrix size and density) to construct a vector for model updating and prediction.
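A sketch of such a feature vector follows; the features mirror the examples in the text (image size and resolution, matrix size and density), while the dictionary encoding and field names are our illustrative choices.

```python
# Illustrative feature extraction for the learner's input vector.
def extract_features(fxn_profile, inp, slo_ms):
    feats = {
        "num_calls": fxn_profile["num_function_calls"],  # function features
        "loop_size": fxn_profile["max_loop_size"],
        "slo_ms": slo_ms,
    }
    if inp["type"] == "image":                           # input features
        feats.update(size_kb=inp["size_kb"],
                     resolution=inp["width"] * inp["height"])
    elif inp["type"] == "matrix":
        feats.update(dim=inp["dim"], density=inp["density"])
    return feats
```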
Algorithm 1 Lachesis' logic using online learning

```
Input: fxn, in, slo
1: Determine cpu_lim: default or ModelPredict(in, slo)
2: Launch fxn with the given in and the determined cpu_lim
3: Observe fxn's max_cpu and exec_time during runtime
4: Use max_cpu and exec_time to ComputeCosts()
5: Update online model: ModelUpdate(fxn, in, slo, costs)
```
Feedback: On each worker machine, we deploy a daemon that captures CPU utilization over the invocation's runtime. These collected metrics are then used by our cost function to update our model's weights online.
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/6d5El_LENnf/Initial_manuscript_md/Initial_manuscript.md
# TAP: Efficient Derivation of Tensor Parallel Plans for Foundation Models
ISCA 2023 Submission #5 - Confidential Draft - Do NOT Distribute!!
Abstract-Model parallelism is essential to train large language models efficiently. However, determining the optimal model parallel schedule for a given neural network can be slow and inefficient due to the vast choice space. To address this challenge, we propose a tensor model parallelism framework called TAP, which automatically searches for the best data and tensor parallel schedules.
Our approach is based on the observation that a neural network can be represented as a directed acyclic graph within which only a limited set of frequent subgraphs exists. With that, we design a graph pruning algorithm that efficiently folds the search space. As a result, TAP runs at sub-linear complexity with respect to model size, which makes it a practical solution for large-scale networks.
Experimental results demonstrate that TAP outperforms the state-of-the-art automatic parallelism frameworks by 20-160× in searching time. Moreover, the performance of TAP's discovered schedules is competitive with expert-engineered ones. In summary, TAP provides a powerful and efficient tool for model parallelism that can help alleviate the burden of manual tuning.
## I. INTRODUCTION
Recent years have witnessed a burgeoning of large deep neural networks (DNNs) that deliver unprecedented accuracy across a wide range of AI tasks. The rate of DNN model size increase, however, has far surpassed the growth in accelerator memory capacity. To address this challenge, model parallelism has been proposed, where model weights are sharded onto multiple devices during distributed DNN training.
There are two main paradigms in model parallelism: pipeline parallelism and tensor parallelism. Pipeline parallelism divides the model by layers. Only activations are communicated during the forward pass, while gradient tensors are exchanged in the backward phase. Pipeline parallelism has recently drawn much attention, with many proposed algorithms aiming to find the optimal pipeline schedule that minimizes the pipeline idle time (i.e., "bubble size"). However, pipeline parallelism suffers from two significant drawbacks: 1) each layer must fit into a single accelerator's memory, and 2) interleaving different layers can be challenging for models with imbalanced architectures. As an alternative, tensor parallelism partitions the model weights and distributes them to multiple devices, thus lifting the restriction on the size of individual layers. In this work, we focus on tensor parallelism.
Manual specification of tensor parallelism is a daunting task, given that the quality of a partitioning scheme depends on both the neural network architecture and the hardware system. To address this challenge, automatic parallelism approaches have been proposed which leverage user hints or guided searches over the entire partitioning candidate space. We argue that a brute-force search of the space is unnecessary in the majority of cases. Our research makes two key observations. First, most neural networks include shared subgraphs that can significantly reduce the search space. Second, communication is the primary bottleneck during tensor-parallel training, and contiguous partitions in a block cannot overlap. Therefore, the search process can be accelerated by searching only over unique neural network sub-modules and evaluating candidate strategies based on their communication cost.
Based on those observations, we present TAP, a deep learning framework that automatically derives tensor-parallel plans for arbitrary neural networks without requiring expert annotations. TAP first constructs a skimmed DAG by removing auxiliary nodes; it then finds all of the shared subgraphs and searches for the optimal sharding schedule for each of them. In the end, TAP reconstructs the DAG by applying the found solution to the original graph. TAP drastically reduces the search space for tensor parallel plans, achieving a 20×-160× speedup compared with the state-of-the-art auto-parallel framework. Evaluations demonstrate that our approach can also generate tensor parallel plans comparable to solutions designed by experts.
Our paper makes the following contributions:
- A set of intermediate representations (IRs) of the computational graph that abstract away from low-level implementation details;
- A graph pruning algorithm that exploits the shared substructure to facilitate efficient searching;
- A communication-based cost model that accurately captures the communication requirements for tensor-parallel training.
## II. BACKGROUND
## A. Model Parallelism
Model parallelism distributes model weights onto different devices and synchronizes the full model through collective communication [6]. Model parallelism can be further divided into two categories: pipeline parallelism and tensor parallelism.
1) Tensor Parallelism: Tensor parallelism splits the model layer and distributes it across multiple devices, thus dispersing the computational overhead of the layer [17], [23], [26]. Each device stores only a portion of the input tensors in its local memory. The final result therefore needs to be aggregated from partial results through collective communication. Tensor parallelism can alleviate the challenge of training heterogeneous models using pipeline parallelism and can achieve better performance.
## B. Automatic Parallelism
Automatic parallelism is a recent line of research on automatically distributing a local model from a single device to multiple devices using the data and model parallel strategies. Existing approaches for automatic parallelism rely on user hints or brute-force searches across the entire space.
1) User hint: User-hint-based automatic parallelism scales single-device programs to multi-device systems by incorporating user annotations. For instance, GSPMD [26] infers the operator partitioning scheme based on user annotations, while Whale [12] allows for the inclusion of user hints for semi-automatic parallelization of large models and introduces a hardware-aware load-balancing algorithm. However, user-hint-based automatic parallelism approaches require users to possess a deep understanding of both the system and the model, and hard-coded user hints may not be transferable when either the model or the system changes.
2) Search algorithm: Recent work has proposed fully automatic approaches based on search algorithms to optimize distributed DNN training. For example, Tofu [25] uses a recursive search algorithm based on dynamic programming and DNN-specific heuristics to minimize communication for the entire dataflow graph. Flexflow [13] employs randomized search to find the best parallel strategy in the SOAP (Sample, Operator, Attribute, and Parameter) space. Alpa [28] optimizes large DL models through two-level optimizations: inter-operator and intra-operator. It automates inter-operator parallelism using dynamic programming and intra-operator parallelism with integer linear programming. Unity [24] represents parallelization and algebraic transformations as substitutions on a unified graph representation, uses a novel hierarchical search algorithm to identify an optimized sequence of substitutions, and scales to large numbers of GPUs and complex DNNs.
3) Challenge of exploding search space: Search-based approaches face the challenge of an exploding search space as model size scales, resulting in significant time cost. For example, each tensor (assuming 2D) can be partitioned in three ways: not sharded, sharded along the first dimension (row-wise), or sharded along the second dimension (column-wise). Given a neural network $G(E, V)$ with $V$ weight tensors, there exist $3^V$ possible sharding plans. Therefore, finding an optimal sharding plan is an NP-hard problem.
## III. APPROACH
In this section, we formulate the problem of searching for an optimal tensor parallel schedule, followed by our observation of the common presence of shared sub-structures in a large neural network, leading to the motivation of our design.
## A. Problem Formulation
A neural network can be represented as a directed acyclic graph $G(E, V)$ comprised of $L$ layers. The set of vertices $V$ represents the operators, and the set of edges $E$ represents the data flow from producer to consumer operators. Operators can optionally carry a weight tensor. During the forward pass, an edge represents an activation tensor, while in the backward phase, it represents a gradient tensor. A layer $L_i \in L$ is either a single layer or a cluster of operators with a similar composition. Let the physical training system be $S(m, n)$, where $m$ is the number of worker nodes and $n$ is the number of accelerators per worker node. A parallel plan $p$ is a new graph mathematically equivalent to $G$. The cost function $\mathrm{Cost}(p, S)$ measures training latency for a given plan and training system. The goal is to find an optimal parallel plan $p^*$ where:
$$
\operatorname*{minimize}_{p} \; \mathrm{Cost}(p, S)
$$

$$
\text{subject to } p(X) = G(X) \quad \forall X
$$
How can an automated system find such a plan? Fig. 1 illustrates the typical workflow of an auto-parallel system. The system first reduces the search space for model splitting using pruning techniques. Next, a search algorithm is employed to generate one or more candidate plans for evaluation. Finally, a cost model evaluates all candidate plans and selects the one with the lowest cost based on predefined evaluation criteria.

Fig. 1: General recipe of automatic model parallelism frameworks.
The end-to-end duration to produce an optimal schedule is a critical metric for an auto-parallel system. We identify three primary factors that contribute to the overall completion time: the size of the search space, the time complexity of the searching algorithm, and the speed of the evaluation method.
## B. Challenges and Observations
As mentioned earlier, a major challenge faced by auto-parallel systems is the search-space explosion problem. This exponential increase in candidate space has led to impractical search times for modern large models [28] (§V-B). This creates a dilemma: while auto-parallel systems aim to accelerate large model training, if the derivation step itself is too slow, it may offset the benefit of using an auto-parallel system.
How can this large candidate search space be reduced effectively? To answer this question, we studied common scaling techniques for popular DNN models and summarized our findings in Table I. We observe that these techniques can be grouped into two categories: scaling in width, achieved by increasing the dimension of layers (e.g., adding more classes, attention heads, or convolutional channels), or scaling in depth by increasing the number of layers. Notably, both techniques start with a base subgraph, a group of layers or operators, and expand from it. For instance, large pre-trained language models like BERT [7] and T5 [19] comprise tens of transformer layers, while multi-class object classification networks like ResNet-50 [11] are built from convolutional layers.
<table><tr><td>Scaling Technique</td><td>Task</td><td>Model</td><td>#Params</td><td>Shared Subgraph (SS)</td><td># of SS</td></tr><tr><td rowspan="5">By width</td><td>Vision</td><td>ResNet50 [11]</td><td>23M</td><td>Conv</td><td>50×</td></tr><tr><td>Vision + Language</td><td>CLIP-Base [18]</td><td>63M</td><td>Transformer</td><td>12×</td></tr><tr><td>Language Model</td><td>WideNet [27]</td><td>63M</td><td>MoE layer</td><td>32×</td></tr><tr><td>Vision</td><td>ViT-Huge [8]</td><td>632M</td><td>Transformer</td><td>32×</td></tr><tr><td>Vision</td><td>V-MoE [22]</td><td>15B</td><td>MoE layer</td><td>24×</td></tr><tr><td rowspan="5">By depth</td><td>Speech</td><td>wav2vec 2.0 [3]</td><td>317M</td><td>Conv, Transformer</td><td>7×, 24×</td></tr><tr><td>Language Model</td><td>BERT [7]</td><td>340M</td><td>Transformer</td><td>24×</td></tr><tr><td>Language Model</td><td>T5-Large [19]</td><td>770M</td><td>Transformer</td><td>24×</td></tr><tr><td>Language Model</td><td>GPT-3 [4]</td><td>175B</td><td>Transformer</td><td>96×</td></tr><tr><td>Language Model</td><td>Switch Transformer [10]</td><td>1571B</td><td>MoE layer</td><td>15×</td></tr></table>
TABLE I: Shared subgraphs exist on many neural network models. "Conv" means convolutional layer, "MoE" means Mixture-of-Expert layer.
Furthermore, upon analyzing expert-designed parallel schedules ([17], [20], [21]), we notice that parallel schedules are predominantly similar for layers of the same type, because similar layers have comparable computational and memory consumption. This finding motivates us to investigate reusing parallel schedules discovered for identical layers, which can reduce the search effort.
## IV. DESIGN AND IMPLEMENTATION
## A. Overview
Fig. 2 illustrates the workflow of TAP. Given a neural network represented as a graph, TAP first converts the graph into an intermediate representation (§IV-B) called GraphNode and removes auxiliary nodes. TAP then performs graph pruning (§IV-C) to restrict the search space from the complete graph to the subgraphs. After pruning, TAP explores the possible sharding opportunities using pre-defined sharding patterns (§IV-D) and validates the candidate plans (§IV-E). If a valid plan is found, it is evaluated using the cost model (§IV-F). TAP takes the overall best plan, performs additional communication-level optimizations, and rewrites the model into a parallel version (§IV-G). To use TAP, users only need to specify the device mesh, as shown in the example below.
Listing 1: Example with TAP on 2 workers, each with 8 GPUs

```python
import tensor_auto_parallel as tap

mesh = [2, 8]
tap.auto_parallel(tap.split(mesh))
model_def()
```
## B. Intermediate Representation
TAP defines a family of high-level Intermediate Representations (IRs) to facilitate the derivation of parallel schedules. Compared to MLIR HLO [14], TAP IRs operate on a coarser granularity while preserving the necessary information for sharding.
Upon obtaining the original neural network graph, TAP first trims the graph by deleting the auxiliary operators (Step ① in Fig. 2). This removes the initialization and checkpoint-related operators, which are recovered when the graph is converted back to a neural network graph later. As a result, the remaining graph consists of only computing and communication operators.
TAP IRs consist of:
a) GraphNode.: A GraphNode represents a group of computing or communication operators. It can be a layer or a logical group of operators, which is the basic unit for deriving the sharding schedule. The TAP graph is made of GraphNode while preserving the directed edges from the original DAG. Using the GraphNode IR, we reduce the number of nodes in the T5-large model from ${60}\mathrm{k}$ to 1015 weight variables.

b) Sharding pattern: A GraphNode can be sharded in multiple ways. For instance, a 2D weight matrix can be split along either dimension or replicated. TAP defines each sharding pattern using the SRC abstraction and assigns each pattern a cost based on the communication it incurs.

c) Sharding plan: A sharding plan is a set of subgraphs (blocks of GraphNodes) connected by sharding patterns.

## C. Pruning using Shared Subgraph

It is common for DNN models to contain shared subgraphs. If we can identify them, we can prune the search space by searching only within the shared subgraph. We propose a graph pruning algorithm that compresses the search space onto the shared structure (Step ② in Fig. 2).

In deep learning frameworks like TensorFlow [2], each variable is referred to by the operator that produces it. As such, variables under the same layer share the same name scope because they receive input from the same operator. Therefore, it is possible to cluster operators that fall under the same name scope.
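
As a toy illustration of this idea (the variable names below are invented for this example, not taken from a real model), clustering by name scope amounts to grouping on a common name prefix:

```python
from collections import defaultdict

# Hypothetical TensorFlow-style variable names sharing name scopes.
names = [
    "encoder/layer_0/attention/query/kernel",
    "encoder/layer_0/attention/key/kernel",
    "encoder/layer_1/attention/query/kernel",
    "encoder/layer_1/attention/key/kernel",
]

# Cluster by the scope formed from the first two path components, e.g. "encoder/layer_0".
clusters = defaultdict(list)
for name in names:
    clusters["/".join(name.split("/")[:2])].append(name)

print(dict(clusters))  # two clusters, one per layer
```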

Algorithm 1 starts by constructing a nodeTree, which identifies and groups the GraphNodes at each level by running the longest-common-prefix algorithm on the GraphNode names (lines 2-5). It then finds the blocks of GraphNodes with a similar composition of operators and compares the count against the minimum duplicate threshold (line 7). As the depth decreases, we see larger subgraphs with less homogeneous compositions. Note that multiple shared subgraphs may exist, since a neural network may have multiple leaf nodes.

Fig. 2: Overview of the TAP system.

Algorithm 1 Graph Pruning

```
procedure PruneGraph(modelDef, minDuplicate)
    nodeTree ← ∅
    maxDepth ← modelDef.depth
    for all depth ∈ maxDepth … 1 do
        nodeTree[depth] ← longestCommonPrefix(modelDef.nodes.name)
        opCount ← findSimilarBlk(nodeTree[depth])
        if opCount ≥ minDuplicate then
            subgraphs.append(nodeTree[depth])
        else
            break
    return subgraphs
```
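
The Python sketch below captures the spirit of Algorithm 1 under simplifying assumptions of ours: blocks are taken directly from name-scope prefixes, and `findSimilarBlk` is approximated by counting blocks whose operator compositions are identical.

```python
from collections import Counter

def prune_graph(node_names, min_duplicate=2):
    """Simplified sketch of Algorithm 1: scan name-scope levels from the
    deepest to the shallowest and record each level that still contains at
    least `min_duplicate` blocks with an identical composition of operators."""
    max_depth = max(name.count("/") for name in node_names)
    shared_levels = []
    for depth in range(max_depth, 0, -1):
        blocks = {}
        for name in node_names:
            parts = name.split("/")
            prefix = "/".join(parts[:depth])            # the block's name scope
            blocks.setdefault(prefix, []).append("/".join(parts[depth:]))
        # A block's signature is the multiset of (relative) operator names inside it.
        signatures = Counter(tuple(sorted(ops)) for ops in blocks.values())
        if max(signatures.values()) >= min_duplicate:
            shared_levels.append(depth)                 # shared subgraph found at this depth
        else:
            break                                       # shallower blocks are less homogeneous
    return shared_levels
```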

## D. Sharding Plan Generator

A sharding pattern, which defines the way a GraphNode can be sharded, also serves as a directed edge between nodes. Under the SRC abstraction, the communication pattern is determined once the split/replicate decision is made. Under the hood, the sharding patterns connect to each other like a chain.

After pruning, TAP proceeds to derive the optimal plan (Steps ③ and ④) using Algorithm 2. In the first phase, TAP enumerates all possible sharding plans for the subgraphs; thanks to pruning, TAP only needs to consider hundreds of plans. However, not every plan is valid, because at this point we only have weakly connected subgraphs. The candidate plans therefore need to be validated by checking their connectivity (lines 5-10). After validation, TAP evaluates the performance of each valid plan using a cost model and selects the best one.

Algorithm 2 Derivation of Optimal Plan

```
procedure DerivePlan(modelDef, shardingPatterns)
    subgraphs ← PruneGraph(modelDef)
    candidatePlans ← enumerateAllPlans(subgraphs)
    validPlans ← {}
    for all p ∈ candidatePlans do
        validated ← PatternRouting(p)
        if validated then
            validPlans.insert(p)
    bestPlan ← min(QueryCost(validPlans))
    return bestPlan
```
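
To make the enumeration concrete, the sketch below follows the three ways a 2D weight tensor can be partitioned that were discussed earlier (replicated, split row-wise, or split column-wise); with k shardable weights in a subgraph this yields 3^k candidates, which would be consistent with the 729 (= 3^6) candidate plans reported in §V-B for a single transformer block. The weight names are invented for illustration.

```python
import itertools

# Three ways to shard a 2D weight tensor: replicate, split rows, split columns.
PATTERNS = ("replicate", "split_dim0", "split_dim1")

def enumerate_all_plans(shardable_weights):
    """Cartesian product of per-weight sharding choices: 3**k candidate plans
    for k shardable weights, enumerated per shared subgraph rather than per model."""
    for choices in itertools.product(PATTERNS, repeat=len(shardable_weights)):
        yield dict(zip(shardable_weights, choices))

plans = list(enumerate_all_plans(["attn/qkv", "attn/out", "ffn/in", "ffn/out"]))
print(len(plans))  # 3**4 = 81
```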

## E. Pattern Routing

Algorithm 3 Plan Validation

```
procedure PatternRouting(currPlan)
    TopoSort(currPlan)
    nodesQ ← currPlan.root
    while nodesQ ≠ ∅ do
        currNode ← nodesQ.dequeue()
        for all childNode ∈ currNode.next() do
            sp ← lookUpShrdPatn(currNode, childNode)
            if sp ≠ ∅ then
                if childNode == currPlan.leaf then
                    return TRUE
                else
                    nodesQ.enqueue(childNode)
    return FALSE
```

In the pattern routing step (Algorithm 3), TAP tries to assemble the weakly connected GraphNodes into a valid sharding plan by checking their connectivity. This ensures the success of graph rewriting (Step ⑤). TAP does so using breadth-first search (BFS) starting from the root node; the goal is to make sure that at least one path from the root to the leaf is chained through the sharding patterns.

One challenge is that a pair of contracting sharding patterns may have different input and output tensors, and a consumer operator's input is not ready until its producer has run. In other words, dependencies exist between GraphNodes, but this information was kept in the original edges and can be lost during pruning.

To solve this, we perform a topological search over the GraphNodes based on the readiness of their input tensors. We leverage the fact that a neural network can be represented as a directed acyclic graph, and reconstruct the edges from the producer-consumer order. This way, TAP avoids checking the order for every pair of GraphNodes.
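
A Python rendering of this validation step is sketched below; the node interface (`next()`) and the pattern-lookup callback are our assumptions standing in for the paper's lookUpShrdPatn.

```python
from collections import deque

def pattern_routing(root, leaf, lookup_pattern):
    """Sketch of Algorithm 3: BFS from the root over topologically ordered
    GraphNodes; a plan is valid if some root-to-leaf path is chained entirely
    through existing sharding patterns. `lookup_pattern(u, v)` returns None
    when no sharding pattern connects u to v."""
    queue = deque([root])
    while queue:
        curr = queue.popleft()
        for child in curr.next():
            if lookup_pattern(curr, child) is None:
                continue            # no pattern connects this pair; the path dies here
            if child is leaf:
                return True         # a fully chained root-to-leaf path exists
            queue.append(child)
    return False
```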

## F. Cost Model

To build a cost model, we first profile different tensor parallel plans to understand the bottleneck. Fig. 3 summarizes the results. Data were collected from two nodes interconnected by 32 Gbps Ethernet, each equipped with 8 GPUs. We observe that inter-node communication is the main bottleneck for tensor parallelism, and that the best plan is not necessarily the one that splits every weight tensor, in line with [6].

Fig. 3: Time breakdown for tensor parallel plans of the T5-large model on 8 and 16 GPUs (8w/16w). DP means data parallel, MHA means sharding the multi-head attention, FFN means sharding the feed-forward layer, and Megatron refers to the tensor sharding plan described in [17].

As the number of devices increases from 8 to 16, the difference between communication time and computation time becomes more pronounced. This is because the bottleneck shifts from high-speed intra-node communication (PCIe) to slower inter-node communication (Ethernet).

Furthermore, the best tensor parallel plan for 16 GPUs (16w-FFN) only shards the weights in the feed-forward layer. We conjecture that with more tensors split instead of replicated, there are fewer FLOPs per device, so the computation time is lower; however, this comes at the cost of more communication. When training in a data center where nodes are interconnected by Ethernet, the bottleneck may thus shift from computation to communication. Therefore, communication cost is the main consideration in the design of our cost model.

TAP addresses these issues using an analytical cost model based on the tensor's communication method, shape, and data format. Each sharding pattern is associated with a cost, and the total cost is calculated by summing all pattern costs along the critical path.
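
The paper does not spell out the analytical formula, so as an illustration only, the sketch below uses the standard ring all-reduce estimate as a per-pattern communication cost; TAP's actual model may differ.

```python
def allreduce_cost_s(num_elements, dtype_bytes, num_devices, bandwidth_bytes_per_s):
    """Standard ring all-reduce estimate: each device sends and receives about
    2 * (n - 1) / n of the tensor's payload."""
    payload = num_elements * dtype_bytes
    return 2 * (num_devices - 1) / num_devices * payload / bandwidth_bytes_per_s

# Example: all-reducing a 1024x1024 fp32 gradient across 16 GPUs over 32 Gbps Ethernet.
print(f"{allreduce_cost_s(1024 * 1024, 4, 16, 32e9 / 8) * 1e3:.2f} ms")  # roughly 2 ms
```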

## G. Graph Rewriting

After evaluating the cost of each sharding plan, TAP assembles the parallel plan. It does so by first restoring the original order of operators. Then, TAP identifies optimization opportunities such as gradient packing. In the end, TAP passes the resulting parallelized neural network plan to the deep learning framework runtime.
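
The paper names gradient packing without detailing it, so the sketch below only shows the generic idea under our own assumptions: many small gradients are flattened into one buffer so that a single collective call can replace many small, latency-bound ones.

```python
import numpy as np

def pack_gradients(grads):
    """Flatten and concatenate gradients into one contiguous buffer."""
    flat = np.concatenate([g.ravel() for g in grads])
    shapes = [g.shape for g in grads]
    return flat, shapes

def unpack_gradients(flat, shapes):
    """Split the buffer back into tensors of the original shapes."""
    out, offset = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        out.append(flat[offset:offset + size].reshape(shape))
        offset += size
    return out

flat, shapes = pack_gradients([np.zeros((4, 4)), np.zeros(10)])
# ... a single all-reduce over `flat` would go here ...
grads = unpack_gradients(flat, shapes)
assert [g.shape for g in grads] == [(4, 4), (10,)]
```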

## H. Limitations and Future Work

To further optimize memory consumption, TAP could leverage other orthogonal techniques such as Auto Mixed Precision (AMP) [1], recomputation [5], and pipeline parallelism. Since both AMP and TAP optimize on the graph representation of the neural network, they can be implemented as separate passes. Also, gradient checkpointing can be used to offload selected GraphNodes onto main memory. TAP may also be combined with pipeline parallelism through automatic [9], [12], [15], [16] or manual placement.

## V. Preliminary Evaluation

## A. Setup

We first evaluate the pruning algorithm and the use of Just-In-Time compilation for TAP. Then, for comparison with another auto-parallel framework, we use Alpa version 0.7 running with JAX 0.3.5. Next, we use Megatron running on PyTorch to compare against expert-engineered tensor parallel plans. Finally, we present the training convergence when running gigantic neural networks.

The evaluation was performed on Company A's public cloud nodes, each with 756 GB of main memory, 2 Intel 8163 CPUs with 24 cores each, and 8 Nvidia V100 SXM2 32 GB GPUs. TAP builds on top of TensorFlow 1.12.

## B. End-to-End Evaluation

In this section, we compare TAP with the auto-parallel framework Alpa on search time and the performance of the discovered plans.

1) Search time: As explained in §??, TAP has sub-linear time complexity, which is desirable as model size scales up. In the experiments with Alpa, we report the end-to-end search time with respect to model scaling, defined as the duration from the start of the experiment until the moment the training process begins. Due to time constraints, we shortlisted a search space of 16 plans for T5 and 5 plans for ResNet, while we did not restrict the search space for TAP.

Fig. 4: End-to-end search time when scaling the number of parameters of the dense transformer model.

Fig. 5: End-to-end search time when scaling the number of parameters of the large-scale classification model.

To scale the model along its depth, we increase the number of transformer layers of T5, an encoder-decoder transformer architecture for language modeling. Increasing the depth of dense transformer models is a common practice to improve performance. Fig. 4 shows that, with a rising parameter count, TAP can still find a plausible schedule in under 15 minutes, which is $21\times$ to $67\times$ faster than Alpa.

To scale the model along its width, we increase the size of the classification layer of ResNet50. The original ResNet50 model has 1024 classes in its classification layer; as we increase this dimension, the total number of parameters scales up. As shown in Fig. 5, TAP is two orders of magnitude faster than Alpa in finding the optimal solution, outperforming it by $103\times$ to $162\times$.

We further analyze the time breakdown during the search. For example, for the 24-layer T5-large (770M parameters), Alpa spends 5 minutes profiling the operators and another 5 minutes constructing the pipeline stages out of the operators. TAP, instead, reduces the architecture to a single transformer block and searches for shardable parameters only within it, drastically reducing the search space. As a result, Alpa takes 197 minutes to search through 16 candidate plans, while TAP requires only 6 minutes to examine 729 candidate plans.

Fig. 6: Training time per iteration for T5 (batch size = 16). The blue band represents the standard deviation.

Fig. 7: Training time per iteration for ResNet50 (batch size = 1024).

2) Training speed: We also evaluate the performance of the best plans produced by Alpa and TAP. We observe that Alpa favors pipeline parallel schedules, while the optimal schedule found by TAP is similar to the Megatron-style tensor parallel schedule. Since the plans using pipeline parallelism require less communication, the plans from Alpa achieve higher throughput on T5 (Fig. 6).

We also observe that as the width of the model increases, the performance of TAP's plans becomes better and more consistent. Fig. 7 shows the time to finish one training iteration for the parallel plans of ResNet50. We first observe that TAP consistently outperforms Alpa. Further, the variance (blue band) in the plans discovered by Alpa shows that it struggles to find consistently good plans.

## VI. CONCLUSION

We present TAP, an automatic parallelism framework that efficiently discovers data/tensor parallel plans for large models. Leveraging the observation that shared subgraphs widely exist in neural networks, we design a pruning algorithm that efficiently reduces the search space with a sub-linear end-to-end complexity. The best plans found by TAP are comparable with the state-of-the-art expert-engineered plans while only taking minutes to discover.

## REFERENCES

[1] "Automatic mixed precision for deep learning," https://developer.nvidia.com/automatic-mixed-precision.

[2] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, "TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems," 2016. [Online]. Available: http://arxiv.org/abs/1603.04467

[3] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," Advances in Neural Information Processing Systems, vol. 33, pp. 12449-12460, 2020.

[4] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877-1901, 2020.

[5] T. Chen, B. Xu, C. Zhang, and C. Guestrin, "Training deep nets with sublinear memory cost," arXiv preprint arXiv:1604.06174, 2016.

[6] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. A. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng, "Large scale distributed deep networks," Tech. Rep., 2012.

[7] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," Tech. Rep., 2019. [Online]. Available: https://github.com/tensorflow/tensor2tensor

[8] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.

[9] S. Fan, Y. Rong, C. Meng, Z. Cao, S. Wang, Z. Zheng, C. Wu, G. Long, J. Yang, L. Xia, L. Diao, X. Liu, and W. Lin, "DAPPLE: A pipelined data parallel approach for training large models," in Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming (PPoPP), 2021, pp. 431-445. [Online]. Available: https://doi.org/10.1145/3437801.3441593

[10] W. Fedus, B. Zoph, and N. Shazeer, "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity," 2021.

[11] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778. [Online]. Available: http://image-net.org/challenges/LSVRC/2015/

[12] X. Jia, L. Jiang, A. Wang, W. Xiao, Z. Shi, J. Zhang, X. Li, L. Chen, Y. Li, Z. Zheng, X. Liu, and W. Lin, "Whale: Efficient giant model training over heterogeneous GPUs," in USENIX Annual Technical Conference. USENIX, 2022.

[13] Z. Jia, M. Zaharia, and A. Aiken, "Beyond Data and Model Parallelism for Deep Neural Networks," arXiv, 2018. [Online]. Available: http://arxiv.org/abs/1807.05358

[14] C. Lattner, M. Amini, U. Bondhugula, A. Cohen, A. Davis, J. Pienaar, R. Riddle, T. Shpeisman, N. Vasilache, and O. Zinenko, "MLIR: Scaling compiler infrastructure for domain specific computation," in 2021 IEEE/ACM International Symposium on Code Generation and Optimization (CGO), 2021, pp. 2-14.

[15] Z. Li, S. Zhuang, S. Guo, D. Zhuo, H. Zhang, D. Song, and I. Stoica, "TeraPipe: Token-Level Pipeline Parallelism for Training Large-Scale Language Models," 2021. [Online]. Available: http://arxiv.org/abs/2102.07988

[16] D. Narayanan, A. Harlap, A. Phanishayee, V. Seshadri, N. R. Devanur, G. R. Ganger, P. B. Gibbons, and M. Zaharia, "PipeDream: Generalized pipeline parallelism for DNN training," in Proceedings of the 27th ACM Symposium on Operating Systems Principles (SOSP), 2019, pp. 1-15. [Online]. Available: https://doi.org/10.1145/3341301.3359646

[17] D. Narayanan, M. Shoeybi, J. Casper, P. LeGresley, M. Patwary, V. Korthikanti, D. Vainbrand, P. Kashinkunti, J. Bernauer, B. Catanzaro, A. Phanishayee, and M. Zaharia, "Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM," in International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2021. [Online]. Available: http://arxiv.org/abs/2104.04473

[18] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark et al., "Learning transferable visual models from natural language supervision," in International Conference on Machine Learning. PMLR, 2021, pp. 8748-8763.

[19] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," Tech. Rep., 2020. [Online]. Available: http://jmlr.org/papers/v21/20-074.html

[20] S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He, "ZeRO: Memory optimizations toward training trillion parameter models," in International Conference for High Performance Computing, Networking, Storage and Analysis (SC), 2020.

[21] J. Ren, S. Rajbhandari, R. Y. Aminabadi, O. Ruwase, S. Yang, M. Zhang, D. Li, and Y. He, "ZeRO-Offload: Democratizing billion-scale model training," in 2021 USENIX Annual Technical Conference, 2021, pp. 551-564. [Online]. Available: https://www.deepspeed.ai/tutorials/

[22] C. Riquelme, J. Puigcerver, B. Mustafa, M. Neumann, R. Jenatton, A. Susano Pinto, D. Keysers, and N. Houlsby, "Scaling vision with sparse mixture of experts," Advances in Neural Information Processing Systems, vol. 34, pp. 8583-8595, 2021.

[23] M. Shoeybi, M. Patwary, R. Puri, P. LeGresley, J. Casper, and B. Catanzaro, "Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism," 2019. [Online]. Available: http://arxiv.org/abs/1909.08053

[24] C. Unger, Z. Jia, W. Wu, S. Lin, M. Baines, C. E. Q. Narvaez, V. Ramakrishnaiah, N. Prajapati, P. McCormick, J. Mohd-Yusof, X. Luo, D. Mudigere, J. Park, M. Smelyanskiy, and A. Aiken, "Unity: Accelerating DNN Training Through Joint Optimization of Algebraic Transformations and Parallelization," in 16th USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2022. [Online]. Available: https://www.usenix.org/conference/osdi22/presentation/unger

[25] M. Wang, C.-C. Huang, and J. Li, "Supporting very large models using automatic dataflow graph partitioning," in Proceedings of the 14th EuroSys Conference, 2019.

[26] Y. Xu, H. Lee, D. Chen, B. Hechtman, Y. Huang, R. Joshi, M. Krikun, D. Lepikhin, A. Ly, M. Maggioni, R. Pang, N. Shazeer, S. Wang, T. Wang, Y. Wu, and Z. Chen, "GSPMD: General and Scalable Parallelization for ML Computation Graphs," 2021. [Online]. Available: http://arxiv.org/abs/2105.04663

[27] F. Xue, Z. Shi, F. Wei, Y. Lou, Y. Liu, and Y. You, "Go wider instead of deeper," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 36, no. 8, 2022, pp. 8779-8787.

[28] L. Zheng, Z. Li, H. Zhang, Y. Zhuang, Z. Chen, Y. Huang, Y. Wang, Y. Xu, D. Zhuo, J. E. Gonzalez, and I. Stoica, "Alpa: Automating Inter- and Intra-Operator Parallelism for Distributed Deep Learning," 2022. [Online]. Available: http://arxiv.org/abs/2201.12023

papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/GtyQbLUUagE/Initial_manuscript_md/Initial_manuscript.md
ADDED

# Full Stack Optimization of Transformer Inference

ISCA 2023 Submission #NaN - Confidential Draft - Do NOT Distribute!!

Abstract-Recent advances in state-of-the-art neural network architecture design have been moving toward Transformer models. These models achieve superior accuracy across a wide range of applications in computer vision, natural language processing, and speech recognition. This trend has been consistent over the past several years since Transformer models were originally introduced. However, the amount of compute and bandwidth required for inference of recent Transformer models is growing at a significant rate, and this has made their deployment in latency-sensitive applications challenging. As such, there has been an increased focus on making Transformer models more efficient, with methods that range from changing the architecture design, all the way to developing dedicated domain-specific accelerators.
|
| 6 |
+
|
| 7 |
+
In this work, we pursue a full-stack approach to optimizing Transformer inference. We analyze the implications of the Transformer architecture on hardware, including the impact of nonlinear operations such as Layer Normalization, Softmax, and GELU, as well as linear operations, and we use this analysis to optimize a fixed Transformer architecture. We assess the challenges with finding the right mapping and scheduling of operations for Transformer models, and pursue neural architecture search to further optimize the Transformer network. We find that a full-stack co-design approach with the aforementioned methods can result in up to ${88.7} \times$ end-to-end speedup with minimal performance degradation for Transformer inference.
|
| 8 |
+
|
| 9 |
+
## I. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Deep learning models have scaled up to billions of parameters and billions of multiply-accumulate operations during both training and inference. As a result, there has been a growing interest in computing these models efficiently and in deploying these compute and memory-intensive workloads on resource-constrained edge devices. These edge devices have tight energy and memory constraints, and the corresponding applications that leverage deep learning models also often have real-time latency constraints.
|
| 12 |
+
|
| 13 |
+
The demand for fast and efficient computation, coupled with the characteristics of deep learning workloads that involve a small set of distinct operations with substantial data reuse, have led to the use of hardware accelerators. A multitude of enterprise deep learning accelerators, such as [1], [3], [17], [23], [25], [27]-[29], [36], [43], [45], have been developed and integrated into commodity hardware by industry in the past decade. This parallels many research accelerators developed in academia [7]-[10], [16], [18]-[20], [35]. Together with hardware accelerator development, the software frameworks [2], [5], [24], [33] and compilers [6], [31], [41] for deploying various deep learning algorithms have also enhanced and matured. These tools enable the execution of deep learning algorithms on accelerators, and they perform mapping optimizations to improve the performance and efficiency of the full deep learning pipeline. Nonetheless, the fast-evolving deep learning algorithms still keep introducing new demands for hardware and software support, as well as their co-optimization, to satisfy various deployment constraints.
|
| 14 |
+
|
| 15 |
+
The recent rise in popularity of Transformers and large language models [4], [12], [14], [15], [21], [37]-[40], [42], [44] for solving various natural language processing (NLP) tasks presents a brand new set of challenges in the design of accelerators as well as frameworks. There has been an increased focus on making Transformer inference more efficient, especially due to their growing size and run-time complexity. However, there is still a lack of understanding regarding the workload characteristics of Transformer architectures, and thus of the design principles necessary for effectively running these models, when compared to the more well-known convolutional neural network (CNN) architectures. For instance, compared to the conventional CNN-focused design, Transformers are mostly composed of matrix multiplications (matmuls) together with memory-intensive nonlinear operations. In addition, the computational graph and dataflow of Transformer models are more complex than that of CNNs, with more types of operation nodes, as well as more dataflow splits and concatenations. All these challenges require us to undertake a comprehensive analysis of the current hardware and software solutions as well as the various design trade-offs for Transformer inference.
|
| 16 |
+
|
| 17 |
+
Our analysis yielded several key findings:
|
| 18 |
+
|
| 19 |
+
- We adapt Gemmini [19], which was originally designed for CNN workloads, for Transformer inference. Without modifications, the primary bottleneck for running Transformers on CNN accelerators is the time spent on floating-point non-linear operations. However, by adapting Gemmini to support an integer-only BERT variant [26], and tuning the memory configuration, we improve performance by ${39.6} \times$ .
|
| 20 |
+
|
| 21 |
+
- Fusing BatchNorm with the neighboring convolution in CNNs is straightforward. However, the benefits of fusing operations in the Transformer architecture with the preceding matmuls depends on the particular operation as it can impose constraints on the mapping, leading to runtime costs that outweigh the gains from operator fusion.
|
| 22 |
+
|
| 23 |
+
- We apply automated neural architecture search (NAS) to search for efficient and high-performance Transformer architectures on Gemmini-driven hardware. NAS finds an architecture that improves EDP by ${10.6} \times$ with minimal degradation on target benchmark. Combined with the hardware improvement, we achieve ${88.7} \times$ end-to-end speedup.
|
| 24 |
+
|
| 25 |
+
## II. Hardware Architecture Optimization
|
| 26 |
+
|
| 27 |
+
We first illustrate how architects familiar with mainstream accelerators for convolutional, vision-based workloads can design state-of-the-art Transformer accelerators. We start with a fairly typical CNN accelerator generated by the Gemmini [19] accelerator-generator, optimized primarily for ResNet50-like workloads, and we discuss changes we made to this accelerator and its software stack to efficiently support Transformer workloads such as BERT. Throughout this section, we use BERT-Base as a workload.
|
| 28 |
+
|
| 29 |
+

|
| 30 |
+
|
| 31 |
+
Fig. 1: Map of the computations performed in (Top) the multihead attention (MHA) module and (Bottom) the feed-forward network (FFN) module in the Transformer encoder block
|
| 32 |
+
|
| 33 |
+
1) Baseline Accelerator: We first generate a fairly typical CNN accelerator with a ${16} \times {16}$ systolic array and the weight-stationary dataflow using the Gemmini accelerator-generator. The 8-bit integer weights and inputs are stored in a ${256}\mathrm{{kB}}$ local scratchpad memory, and the 32-bit partial sums are stored in a dual-ported ${64}\mathrm{\;{kB}}$ accumulator SRAM which performs matrix additions. When DNN layers are too large to fit into the local scratchpad, they fall back onto an external L2 cache and DRAM which are shared with CPUs and other accelerators on the system-on-chip (SoC). A host CPU tiles such layers to compute the full outputs. The baseline accelerator produced by Gemmini incorporates peripheral circuitry that enables the execution of ReLU and max-pool operations, alongside integer-float multipliers that facilitate the scaling of 32-bit partial sums into 8-bit inputs for the subsequent layer. Native support for these operations is important, as it eliminates the necessity of offloading such operations to the host CPUs, thereby circumventing the costly transfers of activations between DRAM or outer caches and the local scratchpad.
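To make the partial-sum rescaling concrete, the following is a minimal sketch of the kind of int32-to-int8 requantization step this peripheral circuitry performs; the `requantize` helper and the scale value are illustrative assumptions, not Gemmini's exact implementation.

```python
import numpy as np

def requantize(acc_int32: np.ndarray, scale: float) -> np.ndarray:
    # Integer-float multiply: rescale the 32-bit partial sums, then
    # round and saturate to the int8 range for the next layer's inputs.
    x = np.rint(acc_int32.astype(np.float64) * scale)
    return np.clip(x, -128, 127).astype(np.int8)

# Example: a tile of int32 partial sums scaled back down to int8 activations.
acc = np.array([[51200, -204800], [12345, 700]], dtype=np.int32)
print(requantize(acc, scale=1.0 / 512.0))  # [[100 -128] [24 1]]
```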
Finally, note that this baseline CNN accelerator does not include any Transformer-specific features. In particular, there is no support for nonlinear normalization operations such as GELU, Softmax, or LayerNorm. Therefore, although it achieves real-time or near-real-time performance on end-to-end CNN workloads, its performance on Transformer workloads such as BERT is severely limited [19], as will be discussed in more detail.

2) Performance Bottlenecks: We observe that the baseline CNN accelerator, when deployed for Transformer inference, exhibits $< 1\%$ utilization of its functional units. Although individual matmuls exhibit 74% utilization, performance is severely impeded by the nonlinear operations, which must be executed on the host CPU since they are not natively supported by the accelerator. This is further exacerbated by the fact that the nonlinear operations require floating-point arithmetic. Not only is this less energy- and latency-efficient than integer arithmetic [22], it also entails dequantization and re-quantization of the activations. These overheads account for 96% of the overall execution time (Fig. 2). Given that the majority of FLOPs in Transformer inference are in matmuls, the time spent on nonlinear operations keeps the baseline accelerator far from the theoretical optimum unless further optimizations are implemented.

In contrast to the convolutions in CNNs, which exhibit high arithmetic intensity, Transformers mostly comprise matmuls, often with small and/or rectangular matrices, which translates to lower arithmetic intensities and different optimal tiling strategies. This indicates that the memory hierarchy and memory bandwidth of our baseline CNN accelerator need to be recalibrated for more efficient Transformer inference.
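To illustrate the gap, here is a small back-of-the-envelope helper for the arithmetic intensity of a matmul, under the simplifying assumptions that each operand and the output cross the memory interface exactly once and that every element is one byte; the function and the exact figures are illustrative, not measurements from our setup.

```python
def matmul_arithmetic_intensity(m: int, k: int, n: int) -> float:
    # C[m,n] = A[m,k] @ B[k,n]: one multiply and one add per MAC,
    # with A, B, and C each moved once at one byte per element.
    flops = 2 * m * k * n
    bytes_moved = m * k + k * n + m * n
    return flops / bytes_moved

# Per-head query x key matmul in BERT-Base (l = 512, d/h = 768/12 = 64)
print(matmul_arithmetic_intensity(512, 64, 512))   # ~102 FLOPs/byte
# A square 512x512x512 matmul, for comparison
print(matmul_arithmetic_intensity(512, 512, 512))  # ~341 FLOPs/byte
```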
3) Memory Configuration Re-adjustment: We observe that the performance of BERT matmul operations can be significantly improved by adjusting the sizes of the input/weight scratchpad and the partial-sum accumulator. Specifically, we find that larger accumulators with higher output reuse are more suitable for several matmuls in Transformers, such as the query $\times$ key matmuls, whose $l \times l$ output activation matrices can be much larger than the $l \times d/h$ input matrices, for sequence length $l$, hidden dimension $d$, and number of heads $h$. Based on this observation, we modify the CNN-optimized memory configuration of our baseline accelerator by reducing the size of the scratchpad from 256 kB to 64 kB and increasing the size of the accumulator from 64 kB to 256 kB. Importantly, these changes do not increase the total SRAM capacity or the total area; however, they yield a substantial 36% reduction in total matmul latency.
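The footprint arithmetic behind this swap is easy to reproduce; the snippet below plugs in the BERT-Base dimensions (the bookkeeping itself is just illustrative).

```python
# Per-head query x key matmul in BERT-Base: l = 512, d = 768, h = 12.
l, d, h = 512, 768, 12
input_bytes = 2 * l * (d // h) * 1   # int8 query and key tiles
output_bytes = l * l * 4             # int32 l x l partial-sum matrix
print(input_bytes / 1024, "kB in,", output_bytes / 1024, "kB out")
# 64.0 kB in, 1024.0 kB out
```

The output tile is 16$\times$ larger than both input tiles combined, which is why shifting SRAM capacity from the input scratchpad to the accumulator pays off for these layers.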
4) Hardware-Software Co-Design: To alleviate the overhead incurred by runtime quantization and dequantization, as well as by offloading nonlinear operations to the CPU, we transition our baseline Transformer workload from a naive BERT implementation, where only matmuls are quantized, to an integer-only BERT variant known as I-BERT [26]. I-BERT substitutes floating-point nonlinear operations with integer polynomial approximations, which can be implemented faster and more efficiently in specialized accelerators. To incorporate I-BERT, we add new integer implementations of I-BERT's GELU, LayerNorm, and Softmax variants to our baseline CNN accelerator. The 32-bit matmul results residing in the accumulator are fed into a newly added "normalization unit" which computes the reduction operations (e.g., sum, sum-of-squares, max) used by LayerNorm and Softmax. Multiple passes of accumulator reads are required to compute all the reductions in these operations. Subsequently, the matmul results in the accumulator undergo a final read to be fed into a set of 16 activation units, which compute I-BERT's nonlinear variants in parallel.
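As a flavor of what such a polynomial approximation looks like, the sketch below evaluates GELU using the second-order polynomial fit of erf from the I-BERT paper (constants $a = -0.2888$, $b = -1.769$); it is written in floating point for readability, whereas I-BERT evaluates the same polynomial with integer-only arithmetic on the accelerator.

```python
import numpy as np

def poly_gelu(x: np.ndarray) -> np.ndarray:
    # I-BERT-style i-GELU: approximate erf(y) on [0, -b] with the
    # second-order polynomial a*(y + b)**2 + 1, mirrored via sign(y).
    a, b = -0.2888, -1.769
    y = x / np.sqrt(2.0)
    erf = np.sign(y) * (a * (np.minimum(np.abs(y), -b) + b) ** 2 + 1.0)
    return 0.5 * x * (1.0 + erf)

x = np.linspace(-4, 4, 9)
print(np.round(poly_gelu(x), 3))  # closely tracks the exact GELU
```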
With these new features, overall end-to-end BERT inference performance improved by $39.6\times$ over the baseline accelerator's initial performance. As Fig. 2 illustrates, the computational bottleneck once again became the matmuls rather than the normalization or activation functions. Quantization and dequantization are no longer necessary, and GELU can be trivially fused with the preceding matmul, so that the two become one pipelined operation. When synthesized with the ASAP7 PDK [13], the new hardware units increased the total area consumption of the accelerator by only 14%, and the GELU, LayerNorm, and Softmax operations increased the power consumption of a BERT inference by only 9.3%.



Fig. 2: The time breakdown of a BERT inference with a sequence length of 512, when running on (Left) the baseline CNN accelerator and (Middle) the accelerator with I-BERT's hardware/software features incorporated. (Right) The time breakdown with different sequence lengths after the change. For all sequence lengths, the total execution time is dominated by matmuls.

## III. SCHEDULING OPTIMIZATION

In Sec. II, we demonstrated that the nonlinear operations in Transformers introduce challenges to efficient accelerator design. We further find that these operations present non-trivial challenges to the scheduling problem as well.

Generally in DNN scheduling, it is an enticing strategy to fuse relatively high-arithmetic-intensity matmuls with the following low-arithmetic-intensity normalization operations. For example, execution schedulers for CNN-type accelerators often fuse convolutions with ReLU or max-pool operations. This strategy is especially applicable to quantized workloads, where partial sums awaiting normalization are often of higher bitwidth than the final normalized outputs.

Similarly, for Transformer encoders, we could overlap the execution of normalization operations (LayerNorm and Softmax) with their preceding matmuls. However, this strategy may require hardware/software changes. First, in the case of DNN accelerators like Gemmini, additional hardware support may be required for the normalization units to directly access partial sums. Second, appropriate constraints on the matmul execution schedule are necessary. In particular, the tiling factor of either output dimension of the matmul must be maximized, so that full rows/columns are immediately ready in the Gemmini accumulator scratchpad for computing the mean and standard deviation. We refer to this alternative scheduling approach as fusion-optimized scheduling.
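A minimal functional sketch of the idea follows: the matmul is tiled so that complete output rows finish together, and each finished row block is normalized immediately rather than in a separate pass. On real hardware the payoff is latency hiding between the systolic array and the normalization unit; this NumPy version (names and tile size are illustrative assumptions) only shows the scheduling constraint.

```python
import numpy as np

def fused_matmul_layernorm(A, B, gamma, beta, row_tile=64, eps=1e-5):
    # Tile along the output rows only, so each tile yields complete
    # rows whose mean/variance can be reduced as soon as they land.
    out = np.empty((A.shape[0], B.shape[1]))
    for r in range(0, A.shape[0], row_tile):
        block = A[r:r + row_tile] @ B            # full output rows at once
        mu = block.mean(axis=-1, keepdims=True)
        var = block.var(axis=-1, keepdims=True)
        out[r:r + row_tile] = gamma * (block - mu) / np.sqrt(var + eps) + beta
    return out
```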
In Fig. 3, we take a deeper look into the performance implications of fusion-optimized scheduling for the BERT-Base encoder. We model the total latency of each adjacent pair of matmul and LayerNorm/Softmax operations via Timeloop [32], with the target hardware being the I-BERT-modified Gemmini described in Sec. II. Opportunities for overlapping computations include: (1) the MHA query $\times$ key matmul and the following Softmax; (2) the MHA $W_{\text{out}}$ projection and the following LayerNorm; and (3) the FFN $W_{2}$ projection and the following LayerNorm. The two scheduling strategies we compare are: (1) fusion-optimized scheduling and (2) Gemmini's default heuristic-based scheduler, which greedily maximizes loop tile factors at the local SRAM level for each of the three matmul dimensions. We refer to this second, default scheduling approach as non-fused scheduling.



Fig. 3: (Left) Impact of fusion-optimized scheduling for MHA execution. Hiding the Softmax latency via fusion-optimized scheduling improves overall MHA latency by 78%, but overlapping the $W_{\text{out}}$ projection with LayerNorm can hurt total latency. (Right) Impact of fusion-optimized scheduling for the FFN matmul, which enables latency hiding of the LayerNorm operation. We observe that fusion-optimized scheduling hurts total latency by 27%. In both cases, we assume an input sequence length of 512 and an accumulator size of 256 kB.

The left plot of Fig. 3 showcases the promise of matmul and nonlinear operator fusion within the MHA. With Gemmini on-chip scratchpad and accumulator SRAM sizes of 256 kB, we observe that it is advantageous to fuse the query $\times$ key matmuls with Softmax for each attention head and thereby hide the relatively high latency of executing the Softmax operation. Assuming an input sequence length of 512, the Softmax latency is significant compared to the matmul, taking up around 78% of the total cycles and contributing greatly to the total latency.

On the other hand, the right plot of Fig. 3 shows the results of matmul and LayerNorm overlapping in the FFN $W_{2}$ projection. Here, we observe that fusion-optimized scheduling worsens total latency by 27%. When scheduling the FFN, we find that at the BERT-Base scale it is consistently favorable to overlap the MHA query $\times$ key with the ensuing Softmax, but consistently disadvantageous to chain the FFN $W_{2}$ projection matmul with LayerNorm. This is in contrast with previous studies on GPU kernel fusion for Transformers [11], [34], and it highlights how scheduling for Transformer matmuls becomes more complex when targeting different styles of custom hardware designs, including the Gemmini accelerator.

## IV. NEURAL ARCHITECTURE OPTIMIZATION

Another important avenue in the full-stack optimization of DNNs is optimizing DNN architectures and tailoring them to specific hardware platforms. However, the exponential search space of DNN architectures often makes it challenging to find an optimal architecture, even without considering the underlying hardware. To address this issue, automated neural architecture search (NAS) methods have been proposed to adapt DNNs to given hardware constraints. In this regard, we apply hardware-aware NAS to search for Transformer architectures with better efficiency and performance trade-offs on the Gemmini-driven accelerator.



Fig. 4: (Left) EDP-perplexity, (Middle) latency-perplexity, and (Right) energy-perplexity plots of the Transformer architectures found via evolutionary search on our Gemmini hardware configuration. Lower perplexity indicates better performance of the trained models. For comparison, we additionally plot lines illustrating +0.1 and +1 point perplexity degradation.

1) Experiment Setup: As a baseline architecture, we use a 6-layer Transformer with all other model configurations identical to BERT-Base. We use language modeling on WikiText-2 [30] as the training objective. To evaluate model performance, we measure perplexity on the validation examples, where lower scores indicate better performance. The stand-alone baseline model was trained for 50 epochs with the Adam optimizer and linear learning rate scheduling with a peak learning rate chosen from $\{5, 2, 1, 0.5\} \times 10^{-5}$. We use a sequence length of 512 and a batch size of 16.

For NAS, we adopt the BigNAS [46] strategy to train a supernet using the same training hyperparameters as the stand-alone training. The NAS search space comprises various combinations of the number of layers in $\{3, 4, 5, 6\}$, number of heads in $\{4, 6, 8, 10, 12\}$, hidden dimension in $[384, 768]$, and FFN dimension in $[768, 3072]$. Subsequently, we run evolutionary search for 40 iterations with a population size of 40 and a mutation probability of 0.2 to find optimal subnets within the fully trained supernet. After every iteration, only the subnets that are Pareto-optimal in EDP (energy-delay product) and perplexity are retained. To measure the hardware cost, we use a lookup-table-based method for quickly assessing the latency and energy consumption of each subnet on the target hardware, instead of time-consuming RTL simulation. The lookup table contains Timeloop-simulated [32] latency and energy numbers for each operation, which are summed to estimate end-to-end values for entire subnets. After the evolutionary search, the Pareto-optimal subnets are evaluated with an RTL simulator to obtain a more precise latency estimate. For the energy measure, we continue to use the numbers from Timeloop. For the target hardware, we use Gemmini with the optimizations applied in Sec. II.
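The search loop itself is compact; below is a hedged sketch of the Pareto-filtered evolutionary search described above, where `mutate_cfg` and `lut_cost` are hypothetical stand-ins for the config mutator and the lookup-table cost model returning an (EDP, perplexity) pair.

```python
import random

def pareto_front(pop, cost):
    # Keep configs that are not dominated in both EDP and perplexity.
    scored = [(cfg, cost(cfg)) for cfg in pop]
    return [cfg for cfg, (e, p) in scored
            if not any(e2 <= e and p2 <= p and (e2, p2) != (e, p)
                       for _, (e2, p2) in scored)]

def evolutionary_search(seeds, mutate_cfg, lut_cost,
                        iters=40, pop_size=40, p_mut=0.2):
    # lut_cost(cfg) -> (EDP estimated from the latency/energy lookup
    # table, perplexity of the subnet sampled from the trained supernet).
    pop = list(seeds)
    for _ in range(iters):
        children = [mutate_cfg(random.choice(pop), p_mut)
                    for _ in range(pop_size)]
        pop = pareto_front(pop + children, lut_cost)
    return pop
```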
2) Experiment Results: We show the NAS Pareto-frontier results for EDP, latency, and energy in Fig. 4 (blue curves), where each point corresponds to a different Transformer architecture found by the evolutionary search algorithm. Additionally, we plot the stand-alone trained baseline Transformer model as a reference ($\times$ mark). As can be seen in the EDP plot (Fig. 4, Left), the NAS framework obtains multiple Transformer architectures with better hardware-cost-to-perplexity trade-offs; that is, it finds architectures with similar or even better perplexity than the baseline at smaller hardware cost.

Fig. 4 (Middle and Right) further illustrates latency and energy separately. As one can see, it is possible to attain a $1.4\times$ reduction in latency versus the baseline Transformer with 0.1 point perplexity degradation. If one can tolerate 1 point of perplexity degradation, latency can be reduced by $2.4\times$. With regard to energy, one can attain a $1.6\times$ improvement at 0.1 point perplexity degradation, and $4.4\times$ when allowing 1 point of degradation. Taking both together, it is possible to reduce EDP by $2.2\times$ with just 0.1 point perplexity degradation, and by $10.6\times$ with 1 point perplexity degradation. These examples illustrate the power of co-design in allowing practitioners to choose a combination that best matches their needs. It is important to note that this represents a single run of our co-design methodology on a specific hardware platform, and results may vary depending on the target hardware and optimization goals.
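Note that the reported EDP gains are simply the product of the separate energy and latency gains, since the energy-delay product multiplies the two:

$$\mathrm{EDP} = E \times D \;\Rightarrow\; 1.6 \times 1.4 \approx 2.2\times \quad\text{and}\quad 4.4 \times 2.4 \approx 10.6\times .$$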
## V. CONCLUSION

While Transformer models have shown significant performance improvements, their growing size and run-time complexity present a critical challenge to efficient inference. In this work, we have demonstrated the benefits of a full-stack approach by leveraging co-design and co-optimization techniques across the stack. We adapted a CNN-oriented accelerator for efficient Transformer inference by supporting integer-only nonlinear operations [26] and rebalancing the memory hierarchy, which yielded a $39.6\times$ latency reduction. We also applied NAS to search for Pareto-optimal Transformer architectures in the trade-off between EDP and perplexity, leading to a $10.6\times$ EDP reduction with minimal performance drop. Altogether, we have exhibited an $88.7\times$ latency improvement without a noticeable performance drop compared to a naive implementation without full-stack considerations. We have also demonstrated that, unlike in CNNs, nonlinear operations in Transformers require careful consideration when performing operator fusion on custom accelerators, e.g., systolic-array-based architectures. We expect further improvement from taking this into account when designing the end-to-end full-stack optimization pipeline.
## REFERENCES

[1] "Edge TPU," https://cloud.google.com/edge-tpu/, accessed: 2018-12-05.

[2] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., "TensorFlow: A system for large-scale machine learning," in USENIX Symposium on Operating Systems Design and Implementation (OSDI), 2016.

[3] D. Abts, J. Kim, G. Kimmell, M. Boyd, K. Kang, S. Parmar, A. Ling, A. Bitar, I. Ahmed, and J. Ross, "The Groq software-defined scale-out tensor streaming multiprocessor: From chips-to-systems architectural overview," in IEEE Hot Chips Symposium, 2022, pp. 1-69.

[4] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," arXiv preprint arXiv:2005.14165, 2020.

[5] T. Chen, M. Li, Y. Li, M. Lin, N. Wang, M. Wang, T. Xiao, B. Xu, C. Zhang, and Z. Zhang, "MXNet: A flexible and efficient machine learning library for heterogeneous distributed systems," arXiv preprint arXiv:1512.01274, 2015.

[6] T. Chen, T. Moreau, Z. Jiang, L. Zheng, E. Yan, H. Shen, M. Cowan, L. Wang, Y. Hu, L. Ceze et al., "TVM: An automated end-to-end optimizing compiler for deep learning," in 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), 2018, pp. 578-594.

[7] T. Chen, Z. Du, N. Sun, J. Wang, C. Wu, Y. Chen, and O. Temam, "DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning," in Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '14). New York, NY, USA: ACM, 2014, pp. 269-284.

[8] Y.-H. Chen, J. Emer, and V. Sze, "Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks," in Proceedings of the International Symposium on Computer Architecture (ISCA), 2016.

[9] Y.-H. Chen, T.-J. Yang, J. Emer, and V. Sze, "Eyeriss v2: A flexible accelerator for emerging deep neural networks on mobile devices," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2019.

[10] Y. Chen, T. Luo, S. Liu, S. Zhang, L. He, J. Wang, L. Li, T. Chen, Z. Xu, N. Sun, and O. Temam, "DaDianNao: A machine-learning supercomputer," in Proceedings of the International Symposium on Microarchitecture (MICRO), 2014.

[11] J. Choi, H. Li, B. Kim, S. Hwang, and J. H. Ahn, "Accelerating transformer networks through recomposing softmax layers," in International Symposium on Workload Characterization (IISWC), 2021.

[12] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann et al., "PaLM: Scaling language modeling with pathways," arXiv preprint arXiv:2204.02311, 2022.

[13] L. Clark, V. Vashishtha, L. Shifren, A. Gujia, S. Sinha, B. Cline, C. Ramamurthya, and G. Yeric, "ASAP7: A 7-nm FinFET predictive process design kit," Microelectronics Journal, 2016.

[14] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.

[15] N. Du, Y. Huang, A. M. Dai, S. Tong, D. Lepikhin, Y. Xu, M. Krikun, Y. Zhou, A. W. Yu, O. Firat et al., "GLaM: Efficient scaling of language models with mixture-of-experts," in International Conference on Machine Learning. PMLR, 2022, pp. 5547-5569.

[16] Z. Du, R. Fasthuber, T. Chen, P. Ienne, L. Li, T. Luo, X. Feng, Y. Chen, and O. Temam, "ShiDianNao: Shifting vision processing closer to the sensor," in 2015 ACM/IEEE 42nd Annual International Symposium on Computer Architecture (ISCA), 2015, pp. 92-104.

[17] H. Esmaeilzadeh, A. Sampson, L. Ceze, and D. Burger, "Neural acceleration for general-purpose approximate programs," in Proceedings of the International Symposium on Microarchitecture (MICRO), 2012.

[18] M. Gao, J. Pu, X. Yang, M. Horowitz, and C. Kozyrakis, "TETRIS: Scalable and efficient neural network acceleration with 3D memory," in Proceedings of the International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2017.

[19] H. Genc, S. Kim, A. Amid, A. Haj-Ali, V. Iyer, P. Prakash, J. Zhao, D. Grubb, H. Liew et al., "Gemmini: Enabling systematic deep-learning architecture evaluation via full-stack integration," in Proceedings of the 58th Annual Design Automation Conference (DAC), 2021.

[20] S. Han, X. Liu, H. Mao, J. Pu, A. Pedram, M. A. Horowitz, and W. J. Dally, "EIE: Efficient inference engine on compressed deep neural network," SIGARCH Comput. Archit. News, vol. 44, no. 3, Jun. 2016.

[21] J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. d. L. Casas, L. A. Hendricks, J. Welbl, A. Clark et al., "Training compute-optimal large language models," arXiv preprint arXiv:2203.15556, 2022.

[22] M. Horowitz, "1.1 Computing's energy problem (and what we can do about it)," in 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2014, pp. 10-14.

[23] J. Hruska, "New Movidius Myriad X VPU packs a custom neural compute engine," 2017.

[24] Y. Jia, E. Shelhamer, J. Donahue, S. Karayev, J. Long, R. B. Girshick, S. Guadarrama, and T. Darrell, "Caffe: Convolutional architecture for fast feature embedding," CoRR, vol. abs/1408.5093, 2014.

[25] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers et al., "In-datacenter performance analysis of a tensor processing unit," in 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), June 2017, pp. 1-12.

[26] S. Kim, A. Gholami, Z. Yao, M. W. Mahoney, and K. Keutzer, "I-BERT: Integer-only BERT quantization," in International Conference on Machine Learning. PMLR, 2021, pp. 5506-5518.

[27] S. Knowles, "Graphcore," in IEEE Hot Chips Symposium, 2021, pp. 1-25.

[28] H. Liao, J. Tu, J. Xia, and X. Zhou, "DaVinci: A scalable architecture for neural network computing," in IEEE Hot Chips Symposium, 2019, pp. 1-44.

[29] S. Lie, "Cerebras architecture deep dive: First look inside the HW/SW co-design for deep learning," in IEEE Hot Chips Symposium, 2022, pp. 1-34.

[30] S. Merity, C. Xiong, J. Bradbury, and R. Socher, "Pointer sentinel mixture models," 2016.

[31] NVIDIA, "TensorRT," https://developer.nvidia.com/tensorrt, 2018.

[32] A. Parashar, P. Raina, Y. S. Shao, Y.-H. Chen, V. A. Ying, A. Mukkara, R. Venkatesan, B. Khailany, S. W. Keckler, and J. Emer, "Timeloop: A systematic approach to DNN accelerator evaluation," in 2019 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS). IEEE, 2019, pp. 304-315.

[33] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., "PyTorch: An imperative style, high-performance deep learning library," Advances in Neural Information Processing Systems, vol. 32, 2019.

[34] S. Pati, S. Aga, N. Jayasena, and M. D. Sinclair, "Demystifying BERT: Implications for accelerator design," in International Symposium on Workload Characterization (IISWC), 2021.

[35] J. Pei, L. Deng, S. Song, M. Zhao, Y. Zhang, S. Wu, G. Wang, Z. Zou, Z. Wu, W. He et al., "Towards artificial general intelligence with hybrid Tianjic chip architecture," Nature, vol. 572, no. 7767, pp. 106-111, 2019.

[36] R. Prabhakar and S. Jairath, "SambaNova SN10 RDU: Accelerating software 2.0 with dataflow," in IEEE Hot Chips Symposium, 2021, pp. 1-37.

[37] A. Radford, K. Narasimhan, T. Salimans, and I. Sutskever, "Improving language understanding by generative pre-training," 2018.

[38] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners," OpenAI blog, vol. 1, no. 8, p. 9, 2019.

[39] J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young et al., "Scaling language models: Methods, analysis & insights from training Gopher," arXiv preprint arXiv:2112.11446, 2021.

[40] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," arXiv preprint arXiv:1910.10683, 2019.

[41] A. Sabne, "XLA: Compiling machine learning for peak performance," 2020.

[42] T. L. Scao, A. Fan, C. Akiki, E. Pavlick, S. Ilić, D. Hesslow, R. Castagné, A. S. Luccioni, F. Yvon, M. Gallé et al., "BLOOM: A 176B-parameter open-access multilingual language model," arXiv preprint arXiv:2211.05100, 2022.

[43] F. Sijstermans, "The NVIDIA deep learning accelerator," in Hot Chips, 2018.

[44] S. Smith, M. Patwary, B. Norick, P. LeGresley, S. Rajbhandari, J. Casper, Z. Liu, S. Prabhumoye, G. Zerveas, V. Korthikanti et al., "Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model," arXiv preprint arXiv:2201.11990, 2022.

[45] E. Talpes, D. D. Sarma, G. Venkataramanan, P. Bannon, B. McGee, B. Floering, A. Jalote, C. Hsiong, S. Arora, A. Gorti et al., "Compute solution for Tesla's full self-driving computer," IEEE Micro, vol. 40, no. 2, pp. 25-35, 2020.

[46] J. Yu, P. Jin, H. Liu, G. Bender, P.-J. Kindermans, M. Tan, T. Huang, X. Song, R. Pang, and Q. Le, "BigNAS: Scaling up neural architecture search with big single-stage models," in European Conference on Computer Vision. Springer, 2020, pp. 702-717.
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/GtyQbLUUagE/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,87 @@
| 1 |
+
§ FULL STACK OPTIMIZATION OF TRANSFORMER INFERENCE
|
| 2 |
+
|
| 3 |
+
ISCA 2023 Submission #NaN - Confidential Draft - Do NOT Distribute!!
|
| 4 |
+
|
| 5 |
+
Abstract-Recent advances in state-of-the-art neural network architecture design have been moving toward Transformer models. These models achieve superior accuracy across a wide range of applications in computer vision, natural language processing, and speech recognition. This trend has been consistent over the past several years since Transformer models were originally introduced. However, the amount of compute and bandwidth required for inference of recent Transformer models is growing at a significant rate, and this has made their deployment in latency-sensitive applications challenging. As such, there has been an increased focus on making Transformer models more efficient, with methods that range from changing the architecture design, all the way to developing dedicated domain-specific accelerators.
|
| 6 |
+
|
| 7 |
+
In this work, we pursue a full-stack approach to optimizing Transformer inference. We analyze the implications of the Transformer architecture on hardware, including the impact of nonlinear operations such as Layer Normalization, Softmax, and GELU, as well as linear operations, and we use this analysis to optimize a fixed Transformer architecture. We assess the challenges with finding the right mapping and scheduling of operations for Transformer models, and pursue neural architecture search to further optimize the Transformer network. We find that a full-stack co-design approach with the aforementioned methods can result in up to ${88.7} \times$ end-to-end speedup with minimal performance degradation for Transformer inference.
|
| 8 |
+
|
| 9 |
+
§ I. INTRODUCTION
|
| 10 |
+
|
| 11 |
+
Deep learning models have scaled up to billions of parameters and billions of multiply-accumulate operations during both training and inference. As a result, there has been a growing interest in computing these models efficiently and in deploying these compute and memory-intensive workloads on resource-constrained edge devices. These edge devices have tight energy and memory constraints, and the corresponding applications that leverage deep learning models also often have real-time latency constraints.
|
| 12 |
+
|
| 13 |
+
The demand for fast and efficient computation, coupled with the characteristics of deep learning workloads that involve a small set of distinct operations with substantial data reuse, have led to the use of hardware accelerators. A multitude of enterprise deep learning accelerators, such as [1], [3], [17], [23], [25], [27]-[29], [36], [43], [45], have been developed and integrated into commodity hardware by industry in the past decade. This parallels many research accelerators developed in academia [7]-[10], [16], [18]-[20], [35]. Together with hardware accelerator development, the software frameworks [2], [5], [24], [33] and compilers [6], [31], [41] for deploying various deep learning algorithms have also enhanced and matured. These tools enable the execution of deep learning algorithms on accelerators, and they perform mapping optimizations to improve the performance and efficiency of the full deep learning pipeline. Nonetheless, the fast-evolving deep learning algorithms still keep introducing new demands for hardware and software support, as well as their co-optimization, to satisfy various deployment constraints.
|
| 14 |
+
|
| 15 |
+
The recent rise in popularity of Transformers and large language models [4], [12], [14], [15], [21], [37]-[40], [42], [44] for solving various natural language processing (NLP) tasks presents a brand new set of challenges in the design of accelerators as well as frameworks. There has been an increased focus on making Transformer inference more efficient, especially due to their growing size and run-time complexity. However, there is still a lack of understanding regarding the workload characteristics of Transformer architectures, and thus of the design principles necessary for effectively running these models, when compared to the more well-known convolutional neural network (CNN) architectures. For instance, compared to the conventional CNN-focused design, Transformers are mostly composed of matrix multiplications (matmuls) together with memory-intensive nonlinear operations. In addition, the computational graph and dataflow of Transformer models are more complex than that of CNNs, with more types of operation nodes, as well as more dataflow splits and concatenations. All these challenges require us to undertake a comprehensive analysis of the current hardware and software solutions as well as the various design trade-offs for Transformer inference.
|
| 16 |
+
|
| 17 |
+
Our analysis yielded several key findings:
|
| 18 |
+
|
| 19 |
+
* We adapt Gemmini [19], which was originally designed for CNN workloads, for Transformer inference. Without modifications, the primary bottleneck for running Transformers on CNN accelerators is the time spent on floating-point non-linear operations. However, by adapting Gemmini to support an integer-only BERT variant [26], and tuning the memory configuration, we improve performance by ${39.6} \times$ .
|
| 20 |
+
|
| 21 |
+
* Fusing BatchNorm with the neighboring convolution in CNNs is straightforward. However, the benefits of fusing operations in the Transformer architecture with the preceding matmuls depends on the particular operation as it can impose constraints on the mapping, leading to runtime costs that outweigh the gains from operator fusion.
|
| 22 |
+
|
| 23 |
+
* We apply automated neural architecture search (NAS) to search for efficient and high-performance Transformer architectures on Gemmini-driven hardware. NAS finds an architecture that improves EDP by ${10.6} \times$ with minimal degradation on target benchmark. Combined with the hardware improvement, we achieve ${88.7} \times$ end-to-end speedup.
|
| 24 |
+
|
| 25 |
+
§ II. HARDWARE ARCHITECTURE OPTIMIZATION
|
| 26 |
+
|
| 27 |
+
We first illustrate how architects familiar with mainstream accelerators for convolutional, vision-based workloads can design state-of-the-art Transformer accelerators. We start with a fairly typical CNN accelerator generated by the Gemmini [19] accelerator-generator, optimized primarily for ResNet50-like workloads, and we discuss changes we made to this accelerator and its software stack to efficiently support Transformer workloads such as BERT. Throughout this section, we use BERT-Base as a workload.
|
| 28 |
+
|
| 29 |
+
< g r a p h i c s >
|
| 30 |
+
|
| 31 |
+
Fig. 1: Map of the computations performed in (Top) the multihead attention (MHA) module and (Bottom) the feed-forward network (FFN) module in the Transformer encoder block
|
| 32 |
+
|
| 33 |
+
1) Baseline Accelerator: We first generate a fairly typical CNN accelerator with a ${16} \times {16}$ systolic array and the weight-stationary dataflow using the Gemmini accelerator-generator. The 8-bit integer weights and inputs are stored in a ${256}\mathrm{{kB}}$ local scratchpad memory, and the 32-bit partial sums are stored in a dual-ported ${64}\mathrm{\;{kB}}$ accumulator SRAM which performs matrix additions. When DNN layers are too large to fit into the local scratchpad, they fall back onto an external L2 cache and DRAM which are shared with CPUs and other accelerators on the system-on-chip (SoC). A host CPU tiles such layers to compute the full outputs. The baseline accelerator produced by Gemmini incorporates peripheral circuitry that enables the execution of ReLU and max-pool operations, alongside integer-float multipliers that facilitate the scaling of 32-bit partial sums into 8-bit inputs for the subsequent layer. Native support for these operations is important, as it eliminates the necessity of offloading such operations to the host CPUs, thereby circumventing the costly transfers of activations between DRAM or outer caches and the local scratchpad.
|
| 34 |
+
|
| 35 |
+
Finally, note that this baseline CNN accelerator does not include any Transformer-specific features. In particular, there is no support for nonlinear normalization operations such as GELU, Softmax, or LayerNorm. Therefore, although it achieves real-time or near-real-time performance on end-to-end CNN workloads, the performance on Transformer workloads such as BERT is severely limited [19] as will be discussed in more detail.
|
| 36 |
+
|
| 37 |
+
2) Performance Bottlenecks: Our observation has revealed that the baseline CNN accelerator, when deployed for Transformer inference, exhibits $< 1\%$ utilization of its functional units. Although individual matmuls exhibit 74% utilization, the performance is severely impeded by the nonlinear operations that need to be executed on the host CPU as they are not natively supported by the accelerator. This is further exacerbated by the fact that the nonlinear operations necessitate the use of floating-point arithmetic. Not only it is less energy and latency efficient than their integer counterparts [22], it also entails dequantization and re-quantization of the activations. These overheads account for ${96}\%$ of the overall execution time (Fig. 2). Given that the majority of FLOPs in Transformer inference are matmuls, the time spent on the nonlinear operations in the baseline accelerator is far from the theoretical optimal, unless further optimizations are implemented.
|
| 38 |
+
|
| 39 |
+
In contrast to the convolutions in CNNs, which exhibit high arithmetic intensity, Transformers mostly comprise matmuls, often with small and/or rectangular matrices, which translate to lower arithmetic intensities and different optimal tiling strategies. This indicates that the memory hierarchy and memory bandwidth of our baseline CNN accelerator need to be recalibrated for more efficient Transformer inference.
|
| 40 |
+
|
| 41 |
+
3) Memory Configuration Re-adjustment: We have observed that the performance of BERT matmul operations can be significantly improved by adjusting the sizes of the input/weight scratchpad and the partial sum accumulator. Specifically, we have found that larger accumulators with higher output-reuse are more suitable for several matmuls in Transformers, such as the query $\times$ key matmuls, which have $l \times l$ output activation matrices which can be much larger than the $l \times d/h$ input matrices for $l,d$ , and $h$ sequence length, hidden dimension, and number of heads, respectively. Based on this observation, we have modified the CNN-optimized memory configuration of our baseline accelerator by reducing the size of the scratchpad from ${256}\mathrm{\;{kB}}$ to ${64}\mathrm{\;{kB}}$ , and increasing the size of the accumulator from ${64}\mathrm{\;{kB}}$ to ${256}\mathrm{\;{kB}}$ . Importantly, these changes do not result in an increase in the total SRAM capacity or the total area; however, they result in a substantial ${36}\%$ reduction in total matmul latency.
|
| 42 |
+
|
| 43 |
+
4) Hardware-Software Co-Design: To alleviate the overhead incurred by runtime quantization and dequantization, as well as the offloading of nonlinear operations to the CPU, we have transitioned our baseline Transformer workload from a naive BERT implementation, where only matmuls are quantized, to an integer-only BERT variant known as I-BERT [26]. I-BERT substitutes floating-point nonlinear operations with integer polynomial approximations, which can be implemented faster and more efficiently in specialized accelerators. To incorporate I-BERT, we add new integer implementations of I-BERT's GELU, LayerNorm, and Softmax variants to our baseline CNN accelerator. The 32-bit matmul results residing in the accumulator are fed into a newly added "normalization unit" which computes reduction operations (e.g. sum, sum-of-square, max, etc.) which are used by LayerNorm and Softmax. Multiple passes of accumulator reads are required to compute all the reductions in these operations. Subsequentially, the mat-mul results in the accumulator undergo a final read operation to be fed into a set of 16 activation units, which compute I-BERT's non-linear variants in parallel.
|
| 44 |
+
|
| 45 |
+
With these new features, overall end-to-end BERT inference performance improved by ${39.6} \times$ over the baseline accelerator's initial performance. As Fig. 2 illustrates, the computational bottleneck once again became the matmuls rather than normalization or activation functions. Quantization and dequantization no longer become necessary and GELU can be trivially fused with the preceding matmuls, so that they become one pipelined operation. When synthesized with the ASAP7 PDK [13], the new hardware units increased the total area consumption of the accelerator by only ${14}\%$ , and the GELU, LayerNorm, and Softmax operations increased the power consumption of a BERT inference by only 9.3%.
|
| 46 |
+
|
| 47 |
+
< g r a p h i c s >
|
| 48 |
+
|
| 49 |
+
Fig. 2: The time breakdown of a BERT inference with a sequence-length of 512, when running on (Left) the baseline CNN accelerator, and (Middle) the accelerator with I-BERT's hardware/software features incorporated. (Right) The time breakdown with different sequence lengths after the change. For all sequence lengths, the total execution time is dominated by matmuls.
|
| 50 |
+
|
| 51 |
+
§ III. SCHEDULING OPTIMIZATION
|
| 52 |
+
|
| 53 |
+
In Sec. II, we have demonstrated that the nonlinear operations in Transformers introduce challenges to efficient accelerator design. We further find that these operations present non-trivial challenges to the scheduling problem as well.
|
| 54 |
+
|
| 55 |
+
Generally in DNN scheduling, it is an enticing strategy to fuse relatively high-arithmetic-intensity matmuls with the following low-arithmetic-intensity normalization operations. For example, execution schedulers for CNN-type accelerators often fuse convolutions with ReLU or max-pool operations. This strategy is especially applicable in the case of quantized workloads, where partial sums awaiting normalization are often of higher bitwidth than the final normalized outputs.
|
| 56 |
+
|
| 57 |
+
Similarly, for Transformer encoders, we could overlap the execution of normalization operations (LayerNorm and Soft-max) with their preceding matmuls. However, this strategy may require hardware/software changes. First in the case of DNN accelerators like Gemmini, additional hardware support for directly accessing partial sums by normalization operation units may be required. Second, appropriate constraints on the matmul execution schedule are necessary. In particular, the tiling factor size of either output dimension of the matmul must be maximized, so that rows/columns are immediately ready and stored at the Gemmini accumulator scratchpad for computing the mean and standard deviation. We refer to this alternate scheduling approach as fusion-optimized scheduling.
|
| 58 |
+
|
| 59 |
+
In Fig. 3, we take a deeper look into the performance implications of fusion-optimized scheduling for the BERT-Base encoder. We model the total latency of each adjacent pair of matmul and LayerNorm/Softmax operations via Timeloop [32] with the target hardware being the I-BERT modified Gemmini described in Sec. II. Opportunities for overlapping computations include: (1) the MHA query $\times$ key matmul and following Softmax; (2) MHA ${W}_{\text{ out }}$ projection and following LayerNorm; and (3) FFN ${W}_{2}$ projection and following LayerNorm. The two scheduling strategies we compare are: (1) fusion-optimized scheduling and (2) Gemmini's default heuristic-based scheduler, which greedily maximizes loop tile factors at the local SRAM level for each of the three matmul dimensions. We refer to the second, default scheduling approach as non-fused scheduling.
|
| 60 |
+
|
| 61 |
+
< g r a p h i c s >
|
| 62 |
+
|
| 63 |
+
Fig. 3: (Left) Impact of fusion-optimized scheduling for MHA execution. Hiding the Softmax latency via fusion-optimized scheduling improves overall MHA latency by ${78}\%$ , but overlapping ${W}_{\text{ out }}$ projection with LayerNorm can hurt total latency. (Right) Impact of fusion-optimized scheduling for FFN matmul that enables latency hiding of the LayerNorm operation. We observe that fusion-optimized scheduling hurts total latency by 27%. In both cases, we assume an input sequence length of 512 and accumulator size of ${256}\mathrm{{kB}}$ .
|
| 64 |
+
|
| 65 |
+
The left plot of Fig. 3 showcases the promises of mat-mul and non-linear operator fusion within the MHA. With Gemmini on-chip scratchpad and accumulator SRAM sizes of ${256}\mathrm{{KB}}$ , we observe that it is advantageous to fuse query $\times$ key matmuls with Softmax for each attention head and thereby hide the relatively high latency of executing the Softmax operation. Assuming an input sequence length of 512, the Softmax latency is significant compared to the matmul, taking up around ${78}\%$ of the total cycles and contributes greatly to the total latency.
|
| 66 |
+
|
| 67 |
+
On the other hand, the right plot of Fig. 3 shows the results on matmul and LayerNorm overlapping in the FFN ${W}_{2}$ projection. Here, we observe that fusion-optimized scheduling worsens total latency by ${27}\%$ . When scheduling the FFN, we find that at the BERT-Base scale, it is consistently favorable to overlap the MHA query $\times$ key with the ensuing Softmax but consistently disadvantageous to chain the FFN ${W}_{2}$ projection matmul with LayerNorm. This is in contrast with previous studies on GPU kernel fusion for Transformers [11], [34], and it highlights how scheduling for Transformer matmuls becomes more complex when targeting different styles of custom hardware designs, including the Gemmini accelerator.
|
| 68 |
+
|
| 69 |
+
§ IV. NEURAL ARCHITECTURE OPTIMIZATION
|
| 70 |
+
|
| 71 |
+
Another important avenue in full stack optimization of DNNs is optimizing DNN architectures and tailoring them for specific hardware platforms. However, the exponential search space of DNN architectures often makes it challenging to find an optimal architecture, even without considering the underlying hardware. To address this issue, automated neural architecture search (NAS) methods have been proposed to adapt DNNs for given hardware constraints. In this regard, we apply hardware-aware NAS to search for Transformer architectures that are optimal on the Gemmini-driven accelerator with better efficient and performance trade-offs.
|
| 72 |
+
|
| 73 |
+
< g r a p h i c s >
|
| 74 |
+
|
| 75 |
+
Fig. 4: (Left) EDP-perplexity, (Middle) Latency-perplexity, and (Right) Energy-perplexity plots of the Transformer architectures found via evolutionary search on our Gemmini hardware configuration. Lower perplexity indicates better performance of the trained models. For better comparison, we additionally plot lines to illustrate $+ {0.1}$ and +1 point perplexity degradation.
|
| 76 |
+
|
| 77 |
+
1) Experiment Setup: As a baseline architecture, we use a 6-layer Transformer architecture with all other model configurations remaining the same as BERT-Base. We use language modeling on the WikiText-2 [30] as a training objective. To evaluate the model performance, we measured perplexity on the validation examples, where lower scores indicate better performance. The stand-alone baseline model was trained for 50 epochs with the Adam optimizer and a linear learning rate scheduling with a peak learning rate of $\{ 5,2,1,{0.5}\} \times {10}^{-5}$ . We use a sequence length of 512 and a batch size of 16 .
|
| 78 |
+
|
| 79 |
+
For NAS, we adopt the BigNAS [46] strategy to train a supernet using the same training hyperparameters as the stand-alone training. The NAS search space is comprised of various combinations of the number of layers in $\{ 3,4,5,6\}$ , number of heads in $\{ 4,6,8,{10},{12}\}$ , hidden dimension in [384,768], and FFN dimension in $\left\lbrack {{768},{3072}}\right\rbrack$ . Subsequently, we use evolutionary search for 40 iterations with a population size of 40 and mutation probability of 0.2 to search optimal subnets out of the fully trained supernet. After every iteration, only the subnets that are Pareto-optimal in EDP (energy-delay-product) and perplexity are retained. To measure the hardware cost, we use a lookup table-based method for quickly assessing the latency and energy consumption of each subnet on the target hardware, instead of using time-consuming RTL simulation. The lookup table contains Timeloop [32] simulated latency and energy numbers for each operation, which are then summed up to estimate the end-to-end values for the entire subnets. After the evolutionary search, the Pareto-optimal subnets are then evaluated with an RTL simulator to obtain a more precise estimation of the latency. For the energy measure, we continue to use the numbers from Timeloop. For the target hardware, we use Gemmini with the optimizations applied in Sec. II.
|
| 80 |
+
|
| 81 |
+
2) Experiment Results: We show the NAS Pareto-frontier results for EDP, latency and energy in Fig. 4 (blue curves) where each point corresponds to a different Transformer architecture found from the evolutionary search algorithm. Additionally, we plot the stand-alone trained baseline Transformer model trained as a reference ( $\times$ mark). As can be seen in the EDP plot (Fig. 4 Left), the NAS framework allows us to obtain multiple Transformer architectures with better hardware cost to perplexity trade-offs. That is, it finds architectures with similar or even better perplexity, as compared to the baseline with smaller hardware costs.
|
| 82 |
+
|
| 83 |
+
Fig. 4 (Middle and Right) further illustrates latency and energy separately. As one can see, it is possible to attain a $1.4\times$ reduction in latency versus the baseline Transformer with 0.1 point perplexity degradation. If one can tolerate 1 point of perplexity degradation, latency can be reduced by $2.4\times$. With regard to energy, one can attain a $1.6\times$ improvement at 0.1 point perplexity degradation, and $4.4\times$ when allowing 1 point. Taking both together, EDP can be reduced by $2.2\times$ with just 0.1 point perplexity degradation, and by $10.6\times$ with 1 point. These examples illustrate the power of co-design in allowing practitioners to choose the combination that best matches their needs. It is important to note that this represents a single run of our co-design methodology on a specific hardware platform, and results may vary depending on the target hardware and optimization goals.

§ V. CONCLUSION
While Transformer models have shown significant performance improvements, their growing size and run-time complexity present a critical challenge to efficient inference. In this work, we have demonstrated the benefits of a full-stack approach by leveraging co-design and co-optimization techniques across the stack. We adapted a CNN-oriented accelerator to efficient Transformer inference by supporting integer-only nonlinear operations [26] and rebalancing the memory hierarchy, which yielded a $39.6\times$ latency reduction. We also applied NAS to search for Pareto-optimal Transformer architectures under the trade-off between EDP and perplexity, leading to a $10.6\times$ EDP reduction with minimal performance drop. Altogether, we have exhibited an $88.7\times$ latency improvement without a noticeable performance drop compared to a naive implementation without full-stack considerations. We have also demonstrated that, unlike in CNNs, nonlinear operations in Transformers require careful consideration when performing operator fusion on custom accelerators, e.g., systolic-array-based architectures. We expect further improvement once this is taken into account in the design of the end-to-end full-stack optimization pipeline.

papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/PibYaG2C7An/Initial_manuscript_md/Initial_manuscript.md
ADDED
# Efficient Deployment of Transformer Models on Edge TPU Accelerators: A Real System Evaluation
Abstract-Transformer models have become a dominant architecture in the world of machine learning. From natural language processing to more recent computer vision applications, Transformers have shown remarkable results and established a new state-of-the-art in many domains. However, this increase in performance has come at the cost of ever-increasing model sizes requiring more resources to deploy. Machine learning (ML) models are used in many real-world systems, such as robotics, mobile devices, and Internet of Things (IoT) devices, that require fast inference with low energy consumption. For battery-powered devices, lower energy consumption directly translates into longer battery life. To address these issues, several edge AI accelerators have been developed. Among these, the Coral Edge TPU has shown promising results for image classification while maintaining very low energy consumption. Many of these devices, including the Coral TPU, were originally designed to accelerate convolutional neural networks, making deployment of Transformers challenging. Here, we propose a methodology to deploy Transformers on Edge TPU. We provide extensive latency, power, and energy comparisons among the leading-edge devices and show that our methodology allows for real-time inference of large Transformers while maintaining the lowest power and energy consumption of the leading-edge devices on the market.
Index Terms-Tensor Processing Unit (TPU), Transformer Models, Edge AI Accelerators, BERT.
## I. INTRODUCTION
Since the introduction of Transformer models in 2017 [1], they have quickly risen to prominence in many areas, such as natural language processing and computer vision. These models have shown state-of-the-art results in a wide domain of tasks from machine translation [1] and question-answering [2] to computer vision tasks like image segmentation [3]. Many applications, such as self-driving cars, IoT devices, satellites, drones, and robots, require deploying models for real-time inference using low-power energy-constrained systems. Transformer-based models, however, often include a large number of processing layers, along with hundreds of millions of parameters. For instance, the Bidirectional Encoder Representations from Transformers (BERT) [4] models contain 109 million and 340 million parameters for the Base and Large models, respectively [5]. Therefore, deploying such massive models at the edge for real-time applications with tight restrictions on power and energy is challenging.
The surge in demand for specialized hardware for AI applications has resulted in a rapidly expanding industry of edge AI accelerators. Anticipating this trend, several companies have developed their own specialized accelerators. The NVIDIA Jetson Nano [6] is a low-cost development board for machine learning (ML) applications that employs NVIDIA TensorRT as the main driver. The Intel Movidius Neural Compute Stick 2 (NCS2) [7] is a small, low-power USB co-processor that enables the deployment of Deep Neural Networks (DNNs) and is powered by the Myriad Vision Processing Unit (VPU). Google's Coral Edge TPU is another device that leverages tensor processing units (TPUs) to accelerate ML applications. The Coral TPU is used as a co-processor on Coral's Dev Board, as well as in a USB accelerator [8] that can be integrated with tiny computers such as the Raspberry Pi. With a peak performance of four tera-operations per second (TOPS) at two TOPS/W, the Coral Edge TPU is one of the promising technologies for realizing real-time Transformer models. While several studies have used the Coral TPU to accelerate DNN applications, to the best of the authors' knowledge, no work has deployed Transformer-based models on Coral Edge TPU accelerators.

Herein, we propose a methodology to deploy Transformer models on the Coral Edge TPU. Because Transformers are often very large, training them is time-consuming, computationally expensive, and often requires very large datasets that are not always publicly available. For these reasons, it is crucial that our methodology support a wide range of existing Transformer architectures, such as Vision Transformers (ViT) [9], left-right Transformers, a.k.a. Encoder-Decoder Transformers [1], [2], and BERT-like [4] Transformers, without any need for retraining, aside from possible retraining associated with quantization. Here, we modify the computational graph to allow the model to run on the Edge TPU while remaining functionally identical to the original model. While common model optimization techniques such as pruning, knowledge distillation, hyper-parameter optimization, and neural architecture search ([10] provides an overview of these techniques) can be used to improve the size, latency, power consumption, and energy consumption of models, the focus of this paper is the efficient deployment of existing Transformer architectures on the Coral Edge TPU. Some or all of the aforementioned optimization techniques can be applied on top of our work to further improve the latency and power consumption of models. Although we focus on the BERT Transformer architecture for the main body of the work, we show that this methodology generalizes to both BERT-like and left-right Transformers.

## II. BACKGROUND
## A. Transformer Model
Transformer models can vary slightly in design, but the core architecture remains the same. Transformers use embedding layers to turn tokens into vectors of size $d_{\text{model}}$, a.k.a. the hidden size. The exact number of embedding layers varies from one model to another; for instance, BERT uses three embedding layers, as shown in Fig. 1. Transformers also employ a stack of attention heads to capture different learned attention associations using scaled dot-product attention, which maps queries and key-value pairs to outputs. Scaled dot-product attention uses a dot product between the queries (Q) and the keys (K) to compute attention scores. These scores are scaled down to a mean of 0 and a variance of 1, and the Softmax function is applied to generate weights for the values. The weights are then multiplied by the values (V) to generate the weighted attention scores for the tokens. At the end of the multi-headed attention layer, the values from each attention head are concatenated together, passed to a fully-connected (FC) layer, and then an activation function is applied.
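
For concreteness, a minimal NumPy sketch of scaled dot-product attention for a single head follows; the shapes and example sizes are illustrative, not drawn from any particular model.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d_k) arrays for one attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # scale toward unit variance
    scores -= scores.max(axis=-1, keepdims=True)    # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                              # weighted sum of the values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((128, 64)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)         # shape (128, 64)
```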
Fig. 1: BERT Architecture with $n$ encoder layers [1].
Most Transformers, including GPT-3 [2] and BERT, use the Gaussian Error Linear Unit (GELU) [11] as the activation function, which combines the non-linearity of Rectified Linear Units (ReLU) with the regularization property of Dropout [12]. The output of the FC layer is added to previous layers using a residual connection. In the Encoder, these values are passed to two FC layers, where the inner FC layer has size $d_{ff}$, a.k.a. the intermediate size, and the outer FC layer has size $d_{\text{model}}$. Again, the output is added to previous layers and normalized using residual connections. Finally, these values are passed to the next encoder layer or, if there is none, to the classification head/decoder layer. Left-right Transformers have a decoder layer that is nearly identical to the encoder layer, except it has one extra multi-headed attention layer before the feed-forward layers, called the encoder-decoder multi-headed attention. The encoder-decoder multi-headed attention is the same as the encoder multi-headed attention except that the queries come from the decoder, while the keys and values come from the encoder [1]. BERT was introduced in 2018 and builds upon prior Transformer architectures, with one key difference being bi-directionality. Unlike prior Transformer models, BERT is designed to train on both left and right contexts of text. Using a pre-trained BERT model and one additional classification layer, BERT can be fine-tuned to perform various language tasks.

Fig. 2: (a) Edge TPU architecture. (b) PE structure [13].
## B. Coral Edge TPU Architecture
In 2015, Google launched the TPU project, adopting the systolic array architecture to accelerate DNN operations [13]. The first version of Google's TPU was designed only to accelerate DNN inference in the cloud. In 2019, Google launched a smaller, low-power version of the TPU, called the Edge TPU, suited to accelerating DNN inference at the edge. The Edge TPU uses 8-bit integer (int8) multiply-and-accumulate (MAC) core units in its processing elements (PEs) [8].

In general, the systolic array architecture includes a set of processing elements formed into single- or multi-dimensional arrays that can collectively perform computation on data brought from memory without accessing it from memory multiple times. The systolic arrays developed for ML acceleration are designed to implement matrix-matrix, matrix-vector, and vector-vector multiplications, which are the dominant operations in ML workloads. Systolic arrays increase performance by reusing the values fetched from memory and reducing main memory accesses [14]. The dataflow in the systolic array is a mapping scheme that depends on the microarchitecture of the PEs and determines how the input data is fed to the array and how the partial results and outputs are generated and stored. Google adopted the weight-stationary dataflow in their cloud TPU and Edge TPU designs [15], in which the weights are pre-stored in the core memory of the PEs. At each cycle, the input elements are fed to the PEs and multiplied by the pinned weights, producing partial sums. This process is vertically distributed over the columns of the systolic array to produce the output results.

Figure 2 shows the architecture of the Edge TPU and the microarchitecture of each PE within its 2D systolic array. The Edge TPU includes activation memory, instruction memory, parameter memory, a controller, and PEs. The controller transfers data between the off-chip memory and the PEs, fetches parameters and activations into the buffers, and reads the instructions that will be executed on the PEs. The Edge TPU supports a variety of commonly-used operations in DNN models [8]. Each PE in the Edge TPU has four parallel MAC units, as opposed to the cloud TPU v1, which has only one MAC unit per PE. As shown in Fig. 2, the PEs in the Edge TPU have a single-instruction-multiple-data (SIMD) architecture: they can perform the MAC operation on four data values at the same time using four 8-bit fixed-point compute lanes. Moreover, each PE has a core memory and a PE memory. The PE memory is designed as a first-in-first-out (FIFO) buffer that is shared among all PEs and used to store model activations, partial results, and final outputs. Since the Edge TPU has a weight-stationary systolic array, the core memory is used to store model parameters, i.e., weights.

## III. Proposed Methodology to Deploy Small- and Medium-Sized Transformers on Edge TPU

## A. Existing Edge TPU Deployment Process
For full Edge TPU utilization, several requirements must be met; otherwise, only parts of the model will run on the Edge TPU. The Coral documentation [16] contains an exhaustive list of requirements and all supported operations. Here, we only focus on the requirements that are relevant to the Transformer architecture.
The Edge TPU only supports TensorFlow Lite (TFLite) models. TFLite is a lightweight version of TensorFlow [17] that is optimized for deployment on edge systems. With the TFLite interpreter, different delegates can be used depending on the hardware accelerator, such as NNAPI for Android devices, GPU for mobile GPUs, Hexagon for DSPs, Core ML for iOS devices, and libedgetpu, the focus of this work, for the Coral Edge TPU. Note that TFLite only supports a subset of all TensorFlow operations, and the Coral Edge TPU only supports a subset of all TFLite operations. A list of supported Edge TPU operations and any known limitations can be found at [16]. To fully utilize the TPU, the model must contain only supported Edge TPU operations.
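
For illustration, a minimal sketch of invoking a compiled model through the libedgetpu delegate follows; the model path is a placeholder, while the delegate-loading calls are the standard `tflite_runtime` API.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Load a compiled model and attach the Edge TPU delegate (libedgetpu).
interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",  # placeholder path
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")])
interpreter.allocate_tensors()

# Feed a zero tensor of the expected shape/dtype and run one inference.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
out = interpreter.get_tensor(interpreter.get_output_details()[0]["index"])
```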
Since the Edge TPU only supports 8-bit integer operations, any model to be deployed on the Edge TPU must be converted from 32-bit floating point (fp32) to signed or unsigned int8 for all parameters, activations, and operations. This can be done using either quantization-aware training (QAT) or post-training quantization (PTQ) with a representative dataset. In [18], it is shown that using QAT, BERT can maintain state-of-the-art results with 8-bit integer-only inference. Once the model has been converted to a quantized TFLite model, the Edge TPU compiler maps the supported operations to the TPU and leaves the remaining operations on the CPU. The compiler maps all supported operations onto one graph to be loaded onto the TPU, called the Edge TPU custom op. Currently, the Edge TPU graph only includes consecutive operations that are supported on the Edge TPU: once the compiler finds an operation in the model that is not supported by the TPU, all following operations are mapped to the CPU, regardless of whether they are supported by the TPU. Another deployment requirement for the Edge TPU is that all tensor sizes must be constant at compilation time. After training, we change the batch size dimension to 1 and the sequence length dimension to 128. Moreover, the existing Edge TPU devices do not support embedding layers. Therefore, since the embedding layers make up only a small portion of the overall Transformer model, we leave that operation on the CPU for inference.
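
A minimal sketch of the PTQ path using the standard TFLite converter API is shown below; `bert_saved_model` and `calibration_examples` are placeholders for the trained model and a small set of representative inputs.

```python
import tensorflow as tf

def representative_data():
    # Yield a handful of real inputs so the converter can calibrate ranges.
    for example in calibration_examples:  # placeholder iterable
        yield [example]

converter = tf.lite.TFLiteConverter.from_saved_model("bert_saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Fail conversion if any op cannot be expressed with int8 kernels.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
with open("bert_int8.tflite", "wb") as f:
    f.write(converter.convert())
```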
To verify whether modifying Transformers based on the existing requirements mentioned above is sufficient to deploy them on the Edge TPU, we adapted the BERT-Tiny through BERT-Large models accordingly and tried to deploy them on the Edge TPU. This experiment results in compilation failure or partial compilation for all the models, mainly because Transformers include operations that are currently not supported by the Edge TPU. Thus, in the following subsections, we develop several methodologies to resolve the current limitations on deploying Transformers on the Edge TPU.

## B. Proposed Edge TPU Deployment Process for Transformers
To address the existing challenges of deploying Transformers on the Edge TPU, their computational graphs must be refactored so that their operations map to operations supported by the Edge TPU, without altering the models' functionality. Thus, we developed a flexible in-house TensorFlow Transformer model using custom Keras layers. This custom Transformer model allows us to modify any operation in our model and replace it where necessary. To ensure backward compatibility with existing Transformers, we map pre-trained weights onto our model and verify that both models yield the same output for the same input. In the following, we discuss two of the operations in Transformers that cause compilation failure on the Edge TPU and propose methods to refactor them such that they can be readily deployed on the Edge TPU.

1) Refactoring GELU Activation Function: As mentioned, GELU [11] is used in many Transformers and is defined by the following equation:
$$
\operatorname{gelu}\left( x\right) = \frac{1}{2}x\left\lbrack {1 + \operatorname{erf}\left( \frac{x}{\sqrt{2}}\right) }\right\rbrack \tag{1}
$$

where $\operatorname{erf}\left( x\right)$ is the Gaussian error function, defined as:

$$
\operatorname{erf}\left( x\right) = \frac{2}{\sqrt{\pi }}{\int }_{0}^{x}{e}^{-{t}^{2}}{dt} \tag{2}
$$

The GELU activation function is not currently supported on Edge TPU. Several approximations for GELU have been developed, including those based on transcendental functions [11] and those based on polynomial equations [18]. For our purposes, we use the polynomial-based approximation of GELU known as I-GELU where $\operatorname{erf}\left( x\right)$ is approximated as:

$$
L\left( x\right) = \operatorname{sgn}\left( x\right) \cdot \left\lbrack {a \cdot {\left( \min \left( \left| x\right| , - b\right) + b\right) }^{2} + 1}\right\rbrack \tag{3}
$$

Fig. 3: (a) standard matrix-matrix dot product (b) matrix-matrix dot product using convolutions.

where $a = -0.2888$, $b = -1.769$, $\operatorname{sgn}$ denotes the sign function, and $\min$ denotes the minimum function. I-GELU is defined as:

$$
\operatorname{I\text{-}GELU}\left( x\right) = \frac{1}{2}x\left\lbrack {1 + L\left( \frac{x}{\sqrt{2}}\right) }\right\rbrack \tag{4}
$$

However, TFLite does not support the sign function, and the Edge TPU compiler does not support the absolute value function. Therefore, we further revised the GELU approximation, replacing the sign and absolute value functions with $\operatorname{sgn}(x) \approx \tanh(x \cdot 10^{3})$ and $\operatorname{abs}(x) \approx x \cdot \operatorname{sgn}(x)$, respectively. Thus, we approximate $L(x)$ in (3) as:

$$
L\left( x\right) = \tanh \left( {{10}^{3}x}\right) \left\lbrack {a{\left\lbrack \min \left( x \cdot \tanh \left( {10}^{3}x\right) , - b\right) + b\right\rbrack }^{2} + 1}\right\rbrack \tag{5}
$$

The proposed I-GELU approximation is supported by both TFLite and Edge TPU. Therefore, in the Transformers, we replace all instances of GELU with our approximation of GELU.
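
A small NumPy sketch of Eq. (5), restricted to the tanh, min, multiply, and add primitives discussed above, can be used to check the approximation against exact GELU; the check is illustrative only.

```python
import numpy as np
from math import erf

A_COEF, B_COEF = -0.2888, -1.769  # constants a and b from Eq. (3)

def L(x):
    """Eq. (5): erf approximation using only tanh, min, mul, and add."""
    sgn = np.tanh(1e3 * x)                # smooth stand-in for sign(x)
    absx = x * sgn                        # |x| realized as x * sgn(x)
    return sgn * (A_COEF * (np.minimum(absx, -B_COEF) + B_COEF) ** 2 + 1)

def i_gelu(x):
    return 0.5 * x * (1 + L(x / np.sqrt(2)))

x = np.linspace(-4, 4, 101)
exact = 0.5 * x * (1 + np.vectorize(erf)(x / np.sqrt(2)))
print(np.max(np.abs(i_gelu(x) - exact)))  # small worst-case error over [-4, 4]
```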
2) Refactoring Matrix-Matrix Dot Products for FC Layer: Many of the operations in Transformers are matrix-matrix dot products. Although the matrix-matrix dot product in the self-attention layer is supported by the Edge TPU, it cannot handle the matrix-matrix dot products in the FC layers, as described in the device documentation [16]. To perform matrix-matrix dot products in FC layers, we implement the dot product operation using convolutions. This can be done as follows: let $A$ be an $m \times n$ input matrix, $B$ be an $n \times k$ weight matrix, and $C$ be the $m \times k$ output matrix such that $A \cdot B = C$ . This is a standard matrix-matrix dot product, as shown in Fig. 3 (a). Now consider a convolution layer where we have $k$ convolution kernels, each with the size of $1 \times n$ called ${K}_{\text{conv }}$ (shown in Fig. 3b). We can map the weights from matrix $B$ one-to-one such that ${K}_{\text{conv }}\left\lbrack x\right\rbrack = {B}^{T}\left\lbrack x\right\rbrack$ . By convolving the kernels ${K}_{\text{conv }}$ across the input matrix $A$ with strides of 1 and no padding, the resulting matrix will be an $m \times k$ matrix identical to the original output matrix $C$ as illustrated in Fig. 3.
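
The equivalence is easy to verify numerically; the TensorFlow sketch below (with illustrative sizes) expresses the FC weights as $k$ kernels of shape $1 \times n$ and checks the convolution against the plain dot product.

```python
import numpy as np
import tensorflow as tf

m, n, k = 8, 16, 4                              # small illustrative sizes
A = np.random.randn(m, n).astype(np.float32)    # input matrix
B = np.random.randn(n, k).astype(np.float32)    # FC weight matrix

# k kernels of shape 1 x n slide over the m x n input with stride 1 and no
# padding, so each output row is one dot product and the result equals A @ B.
out = tf.nn.conv2d(A[None, :, :, None],         # NHWC input: (1, m, n, 1)
                   B[None, :, None, :],         # HWIO filters: (1, n, 1, k)
                   strides=1, padding="VALID")
print(np.allclose(tf.squeeze(out).numpy(), A @ B, atol=1e-4))  # True
```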
TABLE I: BERT models' specifications.

| Model | Hidden size | Attention Heads | Hidden Layers | Intermediate Size | Parameters (millions) |
|---|---|---|---|---|---|
| Tiny | 128 | 2 | 2 | 512 | 4.4 |
| Mini | 256 | 4 | 4 | 1024 | 11.2 |
| Small | 512 | 8 | 4 | 2048 | 28.8 |
| Medium | 512 | 8 | 8 | 2048 | 41.4 |
| Base | 768 | 12 | 12 | 3072 | 109.5 |
| Large | 1024 | 16 | 24 | 4096 | 335.1 |
Fig. 4: Computational graphs for (a) $\tanh \left( x\right)$ (b) $\operatorname{ReLU}\left( x\right)$ (c) I-GELU( $x$ ) activation functions.
Using the aforementioned strategies, we can successfully compile small- and medium-sized Transformers, such as BERT-Tiny to BERT-Medium, on Edge TPU. However, the compilation still fails for larger Transformers such as BERT-Base. Unfortunately, the compiler does not provide detailed information about why the larger models cannot compile, so it is unclear whether the compilation fails due to some fundamental hardware limit in the Edge TPU or if there is an issue with the compiler itself. Regardless, in the next section, we discuss methods to identify the source of the issue and resolve it.
## IV. Proposed Methodology to Deploy Large Transformers on Edge TPU

By comparing the architecture of BERT-Medium and BERT-Base (see Table I), we narrow down the possible cause of the compilation failure to the increased hidden size, attention heads, hidden layers, intermediate size, or some combination of these model parameters. Starting with the BERT-Medium architecture, we change one of the model parameters to match BERT-Base until we reproduce the issue. Using this strategy, we identify the two layers that cause the compilation to fail: the inner FC layer and the embedding layer.
Further, we observe that for the inner FC layer, which uses the intermediate size from Table I, the model compiles for BERT-Medium with a size of 2048 but does not compile for BERT-Base with a size of 3072. Motivated by this observation, we use a binary search to determine the maximum inner FC layer size that can be compiled for the Edge TPU. We find that, when followed by the I-GELU activation function, the maximum inner FC layer size for the Edge TPU is 2728. Moreover, the maximum supported size varies with the type of activation function: with no activation function, or with ReLU, Sigmoid, or TanH, the intermediate size can be at most 5376 neurons. This could be due to the computational size of the I-GELU activation function, as seen in its computational graph in Fig. 4c, which can lead to memory demands beyond what is available in the Edge TPU's PE memory.
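
A sketch of this search is below; `compiles(size)` is a hypothetical predicate that would build a model with the given intermediate size, convert it, and run the Edge TPU compiler, reporting success.

```python
def max_compilable_size(lo, hi, compiles):
    """Largest size in [lo, hi] for which compiles(size) holds,
    assuming compilability is monotone in the layer size."""
    best = lo
    while lo <= hi:
        mid = (lo + hi) // 2
        if compiles(mid):
            best, lo = mid, mid + 1   # mid works; search larger sizes
        else:
            hi = mid - 1              # mid fails; search smaller sizes
    return best

# e.g. max_compilable_size(2048, 3072, compiles) would return 2728
# for an inner FC layer followed by I-GELU, per the measurements above.
```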
Fig. 5: (a) standard convolution-based fully connected layer (b) fully connected layer partitioned across the output dimension.
To address this challenge, we propose partitioning the inner FC layer into two or more equal parts to reduce the size of the operations in the layer. For the BERT-Base model, we partition the inner FC layer into two parts, as shown in Figure 5. By partitioning the layer along the output dimension, we reduce the size of the operation by splitting it into two $m \times n/2$ dot products instead of one $m \times n$ dot product, with two I-GELU activations of width $n/2$. For the BERT-Base model, this leads to 1536 I-GELU neurons, which is below the maximum of 2728 neurons supported by the Edge TPU. At the end, we concatenate the two partial outputs to form the layer output. This approach allows the model with an intermediate size of 3072, which is used in BERT-Base, to compile successfully.
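
A NumPy sketch of this output-dimension split follows; the activation is a stand-in, and the equality holds because the activation is applied elementwise.

```python
import numpy as np

def fc_partitioned(A, W, bias, act, parts=2):
    """Split the FC weights along the output dimension, activate each slice,
    then concatenate -- functionally identical to the unpartitioned layer."""
    w_chunks = np.split(W, parts, axis=1)
    b_chunks = np.split(bias, parts)
    return np.concatenate(
        [act(A @ w + b) for w, b in zip(w_chunks, b_chunks)], axis=1)

relu = lambda x: np.maximum(x, 0)       # stand-in elementwise activation
A = np.random.randn(128, 768)
W = np.random.randn(768, 3072)
b = np.random.randn(3072)
assert np.allclose(fc_partitioned(A, W, b, relu), relu(A @ W + b))
```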
Although partitioning the inner FC layer resolves the compilation issue for models with larger intermediate sizes, we observe that the increased size of the embedding layer still causes compilation to fail. Therefore, we leverage a similar partitioning mechanism for the embedding layer across the output dimension. Since the embedding layer itself cannot be mapped to the Edge TPU, the model's input on the Edge TPU is the output of the embedding layer. As discussed earlier, we set the input sequence length to 128; therefore, using BERT-Base as an example, the Edge TPU input comprises three $128 \times 768$ matrices. Similar to how we partition the FC layer, we partition the embedding layers across the output dimension, which changes the Edge TPU input from three $128 \times 768$ matrices to six $128 \times 384$ matrices.

Using the aforementioned partitioning mechanisms for the inner FC layer and the embedding layer, we successfully compile and deploy the BERT-Base and BERT-Large models on the Edge TPU. To assess the validity of our approach for other types of Transformers, we create a left-right Transformer based on the model introduced in [1]. Without any modifications, it fails to compile; however, with our deployment methodology, we can compile and deploy this Transformer model on the Edge TPU as well, which demonstrates the effectiveness of our approach across architectures and sizes.

Fig. 6: Experimental setup. (a) Pi + NCS2 (b) Pi + Coral TPU (c) Coral Dev board (d) Jetson Nano.
## V. EXPERIMENTAL RESULTS
## A. Experimental Setup
After verifying the successful compilation of various-sized Transformer models on the Edge TPU, we evaluate its performance against well-known edge AI accelerators on the market. In particular, we investigate two experimental setups: (1) USB accelerators, where we compare the Intel NCS2 (Fig. 6a) with the Coral TPU USB accelerator (Fig. 6b), and (2) development boards, where we evaluate the Coral Edge TPU Dev Board (Fig. 6c) against the NVIDIA Jetson Nano (Fig. 6d). The USB accelerators are integrated as co-processors with a Raspberry Pi 4. Different settings are required to run the models on each of the edge devices: for the Raspberry Pi 4 and both Coral products, we use TFLite models with fp32 and int8 precision, respectively; for the NCS2, we use OpenVINO models with fp16 precision; and for the Jetson, we use TensorRT models with fp16 precision. The Jetson provides two operating modes, i.e., low-power mode and Max-N (high-power) mode. We use six BERT models for our experiments: Tiny, Mini, Small, Medium, Base, and Large. Due to the large size of the BERT-Base and BERT-Large models, we only use the development boards to run these models. Also, since the Jetson Nano could not compile BERT-Large, we only compare the Coral Dev board with the Raspberry Pi for that model.

Fig. 7: Inference latency measurements for all models and devices.
## B. Inference Latency Measurement
For inference latency measurements, we split the process into three parts: (1) load the model, (2) allocate the tensors depending on the platform, and (3) perform 100 inferences using a subset of the Microsoft Research Paraphrase Corpus (MRPC) dataset [19]. We measure the total time taken for the 100 inferences and report the mean inference time for one input sample. Figure 7 shows the inference results for all platforms using the six BERT models. All edge accelerators provide significant speedups over the baseline Raspberry Pi 4 CPU. For the smallest model, BERT-Tiny, the Coral Dev board has the fastest inference speed at 4 ms per inference. For the BERT Mini, Small, Medium, and Base models, the ordering from fastest to slowest is: Jetson high-power mode, Jetson low-power mode, Coral Dev board, Coral USB, NCS2, Raspberry Pi 4. The larger the model, the bigger the gap between the Coral products and the Jetson. Although we do not report the model load and allocation times, it is important to note that the Coral Dev board, Coral USB, Jetson, and Raspberry Pi all take less than 10 seconds to load and allocate BERT-Medium, while the NCS2 takes over 10 minutes to load and allocate the same model.
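
A sketch of this timing protocol for the TFLite-based platforms is below; `interpreter` is assumed to be an already-loaded interpreter with tensors allocated and an input set, as in the earlier sketch.

```python
import time

def mean_latency_ms(interpreter, n=100):
    """Time n back-to-back invocations and return the mean per-sample latency."""
    start = time.perf_counter()
    for _ in range(n):
        interpreter.invoke()
    return (time.perf_counter() - start) / n * 1e3

# print(f"{mean_latency_ms(interpreter):.1f} ms per inference")
```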
1) USB Accelerators: Both the NCS2 and the Coral USB accelerator improve over the baseline Raspberry Pi 4, except in the case of the NCS2 with BERT-Tiny. For BERT-Tiny, the NCS2 reaches only $0.76\times$ the baseline speed, while the Coral USB accelerator achieves a $5.2\times$ speedup. For BERT-Medium, we observe approximately a $6\times$ reduction in inference latency for the Coral USB accelerator compared to the NCS2.

2) Development Boards: Both development boards offer significant speedups compared to the Raspberry Pi 4. For the BERT-Tiny model, we observe $3.2\times$ and $5.2\times$ improvements over the baseline using the Jetson low- and high-power modes, respectively, and a $6.5\times$ improvement with the Coral Dev board. Performing the same comparison for the BERT-Base model, we observe $33\times$ and $48\times$ improvements over the baseline for the Jetson in low- and high-power modes, respectively, and an $11\times$ improvement with the Coral Dev board. For smaller models, the Coral Dev board is slightly faster than the Jetson, but for larger models the Jetson is up to $4.3\times$ faster. The faster inference of the Jetson, however, comes at the cost of significantly more chip resources and increased power consumption, as discussed in the next subsection.

Fig. 8: Dynamic power for all models and devices.
## C. Inference Power Measurements
We use a MakerHawk UM34C USB multimeter to measure the power dissipation of all devices, except for the Jetson, which has three internal sensors for measuring the input, CPU, and GPU power. To obtain the average power consumption, we run each model on each platform for five minutes and record the corresponding power profiles. Figure 8 shows the dynamic power measurements for all models and platforms. The Coral Dev board has the lowest power consumption across all experiments. As shown in the figure, power consumption remains roughly unchanged across models for all platforms except the Jetson, whose power consumption grows with the model size.

1) USB Accelerators: For the BERT-Tiny and BERT-Medium models, the NCS2 and Coral USB accelerator consume nearly $1.9\times$ and $1.6\times$ more power than the Raspberry Pi 4 alone, respectively. The Coral USB consumes $1.3\times$ less power than the NCS2.

2) Development Boards: For the BERT-Tiny model, the Coral Dev board achieves a $2.1\times$ reduction in power compared to the Raspberry Pi and a $4.8\times$ improvement over the Jetson in high-power mode. For the BERT-Base model, the Coral Dev board realizes $9.4\times$ and $4.8\times$ power reductions compared to the Jetson in high-power and low-power modes, respectively. Finally, for the BERT-Large model, the Coral Dev board achieves a $2.4\times$ reduction in power dissipation compared to the Raspberry Pi.

## D. Inference Energy
In Fig. 9, we compare the results for inference energy. Aside from the NCS2, all accelerators significantly improve inference energy over the baseline Raspberry Pi.
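
One natural way to relate the three reported quantities is $E = P \times t$; the sketch below shows the unit arithmetic with purely illustrative numbers, not the paper's measured values.

```python
def inference_energy_mj(dynamic_power_w, latency_ms):
    """E (mJ) = P (W) * t (ms), since 1 W * 1 ms = 1 mJ."""
    return dynamic_power_w * latency_ms

# Illustrative only: a 0.5 W device taking 40 ms per inference uses 20 mJ.
print(inference_energy_mj(0.5, 40.0))
```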
Fig. 9: Inference energy for all models and devices.
1) USB Accelerators: For the USB accelerators, we compare the BERT-Medium model. For the NCS2, the inference energy is $1.3\times$ worse than the Raspberry Pi 4 with no acceleration. Interestingly, the Coral USB accelerator provides a $5.9\times$ and $7.75\times$ improvement in inference energy compared to the Raspberry Pi alone and the Raspberry Pi with NCS2, respectively.

2) Development Boards: Compared to the Raspberry Pi, the Coral Dev board provides a $12\times$ decrease in inference energy for the BERT-Tiny model. Compared to Jetson-low and Jetson-high, the Coral Dev board provides $3\times$ and $6\times$ improvements, respectively, for the same model. For the BERT-Base model, the Coral Dev board is $1.6\times$ and $2.5\times$ more efficient than Jetson-low and Jetson-high, and $31\times$ more efficient than the Raspberry Pi. Finally, for the BERT-Large model, the Coral Dev board achieves a notable $35\times$ energy saving compared to the Raspberry Pi.

## VI. CONCLUSION
This paper provides a methodology to deploy Transformer models on Edge TPU accelerators by identifying the layers in Transformers that are not supported by the Edge TPU and refactoring their computational graph. We provide an extensive comparison of the leading edge devices on the market for Transformer models. Our methodology can deploy various Transformer architectures on the Coral Edge TPU and achieves real-time inference while maintaining the lowest energy consumption among the edge devices. We show that by adopting our approach on the Coral USB Accelerator, inference for medium-sized Transformers can be accelerated by nearly $10\times$ while consuming $6\times$ less energy. Further, for large Transformers, ours may be the only viable approach due to the memory constraints of other edge devices.

## REFERENCES

[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.

[2] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877-1901, 2020.

[3] J. Yu, Z. Wang, V. Vasudevan, L. Yeung, M. Seyedhosseini, and Y. Wu, "CoCa: Contrastive captioners are image-text foundation models," arXiv preprint arXiv:2205.01917, 2022.

[4] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," 2018. [Online]. Available: https://arxiv.org/abs/1810.04805

[5] I. Turc, M.-W. Chang, K. Lee, and K. Toutanova, "Well-read students learn better: On the importance of pre-training compact models," arXiv preprint arXiv:1908.08962v2, 2019.

[6] NVIDIA. Jetson Nano module datasheet. [Online]. Available: https://developer.nvidia.com/embedded/dlc/jetson-nano-system-module-datasheet

[7] Intel. Intel Neural Compute Stick 2. [Online]. Available: https://software.intel.com/en-us/neuralcompute-stick

[8] Google. Coral AI, "TensorFlow models on the Edge TPU," 2020. [Online]. Available: https://coral.ai/docs/edgetpu/models-intro/#compatibility-overview

[9] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., "An image is worth 16x16 words: Transformers for image recognition at scale," arXiv preprint arXiv:2010.11929, 2020.

[10] G. Menghani, "Efficient deep learning: A survey on making deep learning models smaller, faster, and better," arXiv preprint arXiv:2106.08962, 2021.

[11] D. Hendrycks and K. Gimpel, "Gaussian error linear units (GELUs)," 2016. [Online]. Available: https://arxiv.org/abs/1606.08415

[12] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929-1958, 2014.

[13] A. Yazdanbakhsh, B. Akin, and K. K. Seshadri, "An evaluation of Edge TPU accelerators for convolutional neural networks," arXiv preprint arXiv:2102.10423, 2021.

[14] M. E. Elbtity, P. S. Chandarana, B. Reidy, J. K. Eshraghian, and R. Zand, "APTPU: Approximate computing based tensor processing unit," IEEE Transactions on Circuits and Systems I: Regular Papers, 2022.

[15] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers et al., "In-datacenter performance analysis of a tensor processing unit," in Proc. of the 44th Annual Int. Symp. on Comput. Architecture, 2017, pp. 1-12.

[16] Google. Coral AI, "Edge TPU inferencing overview," 2020. [Online]. Available: https://coral.ai/docs/edgetpu/inference/

[17] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, "TensorFlow: Large-scale machine learning on heterogeneous systems," 2015, software available from tensorflow.org. [Online]. Available: https://www.tensorflow.org/

[18] S. Kim, A. Gholami, Z. Yao, M. W. Mahoney, and K. Keutzer, "I-BERT: Integer-only BERT quantization," in International Conference on Machine Learning. PMLR, 2021, pp. 5506-5518.

[19] W. B. Dolan and C. Brockett, "Automatically constructing a corpus of sentential paraphrases," in Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005.

papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/PibYaG2C7An/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ EFFICIENT DEPLOYMENT OF TRANSFORMER MODELS ON EDGE TPU ACCELERATORS: A REAL SYSTEM EVALUATION
Abstract-Transformer models have become a dominant architecture in the world of machine learning. From natural language processing to more recent computer vision applications, Transformers have shown remarkable results and established a new state-of-the-art in many domains. However, this increase in performance has come at the cost of ever-increasing model sizes requiring more resources to deploy. Machine learning (ML) models are used in many real-world systems, such as robotics, mobile devices, and Internet of Things (IoT) devices, that require fast inference with low energy consumption. For battery-powered devices, lower energy consumption directly translates into longer battery life. To address these issues, several edge AI accelerators have been developed. Among these, the Coral Edge TPU has shown promising results for image classification while maintaining very low energy consumption. Many of these devices, including the Coral TPU, were originally designed to accelerate convolutional neural networks, making deployment of Transformers challenging. Here, we propose a methodology to deploy Transformers on Edge TPU. We provide extensive latency, power, and energy comparisons among the leading-edge devices and show that our methodology allows for real-time inference of large Transformers while maintaining the lowest power and energy consumption of the leading-edge devices on the market.
Index Terms-Tensor Processing Unit (TPU), Transformer Models, Edge AI Accelerators, BERT.
§ I. INTRODUCTION
Since the introduction of Transformer models in 2017 [1], they have quickly risen to prominence in many areas, such as natural language processing and computer vision. These models have shown state-of-the-art results in a wide domain of tasks from machine translation [1] and question-answering [2] to computer vision tasks like image segmentation [3]. Many applications, such as self-driving cars, IoT devices, satellites, drones, and robots, require deploying models for real-time inference using low-power energy-constrained systems. Transformer-based models, however, often include a large number of processing layers, along with hundreds of millions of parameters. For instance, the Bidirectional Encoder Representations from Transformers (BERT) [4] models contain 109 million and 340 million parameters for the Base and Large models, respectively [5]. Therefore, deploying such massive models at the edge for real-time applications with tight restrictions on power and energy is challenging.
The surge in demand for specialized hardware for AI applications has resulted in a rapidly expanding industry of edge AI accelerators. Anticipating this trend, several companies have developed their own specialized accelerators. The NVIDIA Jetson Nano [6] is a low-cost development board for machine learning (ML) applications that employs NVIDIA TensorRT as the main driver. The Intel Movidius Neural Compute Stick 2 (NCS2) [7] is a small, low-power USB co-processor that enables the deployment of Deep Neural Networks (DNNs) and is powered by the Myriad Vision Processing Unit (VPU). Google's Coral Edge TPU is another device that leverages tensor processing units (TPUs) to accelerate ML applications. The Coral TPU is used as a co-processor on Coral's Dev Board, as well as in a USB accelerator [8] that can be integrated with tiny computers such as the Raspberry Pi. With a peak performance of four tera-operations per second (TOPS) at two TOPS/W, the Coral Edge TPU is one of the promising technologies for realizing real-time Transformer models. While several studies have used the Coral TPU to accelerate DNN applications, to the best of the authors' knowledge, no work has deployed Transformer-based models on Coral Edge TPU accelerators.

Herein, we propose a methodology to deploy Transformer models on the Coral Edge TPU. Because Transformers are often very large, training them is time-consuming, computationally expensive, and often requires very large datasets that are not always publicly available. For these reasons, it is crucial that our methodology support a wide range of existing Transformer architectures, such as Vision Transformers (ViT) [9], left-right Transformers, a.k.a. Encoder-Decoder Transformers [1], [2], and BERT-like [4] Transformers, without any need for retraining, aside from possible retraining associated with quantization. Here, we modify the computational graph to allow the model to run on the Edge TPU while remaining functionally identical to the original model. While common model optimization techniques such as pruning, knowledge distillation, hyper-parameter optimization, and neural architecture search ([10] provides an overview of these techniques) can be used to improve the size, latency, power consumption, and energy consumption of models, the focus of this paper is the efficient deployment of existing Transformer architectures on the Coral Edge TPU. Some or all of the aforementioned optimization techniques can be applied on top of our work to further improve the latency and power consumption of models. Although we focus on the BERT Transformer architecture for the main body of the work, we show that this methodology generalizes to both BERT-like and left-right Transformers.

§ II. BACKGROUND
§ A. TRANSFORMER MODEL
Transformer models can vary slightly in design, but the core architecture remains the same. Transformers use embedding layers to turn tokens into vectors of size $d_{\text{model}}$, a.k.a. the hidden size. The exact number of embedding layers varies from one model to another; for instance, BERT uses three embedding layers, as shown in Fig. 1. Transformers also employ a stack of attention heads to capture different learned attention associations using scaled dot-product attention, which maps queries and key-value pairs to outputs. Scaled dot-product attention uses a dot product between the queries (Q) and the keys (K) to compute attention scores. These scores are scaled down to a mean of 0 and a variance of 1, and the Softmax function is applied to generate weights for the values. The weights are then multiplied by the values (V) to generate the weighted attention scores for the tokens. At the end of the multi-headed attention layer, the values from each attention head are concatenated together, passed to a fully-connected (FC) layer, and then an activation function is applied.

<graphics>

Fig. 1: BERT Architecture with $n$ encoder layers [1].
Most Transformers, including GPT-3 [2] and BERT, use the Gaussian Error Linear Unit (GELU) [11] as the activation function, which combines the non-linearity of Rectified Linear Units (ReLU) with the regularization property of Dropout [12]. The output of the FC layer is added to previous layers using a residual connection. In the Encoder, these values are passed to two FC layers, where the inner FC layer has size $d_{ff}$, a.k.a. the intermediate size, and the outer FC layer has size $d_{\text{model}}$. Again, the output is added to previous layers and normalized using residual connections. Finally, these values are passed to the next encoder layer or, if there is none, to the classification head/decoder layer. Left-right Transformers have a decoder layer that is nearly identical to the encoder layer, except it has one extra multi-headed attention layer before the feed-forward layers, called the encoder-decoder multi-headed attention. The encoder-decoder multi-headed attention is the same as the encoder multi-headed attention except that the queries come from the decoder, while the keys and values come from the encoder [1]. BERT was introduced in 2018 and builds upon prior Transformer architectures, with one key difference being bi-directionality. Unlike prior Transformer models, BERT is designed to train on both left and right contexts of text. Using a pre-trained BERT model and one additional classification layer, BERT can be fine-tuned to perform various language tasks.

<graphics>

Fig. 2: (a) Edge TPU architecture. (b) PE structure [13].
§ B. CORAL EDGE TPU ARCHITECTURE
In 2015, Google launched the TPU project, adopting the systolic array architecture to accelerate DNN operations [13]. The first version of Google's TPU was designed only to accelerate DNN inference in the cloud. In 2019, Google launched a smaller, low-power version of the TPU, called the Edge TPU, suited to accelerating DNN inference at the edge. The Edge TPU uses 8-bit integer (int8) multiply-and-accumulate (MAC) core units in its processing elements (PEs) [8].

In general, the systolic array architecture includes a set of processing elements formed into single- or multi-dimensional arrays that can collectively perform computation on data brought from memory without accessing it from memory multiple times. The systolic arrays developed for ML acceleration are designed to implement matrix-matrix, matrix-vector, and vector-vector multiplications, which are the dominant operations in ML workloads. Systolic arrays increase performance by reusing the values fetched from memory and reducing main memory accesses [14]. The dataflow in the systolic array is a mapping scheme that depends on the microarchitecture of the PEs and determines how the input data is fed to the array and how the partial results and outputs are generated and stored. Google adopted the weight-stationary dataflow in their cloud TPU and Edge TPU designs [15], in which the weights are pre-stored in the core memory of the PEs. At each cycle, the input elements are fed to the PEs and multiplied by the pinned weights, producing partial sums. This process is vertically distributed over the columns of the systolic array to produce the output results.

Figure 2 shows the architecture of the Edge TPU and the microarchitecture of each PE within its 2D systolic array. The Edge TPU includes activation memory, instruction memory, parameter memory, a controller, and PEs. The controller transfers data between the off-chip memory and the PEs, fetches parameters and activations into the buffers, and reads the instructions that will be executed on the PEs. The Edge TPU supports a variety of commonly-used operations in DNN models [8]. Each PE in the Edge TPU has four parallel MAC units, as opposed to the cloud TPU v1, which has only one MAC unit per PE. As shown in Fig. 2, the PEs in the Edge TPU have a single-instruction-multiple-data (SIMD) architecture: they can perform the MAC operation on four data values at the same time using four 8-bit fixed-point compute lanes. Moreover, each PE has a core memory and a PE memory. The PE memory is designed as a first-in-first-out (FIFO) buffer that is shared among all PEs and used to store model activations, partial results, and final outputs. Since the Edge TPU has a weight-stationary systolic array, the core memory is used to store model parameters, i.e., weights.

§ III. PROPOSED METHODOLOGY TO DEPLOY SMALL- AND MEDIUM-SIZED TRANSFORMERS ON EDGE TPU
§ A. EXISTING EDGE TPU DEPLOYMENT PROCESS
For full Edge TPU utilization, several requirements must be met; otherwise, only parts of the model will run on the Edge TPU. The Coral documentation [16] contains an exhaustive list of requirements and all supported operations. Here, we only focus on the requirements that are relevant to the Transformer architecture.
The Edge TPU only supports TensorFlow Lite (TFLite) models. TFLite is a lightweight version of TensorFlow [17] that is optimized for deployment on edge systems. With the TFLite interpreter, different delegates can be used depending on the hardware accelerator, such as NNAPI for Android devices, GPU for mobile GPUs, Hexagon for DSPs, Core ML for iOS devices, and libedgetpu, the focus of this work, for the Coral Edge TPU. Note that TFLite only supports a subset of all TensorFlow operations, and the Coral Edge TPU only supports a subset of all TFLite operations. A list of supported Edge TPU operations and any known limitations can be found at [16]. To fully utilize the TPU, the model must contain only supported Edge TPU operations.

Since the Edge TPU only supports 8-bit integer operations, any models aimed to be deployed on Edge TPU must be converted from 32-bit floating point (fp32) to int8 or unsigned int8 for all parameters, activations, and operations. This can be done using either quantization-aware training (QAT) or post-training quantization (PTQ) with a representative dataset. In [18], it is shown that using QAT, BERT can maintain state-of-the-art results using 8-bit integer-only inference. Once the model has been converted to a quantized TFLite model, the Edge TPU compiler maps the supported operations to the TPU and leaves the remaining operations on the CPU. The compiler maps all supported operations onto one graph to be loaded onto the TPU called the Edge TPU custom op. Currently, the Edge TPU graph only includes consecutive operations that are supported on Edge TPU. Once the compiler finds an operation in the model that is not supported by the TPU, all the following operations will be mapped to the CPU, regardless of being supported by TPU or not. Another deployment requirement for Edge TPU is that all tensor sizes should be constant at compilation time. After training, we change the batch size dimension to 1 and the sequence length dimension to 128 . Moreover, the existing Edge TPU devices do not support embedding layers. Therefore, since the embedding layers make up only a small portion of the overall Transformer model, we leave the operation to run on the CPU for inference.
To verify whether modifying Transformers based on the existing requirements mentioned above is sufficient to successfully deploy them on the Edge TPU, we adapted the BERT-Tiny through BERT-Large models accordingly and tried to deploy them on the Edge TPU. This experiment results in compilation failure or partial compilation for all the models, mainly because Transformers include operations that are currently not supported by the Edge TPU. Thus, in the following subsections, we develop several methodologies to resolve the current limitations on deploying Transformers on the Edge TPU.

§ B. PROPOSED EDGE TPU DEPLOYMENT PROCESS FOR TRANSFORMERS

To address the existing challenges of deploying Transformers on the Edge TPU, their computational graphs must be refactored to replace unsupported operations with operations supported by the Edge TPU, without altering the model's functionality. We therefore developed a flexible in-house TensorFlow Transformer model using custom Keras layers. This custom Transformer model allows us to modify any operation in the model and replace it where necessary. To ensure backward compatibility with existing Transformers, we map pre-trained weights onto our model and verify that both models yield the same output for the same input. In the following, we discuss two of the operations in Transformers that cause compilation failure on the Edge TPU and propose methods to refactor them so that they can be readily deployed.
1) Refactoring the GELU Activation Function: As mentioned, GELU [11] is used in many Transformers and is defined by the following equation:

$$
\operatorname{gelu}(x) = \frac{1}{2} x \left[ 1 + \operatorname{erf}\left( \frac{x}{\sqrt{2}} \right) \right] \tag{1}
$$

where $\operatorname{erf}(x)$ is the Gaussian error function, defined as:

$$
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}} \, dt \tag{2}
$$

The GELU activation function is not currently supported on the Edge TPU. Several approximations for GELU have been developed, including those based on transcendental functions [11] and those based on polynomial equations [18]. For our purposes, we use the polynomial-based approximation of GELU known as I-GELU, in which $\operatorname{erf}(x)$ is approximated as:
$$
L(x) = \operatorname{sgn}(x) \cdot \left[ a \cdot \left( \min(|x|, -b) + b \right)^{2} + 1 \right] \tag{3}
$$

where $a = -0.2888$, $b = -1.769$, $\operatorname{sgn}$ denotes the sign function, and $\min$ denotes the minimum function. I-GELU is then defined as:

$$
\text{I-GELU}(x) = \frac{1}{2} x \left[ 1 + L\left( \frac{x}{\sqrt{2}} \right) \right] \tag{4}
$$

Fig. 3: (a) Standard matrix-matrix dot product; (b) matrix-matrix dot product using convolutions.
However, TFLite does not support the sign function, and the Edge TPU compiler does not support the absolute value function. Therefore, we further revised the GELU approximation, replacing the sign and absolute value functions with $\operatorname{sgn}(x) \approx \tanh(10^{3} \cdot x)$ and $\operatorname{abs}(x) \approx x \cdot \operatorname{sgn}(x)$, respectively. Thus, we approximate $L(x)$ in (3) as:

$$
L(x) = \tanh(10^{3} x) \left[ a \left[ \min\left( x \cdot \tanh(10^{3} x), -b \right) + b \right]^{2} + 1 \right] \tag{5}
$$

The resulting I-GELU approximation is supported by both TFLite and the Edge TPU. Therefore, we replace all instances of GELU in the Transformers with this approximation.
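To make the refactoring concrete, the sketch below implements equations (4) and (5) with plain TensorFlow ops that TFLite and the Edge TPU compiler accept; it is a minimal illustration rather than our exact layer code:

```python
# Sketch of the Edge-TPU-friendly I-GELU from Eqs. (4) and (5).
import tensorflow as tf

A, B = -0.2888, -1.769  # constants a and b from Eq. (3)

def soft_sign(x):
    # tanh(1e3 * x) approximates sgn(x) without an unsupported Sign op.
    return tf.tanh(1e3 * x)

def l_approx(x):
    s = soft_sign(x)
    # x * s approximates |x| without an unsupported Abs op; note -B = 1.769.
    return s * (A * (tf.minimum(x * s, -B) + B) ** 2 + 1.0)

def i_gelu(x):
    return 0.5 * x * (1.0 + l_approx(x / tf.sqrt(2.0)))
```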
2) Refactoring Matrix-Matrix Dot Products for FC Layers: Many of the operations in Transformers are matrix-matrix dot products. Although the matrix-matrix dot product in the self-attention layer is supported by the Edge TPU, the device cannot handle the matrix-matrix dot products in the FC layers, as described in its documentation [16]. We therefore implement the dot product operation in FC layers using convolutions. This can be done as follows: let $A$ be an $m \times n$ input matrix, $B$ be an $n \times k$ weight matrix, and $C$ be the $m \times k$ output matrix such that $A \cdot B = C$. This is a standard matrix-matrix dot product, as shown in Fig. 3(a). Now consider a convolution layer with $k$ convolution kernels $K_{\text{conv}}$, each of size $1 \times n$ (shown in Fig. 3(b)). We map the weights from matrix $B$ one-to-one such that $K_{\text{conv}}[x] = B^{T}[x]$. By convolving the kernels $K_{\text{conv}}$ across the input matrix $A$ with a stride of 1 and no padding, the resulting matrix is an $m \times k$ matrix identical to the original output matrix $C$, as illustrated in Fig. 3.
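The equivalence can be checked directly in a few lines; the following sketch (our own illustration, not the deployment code itself) realizes the $1 \times n$ kernels with a standard Conv2D and compares against the plain dot product:

```python
# Verify that a Conv2D with k kernels of size (1, n) reproduces A @ B.
import numpy as np
import tensorflow as tf

m, n, k = 4, 8, 3
A = np.random.randn(m, n).astype(np.float32)
B = np.random.randn(n, k).astype(np.float32)

# Input laid out as a single-channel "image" of height m and width n.
x = A.reshape(1, m, n, 1)
conv = tf.keras.layers.Conv2D(filters=k, kernel_size=(1, n),
                              strides=1, padding="valid", use_bias=False)
conv.build(x.shape)
# Kernel tensor has shape (1, n, 1, k); load B so kernel f equals column f of B.
conv.set_weights([B.reshape(1, n, 1, k)])

C_conv = conv(x).numpy().reshape(m, k)
assert np.allclose(C_conv, A @ B, atol=1e-4)
```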
TABLE I: BERT models' specifications.

| Model  | Hidden Size | Attention Heads | Hidden Layers | Intermediate Size | Parameters (millions) |
| ------ | ----------- | --------------- | ------------- | ----------------- | --------------------- |
| Tiny   | 128         | 2               | 2             | 512               | 4.4                   |
| Mini   | 256         | 4               | 4             | 1024              | 11.2                  |
| Small  | 512         | 8               | 4             | 2048              | 28.8                  |
| Medium | 512         | 8               | 8             | 2048              | 41.4                  |
| Base   | 768         | 12              | 12            | 3072              | 109.5                 |
| Large  | 1024        | 16              | 24            | 4096              | 335.1                 |
Fig. 4: Computational graphs for the (a) $\tanh(x)$, (b) $\operatorname{ReLU}(x)$, and (c) I-GELU($x$) activation functions.

Using the aforementioned strategies, we can successfully compile small- and medium-sized Transformers, such as BERT-Tiny through BERT-Medium, on the Edge TPU. However, compilation still fails for larger Transformers such as BERT-Base. Unfortunately, the compiler does not provide detailed information about why the larger models fail to compile, so it is unclear whether the failure stems from a fundamental hardware limit of the Edge TPU or from an issue in the compiler itself. Regardless, in the next section, we discuss methods to identify the source of the issue and resolve it.
§ IV. THE PROPOSED METHODOLOGY TO DEPLOY LARGE TRANSFORMERS ON EDGE TPU

By comparing the architectures of BERT-Medium and BERT-Base (see Table I), we narrow down the possible cause of the compilation failure to the increased hidden size, attention heads, hidden layers, intermediate size, or some combination of these model parameters. Starting with the BERT-Medium architecture, we change one model parameter at a time to match BERT-Base until we reproduce the issue. Using this strategy, we identify the two layers that cause the compilation to fail: the inner FC layer and the embedding layer.

Further, we observe that for the inner FC layer, whose width is the intermediate size in Table I, the model compiles for BERT-Medium with a size of 2048 but does not compile for BERT-Base with a size of 3072. Motivated by this observation, we use a binary search to determine the maximum inner FC layer size that can be compiled on the Edge TPU. We find that, when followed by the I-GELU activation function, the maximum inner FC layer size for the Edge TPU is 2728. Moreover, we find that the maximum supported size varies with the type of activation function. For instance, with no activation function, or with ReLU, Sigmoid, or TanH, the intermediate size can be at most 5376 neurons. This is likely due to the computational size of the I-GELU activation function, visible in its computational graph in Fig. 4(c), which can increase memory demands beyond what is available in the Edge TPU's PE memory.
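The search itself is straightforward. The sketch below shows the idea, where `export_quantized_tflite(size)` is a hypothetical helper that builds and quantizes a model with the given inner FC size, and success is judged by the exit status of the real `edgetpu_compiler` command:

```python
# Binary search for the largest inner FC size the Edge TPU compiler accepts.
import subprocess

def compiles_on_edgetpu(size):
    path = export_quantized_tflite(size)  # hypothetical model-export helper
    result = subprocess.run(["edgetpu_compiler", path], capture_output=True)
    return result.returncode == 0

lo, hi = 2048, 3072  # known-good and known-bad sizes from Table I
while lo + 1 < hi:
    mid = (lo + hi) // 2
    if compiles_on_edgetpu(mid):
        lo = mid
    else:
        hi = mid
print("maximum compilable inner FC size:", lo)
```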
Fig. 5: (a) Standard convolution-based fully connected layer; (b) fully connected layer partitioned across the output dimension.

To address this challenge, we propose partitioning the inner FC layer into two or more equal parts to reduce the size of the operations in the layer. For the BERT-Base model, we partition the inner FC layer into two parts, as shown in Fig. 5. By partitioning the layer along the output dimension, we replace one dot product with an $m \times n$ output by two dot products with $m \times n/2$ outputs, followed by two I-GELU activations over $n/2$ neurons each. For BERT-Base, this yields 1536 I-GELU neurons, which is below the maximum of 2728 neurons supported by the Edge TPU. Finally, we concatenate the two partial results to realize the output. This approach allows the model with an intermediate size of 3072, as used in BERT-Base, to compile successfully.
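A minimal Keras sketch of this partitioning, using the `i_gelu` helper sketched earlier and two half-width layers whose outputs are concatenated, could look as follows (an illustration under our assumptions, not the exact production graph):

```python
# Sketch: inner FC layer of width 3072 split into two 1536-wide halves.
import tensorflow as tf

def partitioned_inner_fc(hidden, intermediate_size=3072, parts=2):
    outputs = []
    for _ in range(parts):
        # Each Dense would itself be realized with the convolution trick
        # from Sec. III-B.2 when targeting the Edge TPU.
        half = tf.keras.layers.Dense(intermediate_size // parts)(hidden)
        outputs.append(i_gelu(half))
    return tf.keras.layers.Concatenate(axis=-1)(outputs)
```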
Although partitioning the inner FC layer resolves the compilation issue for models with larger intermediate sizes, we still observe that the increased size of the embedding layer causes compilation failure. Therefore, we apply a similar partitioning mechanism to the embedding layer across the output dimension. Since the embedding layer itself cannot be mapped to the Edge TPU, the model's input for the Edge TPU is the output of the embedding layer. As discussed earlier, we set the input sequence length to 128; therefore, using BERT-Base as an example, the Edge TPU input consists of three $128 \times 768$ matrices. Analogously to the FC layer, we partition the embedding layers across the output dimension, which changes the Edge TPU input from three $128 \times 768$ matrices to six $128 \times 384$ matrices.

Using the aforementioned partitioning mechanisms for the inner FC layer and the embedding layer, we successfully compile and deploy the BERT-Base and BERT-Large models on the Edge TPU. To assess the validity of our approach for other types of Transformers, we create a left-right Transformer based on the model introduced in [1]. Without any modifications, this model fails to compile; leveraging our deployment methodology, however, we can compile and deploy it on the Edge TPU as well, demonstrating the effectiveness of our approach across various architectures and sizes.
Fig. 6: Experimental setup. (a) Pi + NCS2; (b) Pi + Coral TPU; (c) Coral Dev board; (d) Jetson Nano.

§ V. EXPERIMENTAL RESULTS

§ A. EXPERIMENTAL SETUP

After verifying the successful compilation of various-sized Transformer models on the Edge TPU, we evaluate its performance against well-known edge AI accelerators on the market. In particular, we investigate two experimental setups: (1) USB accelerators, where we compare the Intel NCS2 (Fig. 6a) with the Coral TPU USB accelerator (Fig. 6b), and (2) development boards, where we evaluate the Coral Edge TPU Dev Board (Fig. 6c) against the Nvidia Jetson Nano (Fig. 6d). The USB accelerators are integrated as co-processors with a Raspberry Pi 4. Different settings are required to run the models on each edge device. For the Raspberry Pi 4 and the two Coral products, we use TFLite models with fp32 and int8 precision, respectively. For the NCS2, we use OpenVINO models with fp16 precision. For the Jetson, we use TensorRT models with fp16 precision; the Jetson provides two operating modes, a low-power mode and a Max-N (high-power) mode. We use six BERT models for our experiments: Tiny, Mini, Small, Medium, Base, and Large. Due to the large size of the BERT-Base and BERT-Large models, we only run these models on the development boards. Also, since the Jetson Nano could not compile BERT-Large, we only compare the Coral Dev board with the Raspberry Pi for that model.
Fig. 7: Inference latency measurements for all models and devices.

§ B. INFERENCE LATENCY MEASUREMENT

For inference latency measurements, we split the process into three parts: (1) load the model, (2) allocate the tensors depending on the platform, and (3) perform 100 inferences using a subset of the Microsoft Research Paraphrase Corpus (MRPC) dataset [19]. We measure the total time taken for the 100 inferences and report the mean inference time for one input sample. Figure 7 shows the inference results for all platforms using the six BERT models. All edge accelerators provide significant speedup over the baseline Raspberry Pi 4 CPU. For the smallest model, BERT-Tiny, the Coral Dev board has the fastest inference at 4 ms per inference. For the BERT Mini, Small, Medium, and Base models, the ranking of inference latency from lowest to highest is: Jetson high-power mode, Jetson low-power mode, Coral Dev board, Coral USB, NCS2, Raspberry Pi 4. The larger the model, the bigger the gap between the Coral products and the Jetson. Although we do not report model load and allocation times, it is worth noting that the Coral Dev board, Coral USB, Jetson, and Raspberry Pi all take less than 10 seconds to load and allocate BERT-Medium, while the NCS2 takes over 10 minutes for the same model.
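For reproducibility, the measurement loop described above can be sketched for the TFLite-based platforms as follows (a simplified illustration; `interpreter` is the delegate-backed interpreter from Sec. III-A, and `samples` is a hypothetical list of 100 preprocessed MRPC inputs):

```python
# Mean per-sample latency over 100 inferences, as described above.
import time

input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

start = time.perf_counter()
for sample in samples:  # 100 preprocessed MRPC inputs (hypothetical)
    interpreter.set_tensor(input_index, sample)
    interpreter.invoke()
    _ = interpreter.get_tensor(output_index)
elapsed = time.perf_counter() - start
print(f"mean latency: {1000 * elapsed / len(samples):.2f} ms per sample")
```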
1) USB Accelerators: Both the NCS2 and the Coral USB accelerator improve over the baseline Raspberry Pi 4, except for the NCS2 on BERT-Tiny. For BERT-Tiny, the NCS2 reaches only 0.76× the baseline speed, i.e., a slowdown, while the Coral USB accelerator achieves a 5.2× speedup. For BERT-Medium, we observe approximately a 6× reduction in inference latency for the Coral USB accelerator compared to the NCS2.

2) Development Boards: Both development boards offer significant speedups compared to the Raspberry Pi 4. For the BERT-Tiny model, we observe 3.2× and 5.2× improvements over the baseline using the Jetson in low- and high-power modes, respectively, and a 6.5× improvement with the Coral Dev board. For the BERT-Base model, the Jetson achieves 33× and 48× improvements over the baseline in low- and high-power modes, respectively, while the Coral Dev board achieves an 11× improvement. For smaller models, the Coral Dev board is slightly faster than the Jetson, but for larger models the Jetson is up to 4.3× faster. The Jetson's faster inference, however, comes at the cost of significantly more chip resources and increased power consumption, as discussed in the next subsection.
Fig. 8: Dynamic power for all models and devices.

§ C. INFERENCE POWER MEASUREMENTS

We use a MakerHawk UM34C USB multimeter to measure the power dissipation of all devices, except for the Jetson, which has three internal sensors measuring the input, CPU, and GPU power. To obtain the average power consumption, we run each model on each platform for five minutes and record the corresponding power profiles. Figure 8 shows the dynamic power measurements for all models and platforms. The Coral Dev board has the lowest power consumption across all experiments. As shown in the figure, power consumption remains roughly unchanged across models for all platforms except the Jetson, whose power consumption grows with the model size.

1) USB Accelerators: For the BERT-Tiny and BERT-Medium models, the NCS2 and the Coral USB accelerator consume nearly 1.9× and 1.6× more power, respectively, than the Raspberry Pi 4 alone; the Coral USB thus consumes about 1.3× less power than the NCS2.

2) Development Boards: For the BERT-Tiny model, the Coral Dev board achieves a 2.1× reduction in power compared to the Raspberry Pi and a 4.8× improvement over the Jetson in high-power mode. For the BERT-Base model, the Coral Dev board realizes 9.4× and 4.8× power reductions compared to the Jetson in high-power and low-power modes, respectively. Finally, for the BERT-Large model, the Coral Dev board achieves a 2.4× reduction in power dissipation compared to the Raspberry Pi.
§ D. INFERENCE ENERGY

In Fig. 9, we compare the inference energy results. Aside from the NCS2, all accelerators significantly improve inference energy over the baseline Raspberry Pi.
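Since we report energy per inference, the quantity in Fig. 9 can be read as the product of the dynamic power from Fig. 8 and the per-sample latency from Fig. 7:

$$
E_{\text{inference}} = P_{\text{dynamic}} \times t_{\text{inference}}
$$

so a device can win on energy even when it loses on power or latency alone; for example, a device with half the power but triple the latency of another still consumes 1.5× more energy per inference.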
Fig. 9: Inference energy for all models and devices.

1) USB Accelerators: For the USB accelerators, we compare the BERT-Medium model. For the NCS2, the inference energy is 1.3× worse than the Raspberry Pi 4 with no acceleration. Interestingly, the Coral USB accelerator provides 5.9× and 7.75× improvements in inference energy compared to the Raspberry Pi alone and the Raspberry Pi with NCS2, respectively.

2) Development Boards: Compared to the Raspberry Pi, the Coral Dev board provides a 12× decrease in inference energy for the BERT-Tiny model. Compared to Jetson-low and Jetson-high, the Coral Dev board provides 3× and 6× improvements, respectively, for the same model. For the BERT-Base model, the Coral Dev board is 1.6× and 2.5× more efficient than Jetson-low and Jetson-high, respectively, and 31× more efficient than the Raspberry Pi. Finally, for the BERT-Large model, the Coral Dev board achieves a notable 35× energy saving compared to the Raspberry Pi.
§ VI. CONCLUSION

This paper provides a methodology for deploying Transformer models on Edge TPU accelerators by identifying the layers in Transformers that are not supported by the Edge TPU and refactoring their computational graphs. We provide an extensive comparison of the leading edge devices on the market for Transformer models. Our methodology can deploy various Transformer architectures on the Coral Edge TPU and achieves real-time inference while maintaining the lowest energy consumption among the evaluated edge devices. We show that, by adopting our approach on the Coral USB Accelerator, inference for medium-sized Transformers can be accelerated by up to nearly 10× while consuming 6× less energy. Further, for large Transformers, our approach may be the only viable option, given the memory constraints of the other edge devices.
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/gb6VM_pTd5E/Initial_manuscript_md/Initial_manuscript.md
ADDED
# ParaGAN: A Cloud Training Framework for Generative Adversarial Networks
ISCA 2023 Submission #6 - Confidential Draft - Do NOT Distribute!!

Abstract-Generative Adversarial Networks (GANs) have shown tremendous success in synthesizing realistic photos and videos in recent years. However, training a GAN to convergence is still a challenging task that requires significant computing power and is subject to training instability. To address these challenges, we propose ParaGAN, a cloud training framework for GANs optimized from both the system and numerical perspectives. To achieve this, ParaGAN implements a congestion-aware pipeline for latency hiding, hardware-aware layout transformation for improved accelerator utilization, and an asynchronous update scheme to optimize system performance. Additionally, from a numerical perspective, we introduce an asymmetric optimization policy to stabilize training. Our preliminary experiments show that ParaGAN reduces the training time of BigGAN from 15 days to just 14 hours on 1024 TPUs, achieving 91% scaling efficiency. Moreover, we demonstrate that ParaGAN enables the generation of unprecedented high-resolution ($1024 \times 1024$) images with BigGAN.
## I. INTRODUCTION

The last decade has witnessed the success of Generative Adversarial Networks (GANs) [6], which have a wide range of applications including image super-resolution [12], image translation [7], [23], and photo inpainting [5], [21]. However, training GANs at scale remains challenging because of the computational demands and optimization difficulties. Unlike Convolutional Neural Networks (CNNs) or Transformer-based architectures, where optimization amounts to gradient descent on a single model, a GAN contains two sub-networks to optimize: the generator and the discriminator. The generator samples from a noise distribution and produces a fake sample as close to a real sample as possible, while the discriminator evaluates the generated sample. The generator aims to fool the discriminator, and the discriminator tries to tell fake images from real ones. Since the two components are optimized toward contradicting goals, GANs are often difficult to train to convergence. Therefore, to speed up GAN training at large scale, we need a framework optimized from both the system and numerical perspectives.

Due to the difficulty of optimizing GANs, many state-of-the-art GAN models take days or even weeks to train. For instance, BigGAN [2] took 15 days on 8 V100 GPUs to train for 150k steps. Table I summarizes the reported training times of some state-of-the-art GAN models. This has made it difficult to quickly reproduce, evaluate, and iterate on GAN experiments. Moreover, current GAN frameworks usually support training on only a few nodes.

We argue that training speed is an important yet often ignored factor in the current GAN training landscape, and we propose to accelerate it with distributed training. However, distributed GAN training has several challenges. First, most data centers separate storage nodes from compute nodes for elasticity, but network congestion can occur from time to time, which prolongs the latency between nodes and hurts training throughput. Second, a data center usually hosts different types of accelerators, each with its own optimal hardware characteristics; if these are ignored, accelerators can be under-utilized. Last but not least, training GANs at scale can cause a convergence problem, in which the GAN loss does not converge to a stable equilibrium. Therefore, the framework has to consider both the system and numerical perspectives.
Fig. 1: ParaGAN scales to 1024 TPU accelerators at 91% scaling efficiency.

TABLE I: Training time and parameter counts for GANs trained on the ImageNet 2012 dataset.

| Model | Accelerator | Training Time | #Params |
| --- | --- | --- | --- |
| SNGAN [16] | 8× V100 | 3d 13.6h | 81.44M |
| SAGAN [22] | 8× V100 | 10d 18.7h | 81.47M |
| BigGAN [2] | 8× V100 | 15d | 158.42M |
| ContraGAN [9] | 8× V100 | 5d 3.5h | 160.78M |
| ProgressiveGAN¹ [11] | 8× V100 | 4d | 43.2M |

In this work, we present ParaGAN, a distributed training framework that supports large-scale distributed training for high-resolution GANs. We identify the performance bottlenecks when training at scale and optimize them for efficiency. To stabilize the training process, ParaGAN introduces an asynchronous update scheme and an asymmetric optimization policy. ParaGAN has a simple interface for building new GAN architectures, and it supports CPUs, GPUs, and TPUs. The main contributions of ParaGAN include:
Fig. 2: Typical GAN architecture.

- We design and implement a scalable distributed training framework for GANs with optimizations from both the system and numerical perspectives. With ParaGAN, the training time of BigGAN can be shortened from 15 days to 14 hours on 1024 TPU accelerators at 91% scaling efficiency, as shown in Fig. 1. ParaGAN also enables direct photo-realistic image generation at an unprecedented $1024 \times 1024$ resolution, which is $4 \times$ higher than the original BigGAN model.

- From the system perspective, we use a congestion-aware data pipeline and hardware-aware layout transformation to improve accelerator utilization.

- From the numerical perspective, to improve convergence for distributed GAN training, we present an asynchronous update scheme and an asymmetric optimization policy.

## II. BACKGROUND
As shown in Fig. 2, a GAN consists of a generator and a discriminator. The generator generates fake data samples, while the discriminator distinguishes between the generated samples and real samples as accurately as possible. The learning problem of GANs is a minimax optimization problem, whose goal is to reach an equilibrium of a two-player game:

$$
\min_{G} \max_{D} \; \mathbb{E}_{x \sim q_{\text{data}}(x)}\left[ \log D(x) \right] + \mathbb{E}_{z \sim p(z)}\left[ \log\left( 1 - D(G(z)) \right) \right]
$$

where $z \in \mathbb{R}^{d_z}$ is a latent variable drawn from the distribution $p(z)$. The discriminator seeks to maximize the sum of the log probabilities of correctly predicting real and fake samples, while the generator tries to minimize it. Formally, the convergence of a GAN is defined as a type of Nash equilibrium: neither network can improve its loss by unilaterally changing its strategy.
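As a minimal illustration of this objective (our sketch, not ParaGAN's API), the two losses can be written directly from the equation above, assuming the discriminator outputs probabilities in (0, 1):

```python
# Minimal sketch of the minimax losses from the equation above.
import tensorflow as tf

def discriminator_loss(d_real, d_fake):
    # D maximizes log D(x) + log(1 - D(G(z))); we minimize the negation.
    return -tf.reduce_mean(tf.math.log(d_real) + tf.math.log(1.0 - d_fake))

def generator_loss(d_fake):
    # G minimizes log(1 - D(G(z))).
    return tf.reduce_mean(tf.math.log(1.0 - d_fake))
```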
Since the two networks have contradicting goals, the training process of a GAN is a zero-sum game and can be very unstable. Recent works show that (i) gradient descent on GANs may converge to points that are not local minimax, in particular for non-convex games, which are common [4], [8], and (ii) gradient descent on GANs exhibits strong rotation around fixed points, which requires very small learning rates [1], [15]. GAN training is also sensitive to hyperparameters and initialization [14]. This is why GANs are considered difficult to optimize and take a long time to train.
There are existing GAN libraries [3], [10], [13], [14] for training state-of-the-art GANs. They provide standardized building blocks such as network backbones and evaluation metrics, making it easy to build new models. However, they focus less on system performance, and training a GAN still takes days if not weeks. [20] proposed a GAN-optimized hardware accelerator, but we aim to build a system that runs on the public cloud using commodity accelerators. If the training process can be massively parallelized, the GAN community will benefit from it.

In ParaGAN, we adopt a co-designed approach. On the system level, we identify that the performance bottlenecks are rooted in network congestion and low accelerator utilization when training on the cloud, and ParaGAN implements a congestion-aware data pipeline and hardware-aware layout transformation to mitigate these issues. On the optimization level, we observe that it is beneficial to decouple the training of the generator and discriminator, and ParaGAN proposes an asynchronous update scheme and an asymmetric optimization policy.

## III. DESIGN AND PROTOTYPICAL IMPLEMENTATIONS
In this section, we give an overview and discuss the design decisions of ParaGAN. We recognize that scalability is usually limited by the latency between nodes, and that numerical instability occurs more often when scaling up GAN training. We divide the following discussion into two parts and present our co-designed approach for system throughput and training stability.

## A. Programming Model

The design of ParaGAN is presented in Fig. 3. ParaGAN (blue region) is implemented on top of TensorFlow (green region), because TensorFlow provides the low-level APIs for model checkpointing, evaluation, and visualization. On top of TensorFlow, we provide high-level APIs for GANs, which include a scaling manager, evaluation metrics, and common network backbones. Users of ParaGAN can import from ParaGAN or define their own components. ParaGAN then performs layout transformation and invokes TensorFlow, which converts the model definition into a computational graph. An optional XLA [18] pass can then be performed. After that, training starts on the CPU host and the accelerators.

Fig. 3: Overview of ParaGAN architecture.

Listing 1: Interface of ParaGAN

```python
import paragan as pg

class Generator:
    def model_fn(latent_var, y):
        # generator model
        return output

class Discriminator:
    def model_fn(x, y):
        # discriminator model
        return output, out_logit

scale_mgr = pg.ScalingManager(config=cfg, bs=2048, num_workers=128)

g = Generator()
d = Discriminator()
gan = pg.Estimator(g, d)

# train
for step in range(cfg.max_steps):
    scale_mgr.train(gan)

# evaluate
scale_mgr.eval(metric='fid')
```
We introduce a few concepts in ParaGAN:

1) Scaling Manager: The scaling manager is responsible for tuning the hyperparameters that need adjustment during scaling. Users can start with the best hyperparameters from a single worker, and ParaGAN will scale them according to the number of workers using heuristics (e.g., linear scaling, cosine scaling), as in the sketch below.
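For instance, a linear scaling heuristic (a common rule of thumb; the function below is our illustration, not ParaGAN's actual implementation) might look like:

```python
# Hypothetical sketch of a scaling-manager heuristic: scale the single-worker
# learning rate and batch size linearly with the number of workers.
def scale_hyperparams(base_lr, base_bs, num_workers, rule="linear"):
    if rule == "linear":
        return base_lr * num_workers, base_bs * num_workers
    raise ValueError(f"unknown scaling rule: {rule}")

lr, bs = scale_hyperparams(base_lr=2e-4, base_bs=16, num_workers=128)
```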
2) Network Backbones: It is common to start by building upon existing GAN architectures. We provide several popular GAN architectures as backbones, including but not limited to:

- BigGAN [2];

- Deep Convolutional GAN (DCGAN) [17];

- Spectral Norm GAN (SNGAN) [16].

3) Evaluation Metrics: Evaluation metrics can be implemented differently across papers, which can cause inconsistency. We provide commonly used evaluation metrics, including the Fréchet Inception Distance (FID) and the Inception Score (IS).

## B. System Optimizations
To satisfy the scalability requirement, we design ParaGAN with optimizations on both I/O and computation.

We optimize I/O performance by building a congestion-aware data pipeline. In data centers, the compute and storage nodes are usually interconnected via Ethernet rather than high-speed InfiniBand, and the network traffic between them is not always stable, since the infrastructure is shared with other tenants. This causes problems as training scales, because latency fluctuates as the number of workers increases. Therefore, we implement a congestion-aware data pipeline to reduce the impact of network jitter.

To achieve higher accelerator utilization, we perform hardware-aware layout transformation. A data center usually hosts multiple types of accelerators, and different accelerators have different architectures and preferred data layouts. For example, Nvidia A100 GPUs prefer half-precision data in multiples of 64 and single-precision data in multiples of 32, while previous generations prefer multiples of 8. For TPU v3, the preferred data dimension is a multiple of 128. Using the preferred data layout can increase accelerator utilization, but it is usually left to the user to ensure this. Our hardware-aware layout transformation converts the data into an accelerator-friendly format to maximize utilization.
## C. Numerical Optimizations

One of the main contributions of ParaGAN is its use of asymmetric training to improve the stability of GANs. As the number of workers increases, a larger batch size can be used to speed up training. However, we have observed that large-batch GAN training is often unstable, and mode collapse occurs frequently. Mode collapse is a GAN failure mode that arises from the highly coupled optimization of the two networks. To address this problem, ParaGAN introduces an asymmetric optimization policy and an asynchronous update scheme, which help decouple the optimization process and prevent mode collapse.

## IV. IMPLEMENTATION

To start, we profiled BigGAN training using native TensorFlow [14]; the results are shown in Fig. 4. As we scaled the cluster from 8 to 1024 TPU workers, we observed a significant increase in idle time due to higher communication overhead. Nevertheless, convolution operations continued to take up the majority of the execution time, which suggests that GAN training is compute-bound. Therefore, our focus for achieving scalability in ParaGAN is on maximizing accelerator utilization.
Fig. 4: Operator usage profile when training at scale.

To achieve this goal, we use congestion-aware data pipelining to reduce data pipeline latency, hardware-aware layout transformation to increase accelerator utilization, and mixed-precision training with bfloat16 to reduce memory usage.

## A. Congestion-Aware Data Pipelining

Network jitter can have a significant impact on training throughput because of the gradient synchronization stage, where all workers synchronize their gradients at the end of each step; the time to complete this stage depends on the slowest worker.

Although both TensorFlow and PyTorch implement data pipelines to hide data loading latency, when severe network jitter occurs, data loading and pre-processing take much longer than usual and can become a bottleneck in large-scale distributed training. As shown in Fig. 4, when the number of workers scales from 8 to 1024, the system spends 13.6% more time idling, while the data outfeed time stays nearly constant. This indicates that the accelerators are busy waiting for data infeed and gradient synchronization, which reduces utilization.

ParaGAN dynamically adjusts the number of processes and the pre-processing buffer size in response to network variance. It does so by using a sliding window to monitor network latency at runtime. If the current latency exceeds a threshold $\lambda$ over the window, the system increases the number of threads and the buffer size for pre-fetching and pre-processing; once the latency falls back below $\lambda$, the system releases the pre-processing resources. This may increase shared memory usage, but shared memory is typically not a bottleneck and is often underutilized. A simplified version of this tuning loop is sketched below.
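The following sketch captures the idea under our own simplifying assumptions; `measure_batch_latency()` and the resizing hooks are hypothetical placeholders for the framework's internals:

```python
# Simplified sketch of the congestion-aware pipeline tuner described above.
from collections import deque

WINDOW, LAMBDA = 50, 0.2  # window length and latency threshold (seconds)
latencies = deque(maxlen=WINDOW)

def tune_pipeline(pipeline):
    latencies.append(measure_batch_latency())  # hypothetical probe
    avg = sum(latencies) / len(latencies)
    if avg > LAMBDA:
        pipeline.grow(threads=2, buffer_batches=4)  # hypothetical hooks
    else:
        pipeline.shrink_to_baseline()
```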
## B. Hardware-Aware Layout Transformation

Zero-padding is used in GANs when the input cannot fill the specified convolution dimension. For example, a $100 \times 100$ matrix needs 14 zeros of padding on each side to run on a $128 \times 128$ matrix unit. However, zero-padding hinders accelerator performance because memory is wasted on the padding, leading to lower accelerator and memory utilization.

We implement ParaGAN so that both the batch size and the feature dimensions are multiples of 128 whenever suitable. In NCHW format (batch size × number of channels × height × width), we ensure on the host side that N, H, and W are multiples of 128, so that accelerator memory can be used efficiently. A minimal sketch of this rounding is shown below.
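As a minimal sketch (our illustration, with the TPU v3 preference of 128 hard-coded), rounding a dimension up to the preferred multiple and padding accordingly could be written as:

```python
# Sketch: pad trailing spatial dims of an NCHW tensor up to multiples of 128.
import tensorflow as tf

def round_up(dim, multiple=128):
    return ((dim + multiple - 1) // multiple) * multiple

def pad_nchw(x):
    n, c, h, w = x.shape
    pad_h, pad_w = round_up(h) - h, round_up(w) - w
    return tf.pad(x, [[0, 0], [0, 0], [0, pad_h], [0, pad_w]])
```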
On top of the feature dimensions, ParaGAN also seeks opportunities to batch data, combining intermediate results into a multiple of the optimal layout dimension without affecting the results. Such opportunities can be found at reshape and matmul operators. For instance, if two input matrices are to be multiplied by the same weight matrix, we can concatenate the two inputs before the matrix multiplication, as in the sketch below. In some sense this is similar to operator fusion, but the key difference is that ParaGAN's layout transformation is hardware-dependent, so the fused result conforms to the optimal layout.
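The equivalence behind this batching trick is elementary; the toy check below (our illustration) concatenates two operands of a shared weight matrix into one larger, layout-friendlier matmul:

```python
# Two matmuls against a shared weight equal one matmul on the concatenation.
import numpy as np

W = np.random.randn(64, 64).astype(np.float32)
x1 = np.random.randn(64, 64).astype(np.float32)
x2 = np.random.randn(64, 64).astype(np.float32)

fused = np.concatenate([x1, x2], axis=0) @ W  # one 128x64 @ 64x64 matmul
assert np.allclose(fused, np.concatenate([x1 @ W, x2 @ W], axis=0), atol=1e-4)
```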
## V. Preliminary Evaluation

In this section, we aim to answer the following questions: 1) how does the performance of ParaGAN compare to other frameworks? 2) how much does each part of the system contribute to overall performance? and 3) what are the effects of the numerical optimizations on convergence?

We first evaluate the end-to-end performance of ParaGAN using three metrics:

- steps per second measures the number of steps ParaGAN can train per second;

- images per second measures the throughput of ParaGAN trained on the ImageNet 2012 dataset;

- time to solution measures the time it takes to reach 150k steps on ImageNet at $128 \times 128$ resolution.

We compare ParaGAN with other popular frameworks for end-to-end performance (Sec. V-B) and evaluate the scaling efficiency of ParaGAN (Sec. V-C).
## A. Experiment Setup

We choose BigGAN on the ImageNet ILSVRC 2012 dataset as the benchmark, because BigGAN has had a profound impact on high-resolution image generation and has a high computational requirement (Table I). ImageNet, in turn, contains a good variety of classes (1000) and is usually challenging to train on. For the hardware backend, we first compare the performance of different backends and then choose the TPU for accelerator availability reasons.

While we use BigGAN to benchmark ParaGAN, our framework is generally applicable to other GAN architectures and datasets, and it is not tightly coupled to any specific accelerator backend.

## B. Framework-level Experiments

In Fig. 5, we present a comparison of ParaGAN with StudioGAN [10] and native TensorFlow [14]. In each experiment, we train BigGAN on ImageNet at a resolution of $128 \times 128$. We use eight Tesla V100 GPUs for all settings except ParaGAN-8TPU.
Fig. 5: Throughput of different systems and hardware combinations.

We observe that ParaGAN outperforms both native TensorFlow and StudioGAN with 8 GPUs. We conjecture that the performance gain in the GPU setting is mainly attributable to the congestion-aware data pipeline and hardware-aware layout transformations. The performance gap is further pronounced when switching to the TPU as the accelerator. For availability reasons, the following sections mainly focus on the TPU.

## C. Scaling Experiments
We discuss the strong and weak scaling results in this section. In the strong scaling experiments, we keep the total workload constant and vary the number of workers to examine the speedup in time-to-solution. In the weak scaling experiments, we keep the per-worker workload (batch size per worker) constant and increase the number of workers.

1) Strong Scaling: For the strong scaling experiments, we fix the total batch size at 512 and train for 150k steps as the target workload. Note that, to be consistent with the other experiments, we train BigGAN at $128 \times 128$ resolution, which is smaller than the model trained in Fig. 1. We aim to study the effect of decreased per-worker workload when scaling.

As can be seen from Fig. 6, with an increasing number of workers, the time to solution decreases from over 30 hours to 3 hours. We note that the scaling efficiency drops from 128 to 512 workers (64 to 256 TPU chips). This is because, with the global batch size fixed at 512, the per-worker workload drops from 4 samples to 1 sample per batch, which under-utilizes the TPU; the time spent on communication then outweighs the computation. This is also verified by Fig. 6, where images per second barely improve with an increasing number of accelerator workers. However, when the workload saturates the accelerators, the scaling efficiency can be near-optimal, as shown in Fig. 1.

2) Weak Scaling: In the weak scaling experiments, we fix the batch size per worker and evaluate the performance of our framework as the number of workers increases. We first find the largest batch size for a single accelerator that does not lead to an out-of-memory error, and then use that batch size for every worker, so the workload is identical across workers. The weak scaling experiments examine how well ParaGAN handles communication with an increasing number of workers. As can be seen in Fig. 7, the trend in steps per second is relatively steady even with 1024 workers, showing that ParaGAN scales out to a large number of workers while keeping high scaling efficiency. It is worth noting that, as the number of workers grows, the system becomes more likely to suffer from network jitter and congestion; the relatively flat curve (Fig. 7a) indicates that the data pipeline optimization in ParaGAN is effective under congestion.
Fig. 6: Strong scaling with ParaGAN. Each TPU chip has two accelerators.

Fig. 7: Weak scaling with ParaGAN.

## D. Accelerator Utilization

The basic computing unit of the TPU is the MXU (matrix-multiply unit), and higher MXU utilization is more desirable. We compare the accelerator utilization of BigGAN at $128 \times 128$ on the baseline [14] and on ParaGAN. Fig. 8 shows that ParaGAN clearly outperforms the native implementation with higher MXU utilization across different TPU configurations. We wish to highlight that even a 2% improvement can be important when scaling to thousands of workers.

It is also worth noting that, as the number of accelerators increases, the amount of communication increases, yet ParaGAN maintains higher utilization than the native implementation, with a growing gap. This indicates that computation still dominates the training time compared to native TensorFlow, and that ParaGAN keeps up when scaling out.

The data pipeline provides an 8-15% performance improvement over the baseline. As the number of accelerators increases, network jitter caused by congestion becomes more likely, making data loading the slowest link in the training process. In ParaGAN, we try to keep the accelerators saturated by dynamically adjusting the buffer and CPU budget for the data pipeline. This is generally applicable, and ParaGAN enables this feature by default.
Fig. 8: Accelerator utilization of BigGAN trained with native TensorFlow and ParaGAN.

Fig. 9: Data pipeline latency.

We compare the performance of our congestion-aware pipeline with TensorFlow's implementation. To ensure the results are comparable, both are run at the same time on the same type of machine with the same dataset directory, and latency is measured as the time taken to extract and transform a batch of data. As shown in Fig. 9, our pipeline tuner achieves a lower variance in latency.

Layout transformation and operator fusion combined provide an additional 8% improvement by increasing accelerator utilization. Since both optimize at the kernel level, they could be combined into one pass by integrating layout-awareness into XLA. We also believe further gains are possible with more aggressive layout transformations on intermediate results, though these might affect convergence; we leave this as future work.
## E. Generating High-Resolution Images

To our knowledge, we are the first to successfully train BigGAN at $1024 \times 1024$ resolution, which is $4 \times$ larger than the original BigGAN. Training at high resolution is particularly hard because the generator needs more channels and deconvolutional layers to generate fine details, making it more sensitive to hyperparameters and initialization. Unlike ProgressiveGAN [11], which uses progressive growing to train on low-resolution images before increasing the resolution, we train directly at $1024 \times 1024$ resolution, which is more challenging and requires the numerical optimization techniques we discussed.

Fig. 10: Output of BigGAN at $1024 \times 1024$ resolution. Best viewed in colour.

The generated results achieve an Inception Score (IS) [19] of 239.3 and a Fréchet Inception Distance (FID) of 10.6; samples are presented in Fig. 10 for visual evaluation.
## F. Cost Effectiveness Analysis

Cost effectiveness is an important aspect of cloud training. In this section, we consider the dollar cost of training BigGAN to convergence on different accelerator backends.

For a GPU-based virtual machine on cloud provider G, training to convergence takes 8 V100 GPUs for 15 days, which costs about USD 15K. For the TPU, the 1024-accelerator instance costs USD 1024 per hour; since ParaGAN trains in 14 hours, the run costs about USD 14.3K (see the calculation below). Although the training cost is comparable to the GPU instance, the training time shrinks to half a day, which translates into improved productivity.
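The TPU figure follows from simple arithmetic on the quoted hourly rate:

$$
14 \ \text{hours} \times 1024 \ \frac{\text{USD}}{\text{hour}} = 14{,}336 \ \text{USD} \approx \text{USD } 14.3\text{K}.
$$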
## VI. DISCUSSION AND FUTURE WORK

ParaGAN is a large-scale distributed GAN training framework that supports high-resolution image generation with near-linear scalability. ParaGAN is optimized with an adaptive data pipeline, hardware-aware layout transformation, and an asynchronous update scheme for high throughput. To stabilize the training of high-resolution GANs, ParaGAN also implements an asymmetric optimization policy.

We hope ParaGAN will advance GAN research by accelerating the training process. ParaGAN scales almost optimally to 1024 accelerators and can reduce the time to train a GAN model from weeks to hours. We leave evaluating different GAN and diffusion model architectures on ParaGAN as future work.

## REFERENCES
[1] D. Balduzzi, S. Racaniere, J. Martens, J. Foerster, K. Tuyls, and T. Graepel, "The mechanics of n-player differentiable games," in International Conference on Machine Learning. PMLR, 2018, pp. 354-363.

[2] A. Brock, J. Donahue, and K. Simonyan, "Large scale GAN training for high fidelity natural image synthesis," 7th International Conference on Learning Representations, ICLR 2019, 2019. [Online]. Available: http://arxiv.org/abs/1809.11096

[3] M. Contributors, "MMGeneration: OpenMMLab generative model toolbox and benchmark," https://github.com/open-mmlab/mmgeneration, 2021.

[4] C. Daskalakis and I. Panageas, "The limit points of (optimistic) gradient descent in min-max optimization," Advances in Neural Information Processing Systems, vol. 31, 2018.

[5] U. Demir and G. Unal, "Patch-based image inpainting with generative adversarial networks," arXiv preprint arXiv:1803.07422, 2018.

[6] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," arXiv preprint arXiv:1406.2661, 2014.

[7] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1125-1134.

[8] C. Jin, P. Netrapalli, and M. Jordan, "What is local optimality in nonconvex-nonconcave minimax optimization?" in International Conference on Machine Learning. PMLR, 2020, pp. 4880-4889.

[9] M. Kang and J. Park, "ContraGAN: Contrastive learning for conditional image generation," Tech. Rep., 2020. [Online]. Available: http://arxiv.org/abs/2006.12681

[10] M. Kang and J. Park, "ContraGAN: Contrastive learning for conditional image generation," 2020.

[11] T. Karras, T. Aila, S. Laine, and J. Lehtinen, "Progressive growing of GANs for improved quality, stability, and variation," arXiv preprint arXiv:1710.10196, 2017.

[12] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang et al., "Photo-realistic single image super-resolution using a generative adversarial network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4681-4690.

[13] K. S. Lee and C. Town, "Mimicry: Towards the reproducibility of GAN research," 2020.

[14] M. Lucic, K. Kurach, M. Michalski, O. Bousquet, and S. Gelly, "Are GANs created equal? A large-scale study," Tech. Rep., 2018.

[15] L. Mescheder, S. Nowozin, and A. Geiger, "The numerics of GANs," Tech. Rep., 2017. [Online]. Available: https://github.com/LMescheder/

[16] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, "Spectral normalization for generative adversarial networks," 2018.

[17] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," arXiv preprint arXiv:1511.06434, 2015.

[18] A. Sabne, "XLA: Compiling machine learning for peak performance," 2020.

[19] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved techniques for training GANs," arXiv preprint arXiv:1606.03498, 2016.

[20] A. Yazdanbakhsh, K. Samadi, N. S. Kim, and H. Esmaeilzadeh, "GANAX: A unified MIMD-SIMD acceleration for generative adversarial networks," in 2018 ACM/IEEE 45th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2018, pp. 650-661.

[21] J. Yu, Z. Lin, J. Yang, X. Shen, X. Lu, and T. S. Huang, "Generative image inpainting with contextual attention," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 5505-5514.

[22] H. Zhang, I. Goodfellow, D. Metaxas, and A. Odena, "Self-attention generative adversarial networks," 2019.

[23] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 2223-2232.
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/gb6VM_pTd5E/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ PARAGAN: A CLOUD TRAINING FRAMEWORK FOR GENERATIVE ADVERSARIAL NETWORKS
|
| 2 |
+
|
| 3 |
+
ISCA 2023 Submission #6 - Confidential Draft - Do NOT Distribute!!
|
| 4 |
+
|
| 5 |
+
Abstract—Generative Adversarial Networks (GANs) have shown tremendous success in synthesizing realistic photos and videos in recent years. However, training a GAN to convergence is still a challenging task that requires significant computing power and is subject to training instability. To address these challenges, we propose ParaGAN, a cloud training framework for GANs optimized from both the system and the numerical perspective. ParaGAN implements a congestion-aware pipeline for latency hiding, a hardware-aware layout transformation for improved accelerator utilization, and an asynchronous update scheme to optimize system performance. From the numerical perspective, we introduce an asymmetric optimization policy to stabilize training. Our preliminary experiments show that ParaGAN reduces the training time of BigGAN from 15 days to just 14 hours on 1024 TPUs, achieving 91% scaling efficiency. Moreover, we demonstrate that ParaGAN enables the generation of images at an unprecedented $1024 \times 1024$ resolution with BigGAN.
|
| 6 |
+
|
| 7 |
+
§ I. INTRODUCTION
|
| 8 |
+
|
| 9 |
+
The last decade has witnessed the success of Generative Adversarial Networks [6], which have a wide range of applications including image super-resolution [12], image translation [7], [23], and photo inpainting [5], [21]. However, training GANs at scale remains challenging because of their computational demands and optimization difficulties. Unlike Convolutional Neural Networks (CNNs) or Transformer-based architectures, where optimization amounts to gradient descent on a single model, a GAN contains two sub-networks to optimize: the generator and the discriminator. The generator samples from a noise distribution and produces fake samples that are as close to real samples as possible, while the discriminator evaluates the generated samples. The generator aims to fool the discriminator, and the discriminator tries to tell fake images from real ones. Because the two components optimize contradicting goals, GANs are notoriously difficult to converge. Therefore, to speed up GAN training at large scale, we need a framework optimized from both the system and the numerical perspective.
|
| 10 |
+
|
| 11 |
+
Due to the difficulty of optimizing GANs, many state-of-the-art GAN models take days or even weeks to train. For instance, BigGAN [2] took 15 days on 8 V100 GPUs to train for 150k steps. Table I summarizes the reported training times of several state-of-the-art GAN models. This makes it difficult to quickly reproduce, evaluate, and iterate on GAN experiments. Moreover, current GAN frameworks usually support training with only a few nodes.
|
| 12 |
+
|
| 13 |
+
We argue that training speed is an important yet often ignored factor in the current GAN training landscape, and we propose to accelerate it with distributed training. However, distributed GAN training faces several challenges. First, most data centers separate storage nodes from compute nodes for elasticity, but network congestion occurs from time to time, which prolongs the latency between nodes and hurts training throughput. Second, data centers usually host several types of accelerators, each with its own optimal hardware characteristics; ignoring these differences leads to under-utilization of the accelerators. Last but not least, training GANs at scale can cause convergence problems, in which the GAN loss never settles into a stable equilibrium. A training framework therefore has to address both the system and the numerical perspective.
|
| 14 |
+
|
| 15 |
+
|
| 16 |
+
|
| 17 |
+
Fig. 1: ParaGAN scales to 1024 TPU accelerators at 91% scaling efficiency.
|
| 18 |
+
|
| 19 |
+
TABLE I: Training time and number of parameters for GANs trained on the ImageNet 2012 dataset.
|
| 20 |
+
|
| 21 |
+
Model               Accelerator   Training Time   #Params

SNGAN [16]          8 × V100      3 d 13.6 h      81.44M

SAGAN [22]          8 × V100      10 d 18.7 h     81.47M

BigGAN [2]          8 × V100      15 d            158.42M

ContraGAN [9]       8 × V100      5 d 3.5 h       160.78M

ProgressiveGAN [11] 8 × V100      4 d             43.2M
|
| 41 |
+
|
| 42 |
+
In this work, we present ParaGAN, a distributed training framework that supports large-scale distributed training of high-resolution GANs. We identify the performance bottlenecks of training at scale and optimize them for efficiency. To stabilize the training process, ParaGAN introduces an asynchronous update scheme and an asymmetric optimization policy. ParaGAN offers a simple interface for building new GAN architectures, and it supports CPUs, GPUs, and TPUs. The main contributions of ParaGAN include:
|
| 43 |
+
|
| 44 |
+
|
| 45 |
+
|
| 46 |
+
Fig. 2: Typical GAN architecture.
|
| 47 |
+
|
| 48 |
+
* We design and implement a scalable distributed training framework for GANs with optimizations from both the system and the numerical perspective. With ParaGAN, the training time of BigGAN can be shortened from 15 days to 14 hours on 1024 TPU accelerators at 91% scaling efficiency, as shown in Fig. 1. ParaGAN also enables direct photo-realistic image generation at an unprecedented $1024 \times 1024$ resolution, which is $4\times$ higher than the original BigGAN model.
|
| 49 |
+
|
| 50 |
+
* From the system perspective, we use a congestion-aware data pipeline and hardware-aware layout transformation to improve the accelerator utilization.
|
| 51 |
+
|
| 52 |
+
* From the numerical perspective, to improve the convergence for distributed GAN training, we present an asynchronous update scheme and asymmetric optimization policy.
|
| 53 |
+
|
| 54 |
+
§ II. BACKGROUND
|
| 55 |
+
|
| 56 |
+
As shown in Fig. 2, a GAN consists of a generator and a discriminator. The generator produces fake data samples, while the discriminator distinguishes the generated samples from real samples as accurately as possible. Learning a GAN is a minimax optimization problem whose goal is to reach the equilibrium of a two-player game:
|
| 57 |
+
|
| 58 |
+
$\min_{G} \max_{D} \; \mathbb{E}_{x \sim q_{\text{data}}(x)}\left[\log D(x)\right] + \mathbb{E}_{z \sim p(z)}\left[\log\left(1 - D(G(z))\right)\right]$
|
| 59 |
+
|
| 60 |
+
where $z \in \mathbb{R}^{d_z}$ is a latent variable drawn from a distribution $p(z)$. The discriminator seeks to maximize the sum of the log-probabilities of correctly predicting real and fake samples, while the generator tries to minimize it. Formally, the convergence of a GAN is defined as a type of Nash equilibrium: neither network can improve its loss by changing its own parameters while the other is held fixed.
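To make the objective concrete, here is a toy evaluation of the minimax value for one batch; this is a sketch with made-up discriminator outputs, not code from ParaGAN:

```python
import numpy as np

d_real = np.array([0.90, 0.80, 0.95])  # D(x): probabilities on real samples
d_fake = np.array([0.20, 0.10, 0.30])  # D(G(z)): probabilities on fakes

# The minimax objective above: D ascends this value, G descends it.
# It approaches 0 for a perfect discriminator and -2*log(2) at the
# equilibrium where D outputs 0.5 everywhere.
value = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
print(value)  # about -0.35 here, i.e. D is currently winning
```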
|
| 61 |
+
|
| 62 |
+
Since the two networks have contradicting goals, the training process of a GAN is a zero-sum game and can be very unstable. Recent work shows that i) under gradient descent, GANs may converge to points that are not local minimax, in particular in non-convex games, which are common [4], [8], and ii) gradient descent on GANs exhibits strong rotation around fixed points, which requires very small learning rates [1], [15]. GAN training is also sensitive to hyperparameters and initialization [14]. GANs are therefore difficult to optimize, which is also why they take so long to train.
|
| 63 |
+
|
| 64 |
+
There are existing GAN libraries [3], [10], [13], [14] for training state-of-the-art GANs. They provide standardized building blocks such as network backbones and evaluation metrics, making it easy to build new models. However, they focus less on system performance, and training a GAN still takes days if not weeks. [20] proposed a GAN-optimized hardware accelerator, but we aim to build a system that runs on the public cloud using commodity accelerators. If the training process can be massively parallelized, the GAN community will benefit from it.
|
| 65 |
+
|
| 66 |
+
In ParaGAN, we adopt a co-designed approach: on the system level, we identify that the performance bottlenecks are rooted in network congestion and low accelerator utilization when training on the cloud, and ParaGAN implements a congestion-aware data pipeline and hardware-aware layout transformation to mitigate the issues; on the optimization level, we observe that it is beneficial to decouple the training of generator and discriminator, and ParaGAN proposes an asynchronous update scheme and an asymmetric optimization policy.
|
| 67 |
+
|
| 68 |
+
§ III. DESIGN AND PROTOTYPICAL IMPLEMENTATIONS
|
| 69 |
+
|
| 70 |
+
In this section, we give an overview of ParaGAN and discuss its design decisions. We recognize that scalability is usually limited by the latency between nodes. Furthermore, when scaling up GAN training, numerical instability occurs more often. We therefore divide the following discussion into two parts and present our co-designed approach to system throughput and training stability.
|
| 71 |
+
|
| 72 |
+
§ A. PROGRAMMING MODEL
|
| 73 |
+
|
| 74 |
+
The design of ParaGAN is presented in Fig. 3. ParaGAN (blue region) is implemented on top of TensorFlow (green region), because TensorFlow provides the low-level APIs for model checkpointing, evaluation, and visualization. Unlike TensorFlow, we provide high-level APIs for GANs, including a scaling manager, evaluation metrics, and common network backbones. Users of ParaGAN can import from ParaGAN or define their own components. ParaGAN then performs layout transformation and invokes TensorFlow, which converts the model definition into a computational graph. An optional XLA [18] compilation pass can follow. After that, training starts on the CPU hosts and accelerators.
|
| 75 |
+
|
| 76 |
+
Listing 1: Interface of ParaGAN
|
| 77 |
+
|
| 78 |
+
```python
import paragan as pg

class Generator:
    def model_fn(self, latent_var, y):
        # generator model (body elided in the paper)
        ...
        return output

class Discriminator:
    def model_fn(self, x, y):
        # discriminator model (body elided in the paper)
        ...
        return output, out_logit

scale_mgr = pg.ScalingManager(config=cfg, bs=2048, num_workers=128)

g = Generator()
d = Discriminator()
gan = pg.Estimator(g, d)

# train
for step in range(cfg.max_steps):
    scale_mgr.train(gan)

# evaluate
scale_mgr.eval(metric='fid')
```

Fig. 3: Overview of ParaGAN architecture.
|
| 119 |
+
|
| 120 |
+
We introduce a few concepts in ParaGAN:
|
| 121 |
+
|
| 122 |
+
1) Scaling Manager: The scaling manager is responsible for tuning the hyper-parameters that need adjustment during scaling. Users can start with the best hyper-parameters from a single worker, and ParaGAN scales them according to the number of workers using heuristics (e.g., linear scaling, cosine scaling), as sketched below.
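A minimal sketch of such a heuristic, assuming a learning-rate-like hyper-parameter; the function name and the policy set are ours for illustration, not ParaGAN's actual interface:

```python
import math

def scale_hparam(base_value, num_workers, policy="linear"):
    """Scale a hyper-parameter tuned on one worker to num_workers workers."""
    if policy == "linear":  # classic linear scaling rule
        return base_value * num_workers
    if policy == "sqrt":    # a gentler alternative for very large clusters
        return base_value * math.sqrt(num_workers)
    raise ValueError(f"unknown policy: {policy}")

lr = scale_hparam(1e-4, num_workers=128)  # 1e-4 tuned on a single worker
```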
|
| 123 |
+
|
| 124 |
+
2) Network Backbones: It is common to start by building upon existing GAN architectures. We therefore provide several popular GAN architectures as backbones, including but not limited to:
|
| 125 |
+
|
| 126 |
+
* BigGAN [2];
|
| 127 |
+
|
| 128 |
+
* Deep Convolutional GAN (DCGAN) [17];
|
| 129 |
+
|
| 130 |
+
* Spectral Normalization GAN (SNGAN) [16].
|
| 131 |
+
|
| 132 |
+
3) Evaluation Metrics: Evaluation metrics can be implemented differently across papers, which causes inconsistency. We provide commonly used evaluation metrics, including the Fréchet Inception Distance (FID) and the Inception Score (IS).
|
| 133 |
+
|
| 134 |
+
§ B. SYSTEM OPTIMIZATIONS
|
| 135 |
+
|
| 136 |
+
To satisfy the scalability requirement, we design ParaGAN with optimizations on I/O and computation.
|
| 137 |
+
|
| 138 |
+
We optimize I/O performance with a congestion-aware data pipeline. In data centers, compute and storage nodes are usually interconnected via Ethernet instead of high-speed InfiniBand, and the network traffic between them is not always stable because the infrastructure is shared with other tenants. This causes problems as training scales, since latency fluctuates more as the number of workers increases. We therefore implement a congestion-aware data pipeline to reduce the impact of network jitter.
|
| 139 |
+
|
| 140 |
+
To achieve higher accelerator utilization, we perform hardware-aware layout transformation. A data center usually hosts multiple types of accelerators, and different accelerators have different architectures and preferred data layouts. For example, NVIDIA A100 GPUs prefer half-precision data in multiples of 64 and single-precision data in multiples of 32, while previous generations prefer multiples of 8. For TPU v3, the preferred data dimension is a multiple of 128. Using the preferred data layout increases accelerator utilization, but it is usually up to the user to arrange it. We therefore apply a hardware-aware layout transformation that converts data into an accelerator-friendly format to maximize utilization.
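The core of this transformation is rounding tensor dimensions up to the hardware-preferred multiple; a minimal sketch (the helper function is ours, not ParaGAN's API):

```python
def pad_to_multiple(dim, multiple):
    """Round a tensor dimension up to the nearest preferred multiple."""
    return ((dim + multiple - 1) // multiple) * multiple

assert pad_to_multiple(100, 128) == 128  # TPU v3: multiples of 128
assert pad_to_multiple(130, 128) == 256
assert pad_to_multiple(48, 64) == 64     # A100 half precision: multiples of 64
```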
|
| 141 |
+
|
| 142 |
+
§ C. NUMERICAL OPTIMIZATIONS
|
| 143 |
+
|
| 144 |
+
One of the main contributions of ParaGAN is its use of asymmetric training to improve the stability of GANs. As the number of workers increases, a larger batch size can be used to speed up training. However, we have observed that large-batch GAN training is often unstable, and mode collapse occurs frequently; mode collapse is a GAN failure mode rooted in the tightly coupled optimization of the two networks. To address this problem, ParaGAN introduces an asymmetric optimization policy and an asynchronous update scheme, which decouple the optimization process and help prevent mode collapse.
|
| 145 |
+
|
| 146 |
+
§ IV. IMPLEMENTATION
|
| 147 |
+
|
| 148 |
+
To start, we profiled BigGAN training with native TensorFlow [14]; the results are shown in Fig. 4. As we scaled the cluster from 8 to 1024 TPU workers, we observed a significant increase in idle time due to higher communication overhead. Nevertheless, convolution operations continued to take up the majority of the execution time, which suggests that GAN training is a compute-bound task. Therefore, our focus for achieving scalability in ParaGAN is on maximizing the utilization of accelerators.
|
| 149 |
+
|
| 150 |
+
|
| 151 |
+
|
| 152 |
+
Fig. 4: Operator usage profile when training at scale.
|
| 153 |
+
|
| 154 |
+
To achieve this goal, we use congestion-aware data pipelining to reduce data-pipeline latency, hardware-aware layout transformation to increase accelerator utilization, and bfloat16 mixed-precision training to reduce memory usage.
|
| 155 |
+
|
| 156 |
+
§ A. CONGESTION-AWARE DATA PIPELINING
|
| 157 |
+
|
| 158 |
+
Network jitter can significantly impact training throughput because of the gradient synchronization stage: all workers synchronize gradients at the end of each step, and the time to complete this stage is determined by the slowest worker.
|
| 159 |
+
|
| 160 |
+
Although both TensorFlow and PyTorch implement data pipelines to hide data-loading latency, when severe network jitter happens, data loading and pre-processing take much longer than usual and can become the bottleneck in large-scale distributed training. As shown in Fig. 4, when the number of workers scales from 8 to 1024, 13.6% more time is spent idling, while the data outfeed time stays roughly constant. This indicates that the accelerators are busy waiting for data infeed and gradient synchronization, which reduces utilization.
|
| 161 |
+
|
| 162 |
+
ParaGAN dynamically adjusts the number of processes and the pre-processing buffer size in response to network variance. It uses a sliding window to monitor network latency at runtime. If the latency over the window exceeds a threshold $\lambda$, the system increases the number of threads and the buffers for pre-fetching and pre-processing. Once the latency falls back below $\lambda$, the system releases the pre-processing resources. This may increase shared-memory usage, but shared memory is typically not a bottleneck and is often underutilized. The control loop is sketched below.
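A minimal sketch of this control loop, assuming a pipeline object that exposes grow/shrink hooks; every name here is illustrative rather than ParaGAN's real interface:

```python
from collections import deque

class PipelineTuner:
    def __init__(self, window=100, lam_ms=50.0):
        self.latencies = deque(maxlen=window)  # sliding window of latencies
        self.lam_ms = lam_ms                   # congestion threshold lambda

    def observe(self, latency_ms, pipeline):
        self.latencies.append(latency_ms)
        avg = sum(self.latencies) / len(self.latencies)
        if avg > self.lam_ms:
            pipeline.grow()    # hypothetical hook: add prefetch threads, enlarge buffers
        else:
            pipeline.shrink()  # hypothetical hook: release pre-processing resources
```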
|
| 163 |
+
|
| 164 |
+
§ B. HARDWARE-AWARE LAYOUT TRANSFORMATION
|
| 165 |
+
|
| 166 |
+
Zero-padding is used in GANs when the input cannot fill the specified convolution dimension. For example, a $100 \times 100$ matrix needs 14 zeros padded on each side to run on a $128 \times 128$ matrix unit. However, zero-padding hinders accelerator performance because memory is wasted on padding, leading to lower accelerator and memory utilization.
|
| 167 |
+
|
| 168 |
+
We implement ParaGAN such that both the batch size and the feature dimensions are multiples of 128 whenever suitable. In NCHW (batch size × number of channels × height × width) format, ParaGAN arranges N/H/W to be multiples of 128 on the host side so that accelerator memory can be utilized efficiently.
|
| 169 |
+
|
| 170 |
+
On top of the feature dimensions, ParaGAN also seeks opportunities to batch data, combining intermediate results into a multiple of the optimal layout dimension without affecting the results. Such opportunities arise at reshape and matmul operators. For instance, if two input matrices are multiplied by the same weight, we can concatenate the two inputs before the matrix multiplication, as sketched below. In some sense this is similar to operator fusion, but the key difference is that ParaGAN's layout transformation depends on the hardware, so that the fused result conforms to the optimal layout.
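A small NumPy sketch of this batching opportunity (the shapes are ours for illustration): concatenating first turns two under-sized matmuls into one whose leading dimension hits a preferred multiple of 128, and the results are identical:

```python
import numpy as np

a = np.random.randn(64, 256)  # two activations multiplying the same weight
b = np.random.randn(64, 256)
w = np.random.randn(256, 512)

fused = np.concatenate([a, b], axis=0) @ w         # one (128 x 256) matmul
separate = np.concatenate([a @ w, b @ w], axis=0)  # two (64 x 256) matmuls
assert np.allclose(fused, separate)                # same result, better layout
```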
|
| 171 |
+
|
| 172 |
+
§ V. PRELIMINARY EVALUATION
|
| 173 |
+
|
| 174 |
+
In this section, we aim to answer the following questions: 1) how does the performance of ParaGAN compare to other frameworks? 2) how much does each part of the system contribute to overall performance? And 3) what are the effects of the numerical optimizations on convergence?
|
| 175 |
+
|
| 176 |
+
We first evaluate the end-to-end performance of ParaGAN using three metrics:
|
| 177 |
+
|
| 178 |
+
* steps per second measures the number of steps ParaGAN can train per second;
|
| 179 |
+
|
| 180 |
+
* images per second measures the throughput of ParaGAN trained on the ImageNet 2012 dataset;
|
| 181 |
+
|
| 182 |
+
* time to solution measures the time it takes to reach 150k steps on ImageNet at $128 \times 128$ resolution.
|
| 183 |
+
|
| 184 |
+
We compare ParaGAN with other popular frameworks on end-to-end performance (Sec. V-B) and evaluate its scaling efficiency (Sec. V-C).
|
| 185 |
+
|
| 186 |
+
§ A. EXPERIMENT SETUP
|
| 187 |
+
|
| 188 |
+
We choose BigGAN on the ImageNet ILSVRC 2012 dataset as the benchmark: BigGAN has had a profound impact on high-resolution image generation, and it has high computational requirements (Table I). ImageNet, in turn, contains a good variety of classes (1,000) and is usually challenging to train on. For the hardware backend, we first compare the performance of different backends and then settle on TPUs for accelerator-availability reasons.
|
| 189 |
+
|
| 190 |
+
While we use BigGAN to benchmark ParaGAN, our framework is generally applicable to other GAN architectures and datasets, and it is not tightly coupled to any specific accelerator backend.
|
| 191 |
+
|
| 192 |
+
§ B. FRAMEWORK-LEVEL EXPERIMENTS
|
| 193 |
+
|
| 194 |
+
In Figure 5, we compare ParaGAN with StudioGAN [10] and native TensorFlow [14] in terms of GPU performance. In each experiment, we train BigGAN on ImageNet at a resolution of $128 \times 128$. We use eight Tesla V100 GPUs for all settings except ParaGAN-8TPU.
|
| 195 |
+
|
| 196 |
+
|
| 197 |
+
|
| 198 |
+
Fig. 5: Throughput of different systems and hardware combinations.
|
| 199 |
+
|
| 200 |
+
We observe that ParaGAN outperforms both native TensorFlow and StudioGAN with 8 GPUs. We conjecture that the performance gain in the GPU setting is mainly attributable to the congestion-aware data pipeline and hardware-aware layout transformations. The performance gap is even more pronounced when switching to TPUs as the accelerator. For availability reasons, the following sections focus mainly on TPUs.
|
| 201 |
+
|
| 202 |
+
§ C. SCALING EXPERIMENTS
|
| 203 |
+
|
| 204 |
+
We discuss the strong and weak scaling results in this section. In the strong scaling experiments, we keep the total workload constant and vary the number of workers to examine the speedup in time-to-solution. In the weak scaling experiments, we keep the per-worker workload (batch size per worker) constant and increase the number of workers.
|
| 205 |
+
|
| 206 |
+
1) Strong Scaling: For the strong scaling experiments, we fix the total batch size at 512 and train for 150k steps as the target workload. Note that, for consistency with the other experiments, we train BigGAN at $128 \times 128$ resolution, which is smaller than the model trained in Fig. 1. We aim to study the effect of decreased per-worker workload when scaling.
|
| 207 |
+
|
| 208 |
+
As can be seen in Fig. 6, with an increasing number of workers, the time to solution decreases from over 30 hours to 3 hours. We note that the scaling efficiency drops from 128 to 512 workers (64 to 256 TPU chips): with the global batch size fixed at 512, the per-worker workload drops from 4 samples to 1 sample per batch, which under-utilizes the TPU. The time spent on communication then outweighs the computation when the batch size is too small. This is also verified by Fig. 6, where images per second barely improve with more accelerator workers. However, when the workload saturates the accelerators, the scaling efficiency can be near-optimal, as shown in Fig. 1.
|
| 209 |
+
|
| 210 |
+
2) Weak Scaling: In the weak scaling experiments, we fix the batch size per worker and evaluate performance as the number of workers increases. We first find the largest batch size that fits on a single accelerator without out-of-memory errors, and then use that batch size for every worker, so the workload is identical across workers. The weak scaling experiments examine how well ParaGAN handles communication with an increasing number of workers. As Fig. 7 shows, the steps-per-second trend is relatively steady even with 1024 workers, showing that ParaGAN scales out to a large number of workers while keeping high scaling efficiency. It is worth noting that as the number of workers grows, the system is more likely to suffer from network jitter and congestion; the relatively flat curve (Fig. 7a) indicates that ParaGAN's data pipeline optimization is effective under congestion.
|
| 211 |
+
|
| 212 |
+
|
| 213 |
+
|
| 214 |
+
Fig. 6: Strong scaling with ParaGAN. Each TPU chip has two accelerators.
|
| 215 |
+
|
| 216 |
+
|
| 217 |
+
|
| 218 |
+
Fig. 7: Weak scaling with ParaGAN.
|
| 219 |
+
|
| 220 |
+
§ D. ACCELERATOR UTILIZATION
|
| 221 |
+
|
| 222 |
+
The basic computing unit of a TPU is the MXU (matrix multiply unit), and higher MXU utilization is desirable. We compare the accelerator utilization of BigGAN at $128 \times 128$ between the baseline [14] and ParaGAN. Fig. 8 shows that ParaGAN clearly outperforms the native implementation, with higher MXU utilization across TPU configurations. We wish to highlight that even a 2% improvement can matter when scaling to thousands of workers.
|
| 223 |
+
|
| 224 |
+
It is also worth noting that as the number of accelerators increases, the amount of communication grows, yet ParaGAN maintains a higher utilization than the native implementation, and the gap widens. This indicates that, compared to native TensorFlow, computation still dominates the training time under ParaGAN as it scales out.
|
| 225 |
+
|
| 226 |
+
The data pipeline provides an 8-15% performance improvement over the baseline. As the number of accelerators increases, network jitter caused by congestion becomes more likely, making data loading the slowest link in the training process. In ParaGAN, we keep the accelerators saturated by dynamically adjusting the buffer/CPU budget of the data pipeline. This optimization is generally applicable, and ParaGAN enables it by default.
|
| 227 |
+
|
| 228 |
+
|
| 229 |
+
|
| 230 |
+
Fig. 8: Accelerator utilization of BigGAN trained with native TensorFlow and ParaGAN.
|
| 231 |
+
|
| 232 |
+
|
| 233 |
+
|
| 234 |
+
Fig. 9: Data pipeline latency.
|
| 235 |
+
|
| 236 |
+
We compare the performance of our congestion-aware pipeline with TensorFlow's implementation. To make the results comparable, both are run at the same time on the same type of machine with the same dataset directory, and latency is measured as the time taken to extract and transform a batch of data. As shown in Fig. 9, our pipeline tuner has a lower variance in latency.
|
| 237 |
+
|
| 238 |
+
Layout transformation and operator fusion combined provide an additional 8% improvement by increasing accelerator utilization. Since both optimize at the kernel level, it may be possible to combine them into one pass by integrating layout-awareness into XLA. More aggressive layout transformations on intermediate results might improve this further, but could affect convergence; we leave this as future work.
|
| 239 |
+
|
| 240 |
+
§ E. GENERATING HIGH-RESOLUTION IMAGES
|
| 241 |
+
|
| 242 |
+
To our knowledge, we are the first to successfully train BigGAN at $1024 \times 1024$ resolution, which is $4\times$ larger than the original BigGAN. Training at high resolution is particularly hard because the generator needs more channels and deconvolution layers to generate fine details, making it more sensitive to hyperparameters and initialization. Unlike ProgressiveGAN [11], which uses progressive growing to train on low-resolution images before increasing the resolution, we train directly at $1024 \times 1024$ resolution, which is more challenging and requires the numerical optimization techniques discussed above.
|
| 243 |
+
|
| 244 |
+
|
| 245 |
+
|
| 246 |
+
Fig. 10: Output of BigGAN at $1024 \times 1024$ resolution. Best viewed in colour.
|
| 247 |
+
|
| 248 |
+
The generated results achieve an Inception Score (IS) [19] of 239.3 and a Fréchet Inception Distance (FID) of 10.6. They are presented in Fig. 10 for visual evaluation.
|
| 249 |
+
|
| 250 |
+
§ F. COST EFFECTIVENESS ANALYSIS
|
| 251 |
+
|
| 252 |
+
Cost effectiveness is an important aspect of cloud training. In this section, we consider the dollar cost of training BigGAN to convergence on different accelerator backends.
|
| 253 |
+
|
| 254 |
+
On a GPU-based virtual machine from cloud provider G, training to convergence takes 8 V100 GPUs for 15 days, which costs about USD 15K. For TPUs, the 1024-accelerator instance costs USD 1024 per hour; since ParaGAN trains in 14 hours, it costs about USD 14.3K. Although the training cost is comparable to the GPU instance, reducing the training time to half a day translates into improved productivity.
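The TPU figure follows directly from the quoted hourly rate (a quick sanity check; the GPU figure is as reported):

```python
tpu_rate_usd_per_hour = 1024  # 1024-accelerator TPU instance
hours_to_converge = 14
print(tpu_rate_usd_per_hour * hours_to_converge)  # 14336, i.e. about USD 14.3K
```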
|
| 255 |
+
|
| 256 |
+
§ VI. DISCUSSION AND FUTURE WORK
|
| 257 |
+
|
| 258 |
+
ParaGAN is a large-scale distributed GAN training framework that supports high-resolution image generation with near-linear scalability. ParaGAN is optimized with an adaptive data pipeline, hardware-aware layout transformation, and an asynchronous update scheme for high throughput. To stabilize the training of high-resolution GANs, ParaGAN also implements an asymmetric optimization policy.
|
| 259 |
+
|
| 260 |
+
We hope ParaGAN will advance GAN research by accelerating the training process. ParaGAN scales almost optimally to 1024 accelerators and can reduce the time to train a GAN model from weeks to hours. We leave evaluating different GAN and diffusion-model architectures on ParaGAN as future work.
|
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/nfmfqzQ4Mwl/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,209 @@
| 1 |
+
# Efficient DNN Training with Mixed-Precision Block Floating Point
|
| 2 |
+
|
| 3 |
+
ISCA 2023 Submission #NaN - Confidential Draft - Do NOT Distribute!!
|
| 4 |
+
|
| 5 |
+
Abstract—The unprecedented growth in DNN model complexity, size, and amount of training data has led to a commensurate increase in demand for computing and a search for minimal encoding. Recent research advocates Hybrid Block Floating-Point (HBFP) as a technique that minimizes silicon provisioning in accelerators by converting the majority of arithmetic operations in training to 8-bit fixed point. In this paper, we perform a full-scale exploration of the HBFP design space, including minimal mantissa encoding, varying block sizes, and mixed mantissa bit-widths across layers and epochs. We propose Accuracy Boosters, an epoch-driven mixed-mantissa HBFP that uses 6-bit mantissas only in the last epoch and converts 99.7% of all arithmetic operations in training to 4-bit mantissas. Accuracy Boosters enable reducing the power consumption of an HBFP training accelerator by $38\times$ compared to FP32, while preserving or outperforming FP32 accuracy.
|
| 6 |
+
|
| 7 |
+
## I. INTRODUCTION
|
| 8 |
+
|
| 9 |
+
Over the past decade, improvements in Deep Neural Network (DNN) algorithms have led to unprecedented growth in model complexity and dataset size and, consequently, in the computational resources required to train DNN models. One of the largest DNN models, GPT-3 [2], has 175 billion parameters and requires $3.14 \times 10^{23}$ FLOPs to train. With the slowdown of Moore's law, researchers and vendors have begun to search for alternative ways to improve the arithmetic density of the underlying hardware platforms. Narrower bit-width (lower-precision) number formats [24], [25], [31], [32], [35] have emerged as a promising approach to increase arithmetic density and to reduce the required operand storage and communication bandwidth while maintaining high training accuracy.
|
| 10 |
+
|
| 11 |
+
Recently there have been several proposals for block floating point [7], [20], [38], a numerical encoding in which a block of mantissas shares a single exponent so that the block's arithmetic relies only on fixed-point operations. Block floating point asymptotically approaches the arithmetic density of fixed point with larger block sizes and naturally lends itself to mixed-precision hardware, where a block with the same number of exponent bits can use a fixed-point datapath bitsliced for various multiples of mantissa bit encodings (much as today's CPU cores implement SIMD). While block floating point has been promising for inference (e.g., Microsoft Floating Point [6]), most proposals to train with block floating point either fail to reach its full potential by requiring small blocks or fall just short of FP32 accuracy.
|
| 12 |
+
|
| 13 |
+
One specific proposal, Hybrid Block Floating Point (HBFP) [10], uses a mixed-precision format where the dot products that dominate training (e.g., convolutions, matrix multiplications) happen in block floating point, and FP32 is used for other, less frequent operations requiring larger numerical ranges (e.g., activations, regularizations). HBFP simultaneously offers the high accuracy of floating point and the superior hardware density of fixed point, delivering up to $8.5\times$ higher throughput than FP16 with $2\times$ more compact models [11]. Prior work on HBFP only presented a preliminary design-space analysis for power-of-two mantissa bit widths (e.g., 2, 4, 8 bits).
|
| 14 |
+
|
| 15 |
+
In this paper, we make the observation that the parameter space of HBFP is quite rich, presenting several opportunities to further improve efficiency and density in hardware platforms. First, custom accelerators can support non-power-of-two numerical formats, and minimizing the number of bits improves operand storage and communication linearly and arithmetic logic quadratically. Second, there is an interplay between the block size and the number of mantissa bits, allowing for an overall denser numerical format with smaller blocks while maintaining accuracy. Finally, HBFP allows for mixed-mantissa block floating point encodings. Prior work studies training with various HBFP formats in isolation; the design space of mixed-mantissa HBFP is yet to be explored.
|
| 16 |
+
|
| 17 |
+
We fully explore the parameter space of HBFP and show the boundaries of block floating point by studying the interplay between the block size and the number of mantissa bits. To the best of our knowledge, this is the first paper to conduct a full design-space exploration for training DNNs with block floating point. We show that HBFP6 (HBFP with 6 bits of mantissa) is the smallest HBFP format achieving competitive accuracies with no sensitivity to block size. Our main contribution is the design of Accuracy Boosters, a DNN training mechanism that performs a large fraction of the epochs in low precision, i.e., HBFP4. Our method improves epoch-wise mixed-precision training by introducing high precision, i.e., HBFP6, only in the last epoch. Accuracy Boosters enable reducing power consumption by up to $38\times$ compared to FP32, while preserving or outperforming FP32 accuracy.
|
| 18 |
+
|
| 19 |
+
## II. HBFP PARAMETER SPACE
|
| 20 |
+
|
| 21 |
+
HBFP is a mixed-precision DNN training technique that uses block floating point for all dot product operations and FP32 for the remaining operations, enabling accurate training with dense fixed-point arithmetic. We observe that HBFP is also suitable for inference of popular CNN and Transformer models without accuracy loss, in line with prior work on inference with block floating point [6], showing that HBFP is a versatile technique for both training and inference. Prior work on HBFP shows that the area and energy expenditure of HBFP8 is around an order of magnitude lower than bfloat16 [11]. Exploring the parameter space of HBFP and pushing its boundaries can increase this ratio dramatically.
|
| 22 |
+
|
| 23 |
+
HBFP has a rich parameter space, including the number of mantissa bits, the block size, and the number of exponent bits. The hardware area and energy expenditure of HBFP accelerators are determined by the number of mantissa bits and the block size, because the overhead of the exponent bits is negligible due to blocking${}^{1}$. One key advantage of HBFP is that we can conservatively find a lower bound on the number of exponent bits that covers the entire design-space exploration over block size and mantissa bits. We therefore work with 10-bit exponents as in prior work [10] and explore the HBFP design space by varying the mantissa bit width and the block size. Once the number of exponent bits is fixed, we can vary the other parameters, which enables a reconfigurable microarchitecture and gives rise to mixed-mantissa HBFP.
|
| 24 |
+
|
| 25 |
+
Employing smaller mantissa bit widths and larger block sizes is key to improving block-floating-point hardware efficiency, due to the increasing fraction of fixed-point operations [6]. There is an interplay between the number of mantissa bits and the block size, allowing for an overall denser numerical format with smaller blocks while maintaining accuracy. This interplay results from how the block floating point conversion works. Block floating point shares a single exponent across a block of values, using the exponent of the largest element. Since the block floating point format does not apply normalization (values are computed as $2^{\text{exponent}} \times 0.\text{mantissa}$ instead of $2^{\text{exponent}} \times 1.\text{mantissa}$), the precision within a block is highly dependent on the largest element in the block, which determines the exponent value. The interval between two consecutive representable numbers is given by Equation 1.
|
| 26 |
+
|
| 27 |
+
$$
|
| 28 |
+
\text{interval} = \frac{2^{\text{largest exponent}}}{2^{\#\text{ mantissa bits}}} \tag{1}
|
| 29 |
+
$$
|
| 30 |
+
|
| 31 |
+
As the number of elements sharing the same exponent increases, the likelihood of disparity in the magnitudes of the elements also increases, leading to precision loss for the small elements in the block. As the number of mantissa bits decreases, the model's sensitivity to the block size increases, with the corresponding increase in the interval leading to higher quantization error. More mantissa bits make the distribution more resilient to quantization error and to larger block sizes, as each element can be represented more accurately. The sketch below illustrates the conversion.
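A toy NumPy sketch of this conversion under the assumptions stated above (one shared exponent per block taken from the largest magnitude, unnormalized 0.mantissa encoding); the function is ours for illustration and ignores edge cases such as all-zero blocks:

```python
import numpy as np

def bfp_quantize(block, m):
    """Quantize a block to one shared exponent and m-bit mantissas (toy sketch)."""
    shared_exp = np.ceil(np.log2(np.abs(block).max()))  # exponent of the largest element
    interval = 2.0 ** shared_exp / 2 ** m               # Equation (1)
    return np.round(block / interval) * interval

x = np.array([0.75, 0.01, -0.30])
print(bfp_quantize(x, m=4))  # [0.75, 0.0, -0.3125]: the small element is wiped out
```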
|
| 32 |
+
|
| 33 |
+
The power expenditure of HBFP is not only a function of the HBFP parameters but also of how the format is used over the course of training. Mixed-precision training has emerged as a popular technique to increase the fraction of leaner arithmetic formats within the training process, motivating us to explore the design space of mixed-mantissa HBFP, since HBFP makes it possible to fix the exponent and vary the number of mantissa bits. This mixed-mantissa technique can be applied across layers and across epochs.
|
| 34 |
+
|
| 35 |
+
For CNN models, prior work indicates that the first convolution layer and the last fully connected layer have a larger impact on the final model accuracy, and keeping these layers in high precision allows reducing precision in the rest of the layers [3], [24], [34], [39]. The first layer takes the input images, filters them with several convolution kernels, and returns feature maps; it is therefore critical to keep the input information fully accurate and preserve the data in the initial feature maps. The last layer of a CNN returns a probability distribution over the possible outcomes for image classification, making it crucial to retain information in this layer as well.
|
| 36 |
+
|
| 37 |
+
In addition to layers, each training epoch has a different effect on the final model's accuracy [13], [14]. [28] and [36] show that DNNs first learn low-frequency components, where frequency is defined over the coordinates of the input space. [36] also shows empirically that for CNN models, the high-frequency components have higher complexity and are learned in the last epochs. In light of these findings, we hypothesize that high-frequency functional components are more sensitive to quantization errors. Thus, higher precision is required in the last stage of DNN training, where fine optimization occurs after the network has generalized sufficiently. After reaching a certain loss value in low-precision training, switching the tensors to high precision enables the sensitive fine-tuning performed in the final epochs and helps increase accuracy even further.
|
| 38 |
+
|
| 39 |
+
## III. Minimizing HBFP
|
| 40 |
+
|
| 41 |
+
Our goal is to minimize HBFP to increase the hardware efficiency of training without losing accuracy. For a block size of 576, even though HBFP4 yields a $2.4\times$ improvement in area/power relative to HBFP8, it lacks the precision to reach FP32 accuracy. Because prior work on HBFP [10], [11] only investigated power-of-two mantissa widths and focused mostly on the design space of HBFP8, the interplay between the number of mantissa bits and the block size was left unexplored. While power-of-two bit widths align naturally with the memory structure and encode matrices in a tightly packed way, non-power-of-two mantissas can improve arithmetic density even further, as studied by [6], and can be easily integrated into custom accelerators. We investigate the whole design space of HBFP by varying both parameters and claim that reducing the block size enables reducing the number of mantissa bits, thus improving hardware efficiency. In this section, we show how to minimize HBFP step by step, explain the limitations of HBFP, and propose a new mixed-precision scheme to minimize HBFP further.
|
| 42 |
+
|
| 43 |
+
To study the relationship between model accuracy and HBFP parameters, we measure the similarity between block-floating-point and FP32 distributions of various tensors using Wasserstein distance, mathematically defined as in Equation 2.
|
| 44 |
+
|
| 45 |
+
$$
|
| 46 |
+
W(P, Q) = \inf_{\gamma \in \Pi(P, Q)} \mathbb{E}_{(x, y) \sim \gamma}\left[\,\| x - y \|\,\right] \tag{2}
|
| 47 |
+
$$
|
| 48 |
+
|
| 49 |
+
where $\Pi(P, Q)$ is the set of all joint distributions $\gamma(x, y)$ whose marginals are $P$ and $Q$. $\gamma(x, y)$ can be interpreted as the amount of mass that must be transported from $x$ to $y$ to transform $P$ into $Q$ [1]. Unlike KL divergence, which is commonly used to compare quantized tensors to their full-precision counterparts [6], [26], the Wasserstein distance is symmetric and thus is mathematically a metric. Moreover, because DNNs often produce distributions for which KL divergence is undefined (or infinite), a noise term must be added to the model distribution before KL divergence can be used, which distorts the results.
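This comparison is easy to reproduce in miniature with SciPy's empirical Wasserstein distance, reusing the toy `bfp_quantize` sketched earlier (here the whole tensor is treated as one block, purely for illustration):

```python
import numpy as np
from scipy.stats import wasserstein_distance

w_fp32 = np.random.randn(4096)       # a stand-in weight tensor
w_hbfp6 = bfp_quantize(w_fp32, m=6)  # toy single-block quantization from above
w_hbfp4 = bfp_quantize(w_fp32, m=4)

print(wasserstein_distance(w_fp32, w_hbfp6))  # small: distribution preserved
print(wasserstein_distance(w_fp32, w_hbfp4))  # several times larger
```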
|
| 50 |
+
|
| 51 |
+
---
|
| 52 |
+
|
| 53 |
+
${}^{1}$ Even for a block size of 4, HBFP4 with a 5-bit exponent is only $1.03\times$ more area-efficient than HBFP4 with a 10-bit exponent.
|
| 54 |
+
|
| 55 |
+
---
|
| 56 |
+
|
| 57 |
+
We observe that the tensor distribution is preserved when the elements are converted to block floating point with 6 or more bits of mantissa for reasonably large block sizes${}^{2}$. Figure 1 shows Wasserstein distances between FP32 and HBFP6/HBFP4 with various block sizes for the weight tensors of four different layers of ResNet20 trained on CIFAR10. For all tensors, HBFP6 has a much smaller distance to FP32, and the distances are fairly close to each other for a given tensor across all block sizes. In contrast, the Wasserstein distance of HBFP4 is more than $3.5\times$ higher than that of HBFP6 across all block sizes, and the distances increase dramatically with the block size. Indeed, the R-squared values (measuring the strength of the relationship between two data sets) between the model accuracy and the various Wasserstein distances are around 0.99, validating the strength of our metric.
|
| 58 |
+
|
| 59 |
+

|
| 60 |
+
|
| 61 |
+
Fig. 1: Wasserstein distance between FP32 and HBFP with various block sizes for various layers.
|
| 62 |
+
|
| 63 |
+
Even though reducing the block size yields smaller Wasserstein distances and helps increase accuracy, HBFP4 still fails to reach FP32 accuracy: HBFP4 is good at generalization but lacks the precision to minimize the loss. [22] introduces a methodology to visualize loss landscapes in order to better understand their effect on generalization. Figure 2 shows log-scale loss landscapes for various configurations, sliced along the x-axis ($y = 0$) for simplicity. The center of each plot corresponds to the current state of the minimizer, and the two axes parameterize two random directions with filter-wise normalization. Comparing the curves for HBFP4 and HBFP6, the flatness of the HBFP4 curve indicates that it generalizes better. However, HBFP4 converges to a much worse local minimum than HBFP6 and FP32, thus failing to optimize.
|
| 64 |
+
|
| 65 |
+

|
| 66 |
+
|
| 67 |
+
Fig. 2: Loss landscapes of ResNet20 on CIFAR10 for various configurations, sliced along the x-axis.
|
| 68 |
+
|
| 69 |
+
Following the insights from prior work, we study the effect of the first and last layers of CNNs on model accuracy. The dotted and solid red lines in Figure 1 show the first and last layers, respectively; these layers are clearly the most affected by lowering the precision, especially for HBFP4. We therefore keep the first and last layers of CNNs in HBFP6 during HBFP4 training to increase accuracy. However, the extra precision HBFP6 provides for the first and last layers still does not yield enough optimization to reach FP32 accuracy. In Figure 2, the red dashed curve shows this configuration: the curve gets sharper and lower compared to HBFP4-only training, but the generalization and optimization power of the model remains unbalanced, leading to convergence to another bad local minimum.
|
| 70 |
+
|
| 71 |
+
We introduce Accuracy Boosters, an epoch-driven mixed-mantissa HBFP scheme that uses HBFP6 only in the last epoch and converts 99.7% of all arithmetic operations in training to HBFP4. We hypothesize that using HBFP6 for the last epoch is sufficient to boost accuracy, while the rest of the epochs are trained in HBFP4. We leverage the insight that the last epochs have more effect on the final model's accuracy [13], [14], [28], [36]. Training with 4-bit mantissas helps the model generalize and reach a certain loss value; switching to 6-bit mantissas then helps the model optimize and fine-tune in the final epochs, raising accuracy to the FP32 level. The loss landscape for Accuracy Boosters (the solid red curve in Figure 2) supports this hypothesis: the curve gets very close to the HBFP6 and FP32 curves (note that the plot is in log scale, so $-2$ is closer to $-4$ than to $0$) and finally achieves FP32 accuracy. The schedule is sketched below.
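A minimal sketch of the epoch-driven schedule; `set_mantissa_bits` and `train_one_epoch` (and `model`, `train_data`) are placeholders for whatever reconfiguration hooks the accelerator and framework expose, not an existing API:

```python
def booster_mantissa_bits(epoch, total_epochs, boosted_epochs=1):
    """HBFP4 for most of training; HBFP6 for the final boosted epoch(s)."""
    return 6 if epoch >= total_epochs - boosted_epochs else 4

for epoch in range(total_epochs):
    # placeholders: set_mantissa_bits / train_one_epoch / model / train_data
    set_mantissa_bits(model, booster_mantissa_bits(epoch, total_epochs))
    train_one_epoch(model, train_data)
```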
|
| 72 |
+
|
| 73 |
+
## IV. EXPERIMENTAL RESULTS
|
| 74 |
+
|
| 75 |
+
We experiment with state-of-the-art models and datasets for various DNN tasks to test our hypotheses. We train ResNet20/74 [15] and DenseNet40 [16] on the CIFAR10 and CIFAR100 [19] datasets for image classification. We also train a Transformer-Base [33] on the WMT16 English-German dataset for machine translation. Models on CIFAR10 are trained for 160 epochs, whereas on CIFAR100 all models are trained for 300 epochs. The Transformer is trained for 35 epochs, and Accuracy Boosters are applied in the last 1 and 5 epochs. We use FP32 as the baseline for both model accuracy and hardware comparisons.
|
| 76 |
+
|
| 77 |
+
---
|
| 78 |
+
|
| 79 |
+
${}^{2}$ Block sizes of up to 256 already achieve more than 95% of the maximum hardware benefit for HBFP6.
|
| 80 |
+
|
| 81 |
+
---
|
| 82 |
+
|
| 83 |
+
## A. Minimizing HBFP
|
| 84 |
+
|
| 85 |
+
Table I shows the Top-1 validation accuracies of ResNet20, ResNet74, and DenseNet40 on the CIFAR10 and CIFAR100 datasets trained with various HBFP configurations. We observe that HBFP6 is the smallest HBFP configuration that yields accuracies within 2% of FP32 for block sizes up to 256. Larger blocks contain a wider variety of magnitudes (affected, e.g., by outliers), so they result in larger approximation errors than smaller blocks and lower training accuracy.
|
| 86 |
+
|
| 87 |
+
TABLE I: Top-1 validation accuracies of various CNN models for various HBFP configurations
|
| 88 |
+
|
| 89 |
+
<table><tr><td rowspan="3">Number Format</td><td rowspan="3">Block Size</td><td colspan="4">Models and Datasets</td></tr><tr><td colspan="2">CIFAR10</td><td colspan="2">CIFAR100</td></tr><tr><td>ResNet20</td><td>ResNet74</td><td>ResNet74</td><td>DenseNet40</td></tr><tr><td>FP32</td><td>-</td><td>91.72</td><td>93.57</td><td>74.55</td><td>72.42</td></tr><tr><td>HBFP8</td><td>576</td><td>91.52</td><td>93.36</td><td>74.32</td><td>73.73</td></tr><tr><td rowspan="7">HBFP6</td><td>16</td><td>91.12</td><td>93.38</td><td>73.51</td><td>72.08</td></tr><tr><td>25</td><td>91.09</td><td>92.54</td><td>73.20</td><td>71.77</td></tr><tr><td>36</td><td>91.29</td><td>92.61</td><td>72.87</td><td>71.83</td></tr><tr><td>49</td><td>91.33</td><td>92.93</td><td>72.40</td><td>71.87</td></tr><tr><td>64</td><td>91.12</td><td>92.93</td><td>72.40</td><td>71.81</td></tr><tr><td>256</td><td>91.38</td><td>92.79</td><td>72.53</td><td>71.50</td></tr><tr><td>576</td><td>90.65</td><td>92.19</td><td>72.51</td><td>71.02</td></tr><tr><td rowspan="7">HBFP4</td><td>16</td><td>82.59</td><td>76.85</td><td>-</td><td>63.70</td></tr><tr><td>25</td><td>81.82</td><td>78.62</td><td>-</td><td>64.25</td></tr><tr><td>36</td><td>80.84</td><td>76.64</td><td>-</td><td>63.34</td></tr><tr><td>49</td><td>79.32</td><td>71.19</td><td>-</td><td>65.55</td></tr><tr><td>64</td><td>80.18</td><td>74.35</td><td>-</td><td>62.37</td></tr><tr><td>256</td><td>76.96</td><td>60.65</td><td>-</td><td>60.02</td></tr><tr><td>576</td><td>75.33</td><td>66.70</td><td>-</td><td>59.77</td></tr><tr><td colspan="2">Total Number of FLOPs required to train the model</td><td>41M</td><td>174M</td><td>326M</td><td>542M</td></tr></table>
|
| 90 |
+
|
| 91 |
+
We also report HBFP4 accuracies to show the limitations of HBFP. Even for small models like ResNet20 with a block size of 16, the accuracy drops by more than 9%. As the accuracy drop for ResNet74 and DenseNet40 on CIFAR100 is considerably high even with HBFP(5, 16), we did not train these models with HBFP4. We observe that for HBFP4, the sensitivity to the block size increases for all models because the distortions in the tensor distributions grow (see Section II).
|
| 92 |
+
|
| 93 |
+
## B. Accuracy Boosters
|
| 94 |
+
|
| 95 |
+
Under the HBFP hardware model, a block size of 64 achieves within 95% of the maximum area/power gain while losing less than 1% accuracy with HBFP6. We therefore choose a block size of 64 as the sweet spot and test Accuracy Boosters with it. We perform the last epoch of training in HBFP(6, 64) and the rest in HBFP(4, 64) for all experimental settings. We also trained with the last 10 epochs in HBFP(6, 64) to observe the improvement in accuracy for the CNN models. We keep all CNN models' first and last layers in HBFP(6, 64); these layers account for a negligible amount of computation, so keeping them in slightly higher precision during HBFP training does not significantly increase hardware area or energy consumption. For most of the CNN models, Accuracy Boosters outperform FP32, and keeping the last 10 epochs in HBFP6 increases the accuracies slightly further (see Table II).
|
| 96 |
+
|
| 97 |
+
TABLE II: Top-1 validation accuracies of various CNN models for Accuracy Boosters
|
| 98 |
+
|
| 99 |
+
<table><tr><td rowspan="3">Epochs using HBFP6</td><td colspan="4">Models and Datasets</td></tr><tr><td colspan="2">CIFAR10</td><td colspan="2">CIFAR 100</td></tr><tr><td>ResNet20</td><td>ResNet74</td><td>ResNet74</td><td>DenseNet40</td></tr><tr><td>Only last</td><td>91.24</td><td>92.62</td><td>73.74</td><td>73.61</td></tr><tr><td>Last 10</td><td>91.36</td><td>93.02</td><td>74.28</td><td>74.10</td></tr><tr><td>FP32</td><td>91.72</td><td>93.57</td><td>74.55</td><td>72.42</td></tr></table>
|
| 100 |
+
|
| 101 |
+
For the Transformer, Accuracy Boosters achieve a BLEU score of 25.08 when used only for the last epoch and 25.40 when used for the last 5 epochs, while the FP32 score is 26.09. We also observe that for the Transformer, HBFP6 outperforms FP32 and HBFP4 does not incur a large accuracy loss, but Accuracy Boosters still raise the accuracy close to the FP32 level (see Table III).
|
| 102 |
+
|
| 103 |
+
TABLE III: BLEU Scores for Transformer-Base trained on WMT16 English-German dataset with various training techniques
|
| 104 |
+
|
| 105 |
+
<table><tr><td>FP32</td><td>HBFP(6,49)</td><td>HBFP(4,49)</td><td>Booster (last)</td><td>Booster (last 5)</td></tr><tr><td>26.09</td><td>26.16</td><td>24.73</td><td>25.08</td><td>25.40</td></tr></table>
|
| 106 |
+
|
| 107 |
+
TABLE IV: Power consumption ratios between FP32 and various Accuracy Booster configurations.
|
| 108 |
+
|
| 109 |
+
<table><tr><td>ResNet20 CIFAR10</td><td>ResNet74 CIFAR10</td><td>ResNet74 CIFAR100</td><td>Transformer (last)</td><td>Transformer (last 5)</td></tr><tr><td>28.2</td><td>36.2</td><td>36.5</td><td>38.0</td><td>30.0</td></tr></table>
|
| 110 |
+
|
| 111 |
+
We also provide the power-consumption ratios between FP32 and various Accuracy Booster configurations (Table IV). Accuracy Boosters provide up to $36.5\times$ power gain over FP32 for CNNs and up to $38\times$ for the Transformer.
|
| 112 |
+
|
| 113 |
+
## V. RELATED WORK
|
| 114 |
+
|
| 115 |
+
In recent years, there has been a significant amount of research on inference and training with narrow numerical representations [4], [5], [8], [17], [21], [23], [29], [39]. Google Brain's bfloat16 [35], NVIDIA's mixed-precision training with FP16 [25], and a mixed-precision scheme using FP8 [31] are the most commonly used. Recent research advocates block floating point for DNN training [11] and inference [6]. Flexpoint [20] and Dynamic Fixed-Point [7] propose block-floating-point formats for training with a 16-bit mantissa and a shared exponent. Prior work proposed a novel format for training DNNs with BFP called Hybrid Block Floating-Point (HBFP) [10]. In this paper, we argue that reducing the mantissa bit width in HBFP significantly improves silicon efficiency when designing hardware for DNN training.
|
| 116 |
+
|
| 117 |
+
Many techniques have been proposed to compensate for the information loss introduced by narrower numerical representations [12], [24], [31], [32]. Mixed-precision training has emerged as a popular technique to recover the information loss caused by quantization. Several techniques vary the precision layer-wise, using higher-precision arithmetic for layers with greater significance [18], [30], [37]. Specifically, [3], [24], [34], [39] use FP32 for the first and last layers. [13] employs fixed-point arithmetic with different bit widths epoch-wise over the course of training. Combining the layer-wise and epoch-wise approaches, [14], [27], [38] vary the precision adaptively per epoch and per layer using control mechanisms. While all of these studies employ leaner arithmetic for a fraction of the training process, they fail to make leaner arithmetic the common case of training.
Recent work [9] suggests that during mixed-precision FP16 training, the optimizer states can be reduced to 8 bits by using a block-wise quantization method. This observation is in line with our work, which applies quantization by extracting the largest exponent per block. Similarly, FAST [38] uses a block-floating-point-based layer-wise mixed-precision approach with 2- and 4-bit mantissas. Unlike our work, FAST requires fine-tuning several additional hyperparameters for its training algorithm, making it difficult to apply to other DNN models. Another block-floating-point-based work, FlexBlock [27], uses 4- and 8-bit mantissas with various block sizes and uses higher-precision block-floating-point formats only for weight-gradient calculations, which suffer more from quantization errors.
## VI. CONCLUSION
Several low-precision training techniques and specialized numerical formats have been introduced over the past decade to increase the arithmetic density of DNN accelerators. One such format, Hybrid Block Floating-Point (HBFP), which allows the majority of a DNN's arithmetic operations (i.e., dot products) to be performed using fixed-point arithmetic, has been shown to achieve FP32 accuracy with 8-bit mantissas. Moreover, a smaller number of mantissa bits allows for exceptional improvements in hardware (e.g., up to ${17.5} \times$ gain over FP32 silicon area). In this paper, we perform a full-scale exploration of the HBFP design space for emerging models and datasets. We show that HBFP6 is the smallest HBFP format achieving FP32 accuracy for all block sizes. We propose the Accuracy Boosters technique to bring HBFP4 into training, using HBFP6 only in the last epoch, leveraging the insight that each epoch has a different effect on training. We show that the last stage of training requires more precision than the rest. Our method achieves up to ${38} \times$ power gain over FP32, while preserving or outperforming FP32 accuracy.
## REFERENCES
[1] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," CoRR, vol. abs/1701.07875, 2017. [Online]. Available: http://arxiv.org/abs/1701.07875

[2] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, "Language models are few-shot learners," in Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., 2020. [Online]. Available: https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html

[3] J. Choi, Z. Wang, S. Venkataramani, P. I. Chuang, V. Srinivasan, and K. Gopalakrishnan, "PACT: Parameterized clipping activation for quantized neural networks," CoRR, vol. abs/1805.06085, 2018. [Online]. Available: http://arxiv.org/abs/1805.06085

[4] M. Courbariaux, Y. Bengio, and J. David, "BinaryConnect: Training deep neural networks with binary weights during propagations," in Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, Eds., 2015, pp. 3123-3131. [Online]. Available: https://proceedings.neurips.cc/paper/2015/hash/3e15cc11f979ed25912dff5b0669f2cd-Abstract.html

[5] M. Courbariaux, Y. Bengio, and J. David, "Low precision arithmetic for deep learning," in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Workshop Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015. [Online]. Available: http://arxiv.org/abs/1412.7024

[6] B. Darvish Rouhani, D. Lo, R. Zhao, M. Liu, J. Fowers, K. Ovtcharov, A. Vinogradsky, S. Massengill, L. Yang, R. Bittner, A. Forin, H. Zhu, T. Na, P. Patel, S. Che, L. Chand Koppaka, X. Song, S. Som, K. Das, S. Tiwary, S. Reinhardt, S. Lanka, E. Chung, and D. Burger, "Pushing the limits of narrow precision inferencing at cloud scale with Microsoft Floating Point," in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds., vol. 33. Curran Associates, Inc., 2020, pp. 10271-10281. [Online]. Available: https://proceedings.neurips.cc/paper/2020/file/747e32ab0fea7fbd2ad9ec03daa3f840-Paper.pdf

[7] D. Das, N. Mellempudi, D. Mudigere, D. D. Kalamkar, S. Avancha, K. Banerjee, S. Sridharan, K. Vaidyanathan, B. Kaul, E. Georganas, A. Heinecke, P. Dubey, J. Corbal, N. Shustrov, R. Dubtsov, E. Fomenko, and V. O. Pirogov, "Mixed precision training of convolutional neural networks using integer operations," in 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. [Online]. Available: https://openreview.net/forum?id=H135uzZ0-

[8] T. Dettmers, M. Lewis, Y. Belkada, and L. Zettlemoyer, "LLM.int8(): 8-bit matrix multiplication for transformers at scale," CoRR, vol. abs/2208.07339, 2022. [Online]. Available: https://doi.org/10.48550/arXiv.2208.07339

[9] T. Dettmers, M. Lewis, S. Shleifer, and L. Zettlemoyer, "8-bit optimizers via block-wise quantization," in The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. [Online]. Available: https://openreview.net/forum?id=shpkpVXzo3h

[10] M. Drumond, T. Lin, M. Jaggi, and B. Falsafi, "Training DNNs with Hybrid Block Floating Point," arXiv:1804.01526 [cs, stat], Dec. 2018. [Online]. Available: http://arxiv.org/abs/1804.01526
[11] M. P. Drumond, "ColTraIn: Co-located DNN training and inference," 2020. [Online]. Available: http://infoscience.epfl.ch/record/280118

[12] S. Fox, S. Rasoulinezhad, J. Faraone, D. Boland, and P. H. W. Leong, "A block minifloat representation for training deep neural networks," in 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. [Online]. Available: https://openreview.net/forum?id=6zaTwpNSsQ2

[13] Y. Fu, H. Guo, M. Li, X. Yang, Y. Ding, V. Chandra, and Y. Lin, "CPT: Efficient deep neural network training via cyclic precision," in 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. [Online]. Available: https://openreview.net/forum?id=87ZwsaQNHPZ

[14] Y. Fu, H. You, Y. Zhao, Y. Wang, C. Li, K. Gopalakrishnan, Z. Wang, and Y. Lin, "FracTrain: Fractionally squeezing bit savings both temporally and spatially for efficient DNN training," in Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., 2020. [Online]. Available: https://proceedings.neurips.cc/paper/2020/hash/8dc5983b8c4ef1d8fcd5f325f9a65511-Abstract.html

[15] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016. IEEE Computer Society, 2016, pp. 770-778. [Online]. Available: https://doi.org/10.1109/CVPR.2016.90

[16] G. Huang, Z. Liu, and K. Q. Weinberger, "Densely connected convolutional networks," CoRR, vol. abs/1608.06993, 2016. [Online]. Available: http://arxiv.org/abs/1608.06993

[17] I. Hubara, M. Courbariaux, D. Soudry, R. El-Yaniv, and Y. Bengio, "Binarized neural networks," in Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, D. D. Lee, M. Sugiyama, U. von Luxburg, I. Guyon, and R. Garnett, Eds., 2016, pp. 4107-4115. [Online]. Available: https://proceedings.neurips.cc/paper/2016/hash/d8330f857a17c53d217014ee776bfd50-Abstract.html

[18] S. Khoram and J. Li, "Adaptive quantization of neural networks," 2018.

[19] A. Krizhevsky and G. Hinton, "Learning multiple layers of features from tiny images," University of Toronto, Toronto, Ontario, Tech. Rep., 2009.

[20] U. Köster, T. Webb, X. Wang, M. Nassar, A. K. Bansal, W. Constable, O. Elibol, S. Gray, S. Hall, L. Hornof, A. Khosrowshahi, C. Kloss, R. J. Pai, and N. Rao, "Flexpoint: An Adaptive Numerical Format for Efficient Training of Deep Neural Networks," in Advances in Neural Information Processing Systems, vol. 30. Curran Associates, Inc., 2017. [Online]. Available: https://papers.nips.cc/paper/2017/hash/a0160709701140704575d499c997b6ca-Abstract.html
[21] F. Li and B. Liu, "Ternary weight networks," CoRR, vol. abs/1605.04711, 2016. [Online]. Available: http://arxiv.org/abs/1605.04711

[22] H. Li, Z. Xu, G. Taylor, C. Studer, and T. Goldstein, "Visualizing the loss landscape of neural nets," in Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, S. Bengio, H. M. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, Eds., 2018, pp. 6391-6401. [Online]. Available: https://proceedings.neurips.cc/paper/2018/hash/a41b3bb3e6b050b6c9067c67f663b915-Abstract.html

[23] D. D. Lin, S. S. Talathi, and V. S. Annapureddy, "Fixed point quantization of deep convolutional networks," in Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, ser. JMLR Workshop and Conference Proceedings, M. Balcan and K. Q. Weinberger, Eds., vol. 48. JMLR.org, 2016, pp. 2849-2858. [Online]. Available: http://proceedings.mlr.press/v48/linb16.html

[24] N. Mellempudi, S. Srinivasan, D. Das, and B. Kaul, "Mixed precision training with 8-bit floating point," arXiv:1905.12334 [cs, stat], May 2019. [Online]. Available: http://arxiv.org/abs/1905.12334

[25] P. Micikevicius, S. Narang, J. Alben, G. Diamos, E. Elsen, D. Garcia, B. Ginsburg, M. Houston, O. Kuchaiev, G. Venkatesh, and H. Wu, "Mixed precision training," arXiv:1710.03740 [cs, stat], Feb. 2018. [Online]. Available: http://arxiv.org/abs/1710.03740

[26] S. Migacz, "8-bit inference with TensorRT," May 2017. [Online]. Available: https://on-demand.gputechconf.com/gtc/2017/presentation/s7310-8-bit-inference-with-tensorrt.pdf

[27] S.-H. Noh, J. Koo, S. Lee, J. Park, and J. Kung, "FlexBlock: A flexible DNN training accelerator with multi-mode block floating point support," 2022. [Online]. Available: https://arxiv.org/abs/2203.06673

[28] N. Rahaman, A. Baratin, D. Arpit, F. Draxler, M. Lin, F. A. Hamprecht, Y. Bengio, and A. C. Courville, "On the spectral bias of neural networks," in Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, ser. Proceedings of Machine Learning Research, K. Chaudhuri and R. Salakhutdinov, Eds., vol. 97. PMLR, 2019, pp. 5301-5310. [Online]. Available: http://proceedings.mlr.press/v97/rahaman19a.html

[29] M. Rastegari, V. Ordonez, J. Redmon, and A. Farhadi, "XNOR-Net: ImageNet classification using binary convolutional neural networks," in Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV, ser. Lecture Notes in Computer Science, B. Leibe, J. Matas, N. Sebe, and M. Welling, Eds., vol. 9908. Springer, 2016, pp. 525-542. [Online]. Available: https://doi.org/10.1007/978-3-319-46493-0_32
[30] J. Shen, Y. Wang, P. Xu, Y. Fu, Z. Wang, and Y. Lin, "Fractional skipping: Towards finer-grained dynamic CNN inference," in The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020. AAAI Press, 2020, pp. 5700-5708. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/6025

[31] X. Sun, J. Choi, C.-Y. Chen, N. Wang, S. Venkataramani, V. V. Srinivasan, X. Cui, W. Zhang, and K. Gopalakrishnan, "Hybrid 8-bit Floating Point (HFP8) training and inference for deep neural networks," in Advances in Neural Information Processing Systems, vol. 32. Curran Associates, Inc., 2019. [Online]. Available: https://proceedings.neurips.cc/paper/2019/hash/65fc9fb4897a89789352e211ca2d398f-Abstract.html

[32] X. Sun, N. Wang, C.-Y. Chen, J. Ni, A. Agrawal, X. Cui, S. Venkataramani, K. El Maghraoui, V. V. Srinivasan, and K. Gopalakrishnan, "Ultra-low precision 4-bit training of deep neural networks," in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds., vol. 33. Curran Associates, Inc., 2020, pp. 1796-1807. [Online]. Available: https://proceedings.neurips.cc/paper/2020/file/13b919438259814cd5be8cb45877d577-Paper.pdf

[33] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention is all you need," in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Eds., 2017, pp. 5998-6008. [Online]. Available: https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html

[34] N. Wang, J. Choi, D. Brand, C.-Y. Chen, and K. Gopalakrishnan, "Training deep neural networks with 8-bit floating point numbers," in Proceedings of the 32nd International Conference on Neural Information Processing Systems, ser. NIPS'18. Red Hook, NY, USA: Curran Associates Inc., 2018, pp. 7686-7695.

[35] S. Wang and P. Kanwar, "BFloat16: The secret to high performance on Cloud TPUs," Aug. 2019.

[36] Z. J. Xu, Y. Zhang, T. Luo, Y. Xiao, and Z. Ma, "Frequency principle: Fourier analysis sheds light on deep neural networks," CoRR, vol. abs/1901.06523, 2019. [Online]. Available: http://arxiv.org/abs/1901.06523

[37] L. Yang and Q. Jin, "FracBits: Mixed precision quantization via fractional bit-widths," in Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021. AAAI Press, 2021, pp. 10612-10620. [Online]. Available: https://ojs.aaai.org/index.php/AAAI/article/view/17269

[38] S. Q. Zhang, B. McDanel, and H. T. Kung, "FAST: DNN training under variable precision block floating point with stochastic rounding," arXiv:2110.15456 [cs], Oct. 2021. [Online]. Available: http://arxiv.org/abs/2110.15456

[39] S. Zhou, Y. Wu, Z. Ni, X. Zhou, H. Wen, and Y. Zou, "DoReFa-Net: Training low bitwidth convolutional neural networks with low bitwidth gradients," arXiv:1606.06160 [cs], Feb. 2018.
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/nfmfqzQ4Mwl/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,209 @@
§ EFFICIENT DNN TRAINING WITH MIXED-PRECISION BLOCK FLOATING POINT

ISCA 2023 Submission #NaN - Confidential Draft - Do NOT Distribute!!

Abstract-The unprecedented growth in DNN model complexity, size, and amount of training data has led to a commensurate increase in demand for computing and a search for minimal encoding. Recent research advocates Hybrid Block Floating-Point (HBFP) as a technique that minimizes silicon provisioning in accelerators by converting the majority of arithmetic operations in training to 8-bit fixed-point. In this paper, we perform a full-scale exploration of the HBFP design space, including minimal mantissa encoding, varying block sizes, and mixed mantissa bit widths across layers and epochs. We propose Accuracy Boosters, an epoch-driven mixed-mantissa HBFP that uses 6-bit mantissas only in the last epoch and converts ${99.7}\%$ of all arithmetic operations in training to 4-bit mantissas. Accuracy Boosters enable reducing the power consumption of an HBFP training accelerator by up to ${38} \times$ compared to FP32, while preserving or outperforming FP32 accuracy.
§ I. INTRODUCTION

Over the past decade, improvements in Deep Neural Network (DNN) algorithms have led to unprecedented growth in model complexity and dataset size and, consequently, in the computational resources required to train DNN models. One of the largest DNN models (GPT-3) [2] has 175 billion parameters and requires ${3.14} \times {10}^{23}$ FLOPs to train. With the slowdown in Moore's law, researchers and vendors have begun to search for alternate ways to improve the arithmetic density of the underlying hardware platforms. Narrower bit-width (lower-precision) number formats [24], [25], [31], [32], [35] have emerged as a promising approach to increase arithmetic density, as well as to reduce the required operand storage and communication bandwidth, while maintaining high training accuracy.
Recently there have been several proposals for block floating point [7], [20], [38], a numerical encoding that groups a block of mantissas under a single shared exponent so that operations within the block rely only on fixed-point arithmetic. Block floating point asymptotically approaches the arithmetic density of fixed point with larger block sizes and naturally lends itself to mixed-precision hardware, where a block with the same number of exponent bits can have a fixed-point datapath that is bitsliced for various multiples of mantissa bit encodings (e.g., the same way today's CPU cores implement SIMD). While block floating point has been promising for inference (e.g., Microsoft Floating Point [6]), most proposals to train with block floating point either fail to reach its full potential by requiring small blocks or fall just short of reaching FP32 accuracy.

One specific proposal, Hybrid Block Floating Point (HBFP) [10], uses a mixed-precision format where the dominant fraction of training operations, the dot products (e.g., convolutions, matrix multiplications), happens in block floating point, and FP32 is used for other, less frequent operations requiring larger numerical ranges (e.g., activations, regularizations). HBFP simultaneously offers the high accuracy of floating point and the superior hardware density of fixed point, delivering up to ${8.5} \times$ higher throughput than FP16 with $2 \times$ more compact models [11]. Prior work on HBFP only presented a preliminary design space analysis for power-of-two mantissa bit widths (e.g., 2, 4, 8 bits).
In this paper, we make the observation that the parameter space for HBFP is quite rich, presenting several opportunities for further improving efficiency and density in hardware platforms. First, custom accelerators can support non-power-of-two numerical formats, and minimizing the number of bits improves operand storage and communication linearly and arithmetic logic quadratically. Second, there is an interplay between the block size and the number of mantissa bits, allowing for an overall denser numerical format with smaller blocks while maintaining accuracy. Finally, HBFP allows for mixed-mantissa block floating point encodings. Prior work studies training with various HBFP formats in isolation; however, the design space of mixed-mantissa HBFP is yet to be explored.

We fully explore the parameter space of HBFP and show the boundaries of block floating point by studying the interplay between the block size and the number of mantissa bits. To the best of our knowledge, this is the first paper conducting a full design space exploration for training DNNs with block floating point. We show that HBFP6 (HBFP with 6 bits of mantissa) is the smallest HBFP format achieving competitive accuracies with no sensitivity to block size. Our main contribution is the design of Accuracy Boosters, a DNN training mechanism performing a large fraction of epochs in low precision, i.e., HBFP4. Our method improves epoch-wise mixed-precision training by introducing high precision, i.e., HBFP6, only in the last epoch of the training process. Accuracy Boosters enable reducing power consumption by up to ${38} \times$ compared to FP32, while preserving or outperforming FP32 accuracy.
§ II. HBFP PARAMETER SPACE

HBFP is a mixed-precision DNN training technique that uses block floating point for all dot product operations and FP32 for the rest of the operations, enabling accurate training with dense fixed-point arithmetic. We observe that HBFP is also suitable for inference of popular CNN and Transformer models without any accuracy loss, in line with prior work on inference with block floating point [6], showing that HBFP is a versatile technique for both training and inference. Prior work on HBFP shows that the area and energy expenditure of HBFP8 is around an order of magnitude lower than that of bfloat16 [11]. Exploring the parameter space of HBFP and pushing its boundaries can increase this ratio dramatically.
HBFP has a rich parameter space, including the number of mantissa bits, the block size, and the number of exponent bits. The hardware area and energy expenditure of HBFP accelerators are determined by the number of mantissa bits and the block size, because the overhead of the exponent bits is negligible due to blocking ${}^{1}$ . One of the key advantages of HBFP is that we can conservatively choose a number of exponent bits that covers the entire design space exploration for block size and number of mantissa bits. Therefore, we work with 10-bit exponents as in prior work [10] and explore the HBFP design space by varying the mantissa bit width and the block size. Once we fix the number of exponent bits, we can vary the other parameters, which enables a reconfigurable microarchitecture and gives rise to mixed-mantissa HBFP.

Employing smaller mantissa bit widths and larger block sizes is key to improving block-floating-point hardware efficiency due to the increasing fraction of fixed-point operations [6]. There is an interplay between the number of mantissa bits and the block size, allowing for an overall denser numerical format with smaller blocks while maintaining accuracy. This interplay is the result of how the block floating point conversion works. Block floating point shares a single exponent across a block of values, using the exponent of the largest element. Since the block floating point format does not apply normalization (values are represented as $2^{\text{exponent}} \times 0.\text{mantissa}$ instead of $2^{\text{exponent}} \times 1.\text{mantissa}$), the precision within a block is highly dependent on the largest element in that block, which determines the exponent value. The interval between two consecutive representable numbers is calculated as in Equation 1.
$$
\text{ interval } = \frac{{2}^{\text{ largest exponent }}}{{2}^{\# \text{ of mantissa bits }}} \tag{1}
$$
As the number of elements sharing the same exponent increases, the likelihood of disparity in the magnitude of the elements also increases, leading to a precision loss for the small elements in the block. As the number of mantissa bits decreases, the model's sensitivity to the block size increases because the interval grows correspondingly, leading to a higher quantization error. More mantissa bits make the distribution more resilient to quantization error and to larger block sizes, as each element can be represented more accurately.
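To make this conversion concrete, below is a minimal sketch (our illustration, not the paper's implementation) of block-floating-point quantization; `mantissa_bits` and `block_size` stand for the HBFP parameters above, and the unnormalized $2^{\text{exponent}} \times 0.\text{mantissa}$ convention of Equation 1 is assumed.

```python
import numpy as np

def bfp_quantize(x, mantissa_bits=4, block_size=64):
    """Quantize a 1-D float array to block floating point (sketch).

    Each block of `block_size` values shares the exponent of its
    largest-magnitude element; values stay unnormalized, so the gap
    between representable numbers is 2**exponent / 2**mantissa_bits
    as in Equation 1. Sign and rounding conventions vary across
    implementations; round-to-nearest with a signed mantissa is
    assumed here.
    """
    pad = (-x.size) % block_size
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    max_mag = np.abs(blocks).max(axis=1, keepdims=True)
    exponent = np.ceil(np.log2(np.maximum(max_mag, np.finfo(x.dtype).tiny)))
    interval = 2.0 ** (exponent - mantissa_bits)  # Equation 1
    levels = 2 ** mantissa_bits
    mantissa = np.clip(np.round(blocks / interval), -levels, levels - 1)
    return (mantissa * interval).reshape(-1)[: x.size]
```

Note how a single outlier in a block forces a large shared exponent and hence a wide `interval` for every other element, which is exactly why the sensitivity to block size grows as the mantissa bits shrink.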
The power expenditure of HBFP is not only a function of the HBFP parameters but also of how often each format is used over the course of training. Mixed-precision training has emerged as a popular technique to increase the fraction of leaner arithmetic formats within the training process, motivating us to explore the design space of mixed-mantissa HBFP, because HBFP provides the opportunity to fix the exponent and vary the number of mantissa bits. This mixed-mantissa training technique can be applied across layers and across epochs.

For CNN models, prior work indicates that the first convolution layer and the last fully connected layer have a larger impact on the final model accuracy, and keeping these layers in high precision allows for reducing the precision of the remaining layers [3], [24], [34], [39]. The first layer takes the input images, filters them with several convolution kernels, and returns feature maps. Thus, it is critical for the final model to keep the input information fully accurate and to preserve the data in the initial feature maps. The last layer of a CNN returns a probability distribution over the possible outcomes for image classification, making it crucial to retain information in this layer.
In addition to layers, each training epoch has a different effect on the final model's accuracy [13], [14]. [28] and [36] show that DNNs first learn low-frequency components, where frequency is defined over the coordinates of the input space. [36] also empirically shows that for CNN models, the high-frequency components have higher complexity and are learned in the last epochs. In light of these findings, we hypothesize that high-frequency functional components are more sensitive to quantization errors. Thus, higher precision is required for the last stage of DNN training, where optimization occurs after an appropriate amount of generalization in the network. After reaching a certain loss value in low-precision training, switching the tensors to high precision enables the sensitive fine-tuning performed in the final epochs and helps increase the accuracy even further.
§ III. MINIMIZING HBFP

Our goal is to minimize HBFP to increase the hardware efficiency of training without losing accuracy. For a block size of 576, even though HBFP4 offers a ${2.4} \times$ improvement in area/power relative to HBFP8, it lacks the precision to reach FP32 accuracy. As prior work on HBFP [10], [11] only investigated power-of-two mantissa bit widths and focused mostly on the design space of HBFP8, the interplay between the number of mantissa bits and the block size was left unexplored. While power-of-two bit widths align naturally with the memory structure and encode matrices in a tightly packed way, non-power-of-two mantissas can improve the arithmetic density even further, as studied by [6], and can be easily integrated into custom accelerators. We investigate the whole design space of HBFP by varying both parameters and claim that reducing the block size enables reducing the number of mantissa bits, thus improving hardware efficiency. In this section, we show how to minimize HBFP step by step, explain the limitations of HBFP, and propose a new mixed-precision scheme to minimize HBFP further.
To study the relationship between model accuracy and HBFP parameters, we measure the similarity between block-floating-point and FP32 distributions of various tensors using Wasserstein distance, mathematically defined as in Equation 2.
$$
W\left( {P,Q}\right) = \mathop{\inf }\limits_{{\gamma \in \Pi \left( {P,Q}\right) }}{\mathbb{E}}_{\left( {x,y}\right) \sim \gamma }\left\lbrack {\parallel x - y\parallel }\right\rbrack \tag{2}
$$
where $\Pi \left( {P,Q}\right)$ is the set of all joint distributions $\gamma \left( {x,y}\right)$ whose marginal distributions are equal to $P$ and $Q$. $\gamma \left( {x,y}\right)$ can be interpreted as the amount of mass that must be transported from $x$ to $y$ to transform $P$ into $Q$ [1]. Unlike KL divergence, which is commonly used to compare quantized tensors to their full-precision counterparts [6], [26], the Wasserstein distance is symmetric and thus is mathematically a metric. Moreover, because DNNs often deal with distributions for which KL divergence is not defined (or is infinite), we would need to add a noise term to the model distribution to be able to use KL divergence, which perturbs the results.
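As a concrete (illustrative) way to perform this comparison, the one-dimensional Wasserstein distance in SciPy can be applied to a weight tensor before and after conversion, reusing the `bfp_quantize` sketch above; the synthetic Gaussian weights stand in for real trained tensors:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
w_fp32 = rng.normal(0.0, 0.05, size=16384).astype(np.float32)  # stand-in weights

for bits, block in [(6, 16), (6, 576), (4, 16), (4, 576)]:
    w_bfp = bfp_quantize(w_fp32, mantissa_bits=bits, block_size=block)
    print(f"HBFP({bits},{block}): W-distance to FP32 = "
          f"{wasserstein_distance(w_fp32, w_bfp):.2e}")
```

With this setup one would expect the 4-bit distances to grow markedly with block size while the 6-bit distances stay nearly flat, mirroring the block-size sensitivity discussed above.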
${}^{1}$ Even for a block size of 4, HBFP4 with a 5-bit exponent is only ${1.03} \times$ more area-efficient than HBFP4 with a 10-bit exponent.
We observe that the tensor distribution is preserved when the elements are converted to block floating point format with 6 or more bits of mantissa for reasonably large block sizes ${}^{2}$ . Figure 1 shows Wasserstein distances between FP32 and HBFP6/HBFP4 with various block sizes for the weight tensors of four different layers of ResNet20 trained on CIFAR10. For all the tensors, HBFP6 has a much smaller distance to FP32, and the distances are fairly close to each other for a given tensor across all block sizes. However, the Wasserstein distance of HBFP4 is more than ${3.5} \times$ higher than that of HBFP6 across all block sizes, and the distances increase dramatically with the block size. Indeed, the R-squared values (measuring the strength of the relationship between two data sets) between model accuracy and the corresponding Wasserstein distances are around 0.99, validating the strength of our metric.
Fig. 1: Wasserstein distance between FP32 and HBFP with various block sizes for various layers.
Even though reducing the block size incurs smaller Wasserstein distances and helps increase the accuracy, HBFP4 still fails to reach FP32 accuracy because HBFP4 is good at generalization but does not have enough precision to minimize the loss. [22] introduces a methodology to visualize loss landscapes in order to better understand their effect on generalization. Figure 2 shows log-scale loss landscapes for various configurations, sliced along the x-axis ($y = 0$) for simplicity. The center of each plot corresponds to the current state of the minimizer, and the two axes parameterize two random directions with filter-wise normalization. When we compare the curves for HBFP4 and HBFP6, the flatness of the HBFP4 curve indicates that it generalizes better. However, HBFP4 converges to a much worse local minimum than HBFP6 and FP32, thus failing to optimize.
Fig. 2: Loss landscapes of ResNet20 on CIFAR10 for various configurations, sliced along the x-axis.
Following the insights from prior work, we study the effect of the first and last layers of CNNs on model accuracy. The dotted and solid red lines in Figure 1 show the first and last layers, respectively, and it is clear that these layers are the most affected by lowering the precision, especially for HBFP4. Thus, we keep the first and last layers of CNNs in HBFP6 during HBFP4 training to increase its accuracy. However, the increase in precision that HBFP6 provides for the first and last layers still does not achieve enough optimization to reach FP32 accuracy. In Figure 2, the red dashed curve shows this configuration; the curve gets sharper and lower compared to HBFP4-only training. However, the generalization and optimization power of the model is still unbalanced, leading to convergence to another poor local minimum.
We introduce Accuracy Boosters, an epoch-driven mixed-mantissa HBFP that uses HBFP6 only in the last epoch and converts ${99.7}\%$ of all arithmetic operations in training to HBFP4. We hypothesize that using HBFP6 for the last epoch is sufficient to boost the accuracy, while the rest of the epochs are trained using HBFP4. We leverage the insight that the last epochs have more effect on the final model's accuracy [13], [14], [28], [36]. We claim that training with 4-bit mantissas helps the model generalize and reach a certain loss value. Afterward, switching to 6-bit mantissas helps the model optimize and fine-tune in the final epochs and increases the accuracy to the FP32 level. The loss landscape for Accuracy Boosters (the solid red curve in Figure 2) supports our hypothesis. The curve gets very close to the HBFP6 and FP32 curves (note that the plot is in log scale; thus $-2$ is closer to $-4$ than to 0) and finally achieves FP32 accuracy.
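In training-loop form, the schedule is simple. The following is a minimal sketch under our own naming: `set_hbfp_precision` is a hypothetical hook standing in for whatever mechanism routes the dot products through HBFP, and the epoch counts mirror the CIFAR10 setting used later (160 epochs, booster in the last one).

```python
def train_with_accuracy_boosters(model, loader, optimizer, criterion,
                                 total_epochs=160, booster_epochs=1,
                                 block_size=64):
    """Epoch-driven mixed-mantissa HBFP training (illustrative sketch).

    All but the last `booster_epochs` epochs run dot products in
    HBFP4; the final epoch(s) switch to HBFP6 to recover the
    fine-tuning precision the last stage of training needs.
    """
    for epoch in range(total_epochs):
        in_booster = epoch >= total_epochs - booster_epochs
        # Hypothetical hook: reconfigure dot-product quantization.
        set_hbfp_precision(model, mantissa_bits=6 if in_booster else 4,
                           block_size=block_size)
        for inputs, targets in loader:
            optimizer.zero_grad()
            loss = criterion(model(inputs), targets)  # dot products in HBFP
            loss.backward()                           # rest stays in FP32
            optimizer.step()
```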
§ IV. EXPERIMENTAL RESULTS

We experiment with state-of-the-art models and datasets for various DNN tasks to test our hypotheses. We train ResNet20/74 [15] and DenseNet40 [16] on the CIFAR10 and CIFAR100 [19] datasets for image classification. We also train a Transformer-Base [33] on the WMT16 English-German dataset for machine translation. Models on CIFAR10 are trained for 160 epochs, whereas for CIFAR100 the total number of epochs for all models is 300. The Transformer is trained for 35 epochs, and Accuracy Boosters are applied in the last 1 and 5 epochs. We use FP32 as the baseline for both model accuracies and hardware comparisons.
${}^{2}$ Block sizes of up to 256 already achieve more than 95% of the maximum hardware benefit for HBFP6.
§ A. MINIMIZING HBFP

Table I shows the Top-1 validation accuracies for ResNet20, ResNet74, and DenseNet40 on the CIFAR10 and CIFAR100 datasets trained with various HBFP configurations. We observe that HBFP6 is the smallest HBFP configuration that gives accuracies within $2\%$ of FP32 accuracy for block sizes up to 256. Larger blocks contain a larger variety of values in terms of magnitude (affected, e.g., by outliers), so they result in larger approximation errors than smaller blocks and lower accuracy in training.
TABLE I: Top-1 validation accuracies of various CNN models for various HBFP configurations

\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multirow{3}{*}{Number Format} & \multirow{3}{*}{Block Size} & \multicolumn{4}{c|}{Models and Datasets} \\
\cline{3-6}
 & & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} \\
\cline{3-6}
 & & ResNet20 & ResNet74 & ResNet74 & DenseNet40 \\
\hline
FP32 & -- & 91.72 & 93.57 & 74.55 & 72.42 \\
\hline
HBFP8 & 576 & 91.52 & 93.36 & 74.32 & 73.73 \\
\hline
\multirow{7}{*}{HBFP6} & 16 & 91.12 & 93.38 & 73.51 & 72.08 \\
\cline{2-6}
 & 25 & 91.09 & 92.54 & 73.20 & 71.77 \\
\cline{2-6}
 & 36 & 91.29 & 92.61 & 72.87 & 71.83 \\
\cline{2-6}
 & 49 & 91.33 & 92.93 & 72.40 & 71.87 \\
\cline{2-6}
 & 64 & 91.12 & 92.93 & 72.40 & 71.81 \\
\cline{2-6}
 & 256 & 91.38 & 92.79 & 72.53 & 71.50 \\
\cline{2-6}
 & 576 & 90.65 & 92.19 & 72.51 & 71.02 \\
\hline
\multirow{7}{*}{HBFP4} & 16 & 82.59 & 76.85 & -- & 63.70 \\
\cline{2-6}
 & 25 & 81.82 & 78.62 & -- & 64.25 \\
\cline{2-6}
 & 36 & 80.84 & 76.64 & -- & 63.34 \\
\cline{2-6}
 & 49 & 79.32 & 71.19 & -- & 65.55 \\
\cline{2-6}
 & 64 & 80.18 & 74.35 & -- & 62.37 \\
\cline{2-6}
 & 256 & 76.96 & 60.65 & -- & 60.02 \\
\cline{2-6}
 & 576 & 75.33 & 66.70 & -- & 59.77 \\
\hline
\multicolumn{2}{|c|}{Total number of FLOPs to train the model} & 41M & 174M & 326M & 542M \\
\hline
\end{tabular}
We also report HBFP4 accuracies to show the limitations of HBFP. Even for small models like ResNet20, with a block size of 16, the accuracy drops by more than 9%. As the accuracy drop for ResNet74 and DenseNet40 on CIFAR100 is considerably high even with $\operatorname{HBFP}\left( {5,{16}}\right)$, we did not train these models with HBFP4. We observe that for HBFP4, the sensitivity to the block size increases for all the models because the distortions in the tensor distributions increase (see Section II).
§ B. ACCURACY BOOSTERS

Considering the HBFP hardware model, a block size of 64 is within ${95}\%$ of the maximum area/power gain while achieving accuracies with less than 1% degradation for HBFP6. Thus, we choose a block size of 64 as the sweet spot and test Accuracy Boosters using this block size. We perform the last epoch of training in $\mathrm{{HBFP}}\left( {6,{64}}\right)$ and the rest in $\mathrm{{HBFP}}\left( {4,{64}}\right)$ for all the experimental settings. We also trained keeping the last 10 epochs in $\operatorname{HBFP}\left( {6,{64}}\right)$ to observe the improvement in accuracy for the CNN models. We keep all CNN models' first and last layers in $\operatorname{HBFP}\left( {6,{64}}\right)$ . The first and last layers of the CNN models account for a negligible amount of computation; thus, keeping them in slightly higher precision during HBFP training does not result in a significant increase in hardware area or energy consumption. For most of the CNN models, Accuracy Boosters outperform FP32. When we keep the last 10 epochs in HBFP6, the accuracies increase slightly further (see Table II).
TABLE II: Top-1 validation accuracies of various CNN models for Accuracy Boosters

\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{3}{*}{Epochs using HBFP6} & \multicolumn{4}{c|}{Models and Datasets} \\
\cline{2-5}
 & \multicolumn{2}{c|}{CIFAR10} & \multicolumn{2}{c|}{CIFAR100} \\
\cline{2-5}
 & ResNet20 & ResNet74 & ResNet74 & DenseNet40 \\
\hline
Only last & 91.24 & 92.62 & 73.74 & 73.61 \\
\hline
Last 10 & 91.36 & 93.02 & 74.28 & 74.10 \\
\hline
FP32 & 91.72 & 93.57 & 74.55 & 72.42 \\
\hline
\end{tabular}
For Transformer, Accuracy Boosters achieve a BLEU score of 25.08 when used only for the last epoch and 25.40 when used for the last 10 epochs, while the FP32 score is 26.09. We also observe that for Transformer, HBFP6 outperforms FP32, and even HBFP4 incurs only a modest accuracy loss; Accuracy Boosters still raise the accuracy to a level close to FP32 (see Table III).
TABLE III: BLEU Scores for Transformer-Base trained on WMT16 English-German dataset with various training techniques

\begin{tabular}{|c|c|c|c|c|}
\hline
FP32 & HBFP(6,49) & HBFP(4,49) & Booster (last) & Booster (last 10) \\
\hline
26.09 & 26.16 & 24.73 & 25.08 & 25.40 \\
\hline
\end{tabular}
TABLE IV: Power consumption ratios between FP32 and various Accuracy Booster configurations.

\begin{tabular}{|c|c|c|c|c|}
\hline
ResNet20 CIFAR10 & ResNet74 CIFAR10 & ResNet74 CIFAR100 & Transformer (last) & Transformer (last 5) \\
\hline
28.2 & 36.2 & 36.5 & 38.0 & 30.0 \\
\hline
\end{tabular}
We also provide the power consumption ratios between FP32 and various Accuracy Booster configurations (Table IV). Accuracy Boosters provide up to ${36.5} \times$ power gain over FP32 for CNNs and up to ${38} \times$ gain for Transformer.
§ V. RELATED WORK

In recent years, there has been a significant amount of research on inference and training with narrow numerical representations [4], [5], [8], [17], [21], [23], [29], [39]. Google Brain's bfloat16 [35], NVIDIA's mixed-precision training with FP16 [25], and another mixed-precision scheme using FP8 [31] are the most commonly used. Recent research advocates the use of Block Floating-Point for DNN training [11] and inference [6]. Flexpoint [20] and Dynamic Fixed-Point [7] propose block-floating-point formats for training with a 16-bit mantissa and a shared exponent. Prior work proposed a novel format for training DNNs with BFP, called Hybrid Block Floating-Point (HBFP) [10]. In this paper, we argue that reducing the mantissa bit width in HBFP significantly improves silicon efficiency when designing hardware for DNN training.
Many have proposed techniques to compensate for the data loss introduced by narrower numerical representations [12], [24], [31], [32]. Mixed-precision training has emerged as a popular technique to recover the information loss caused by quantization. Several techniques vary the precision layer-wise by using higher-precision arithmetic for layers with greater significance [18], [30], [37]. Specifically, [3], [24], [34], [39] use FP32 for the first and last layers. [13] employs fixed-point arithmetic with different bit widths epoch-wise over the course of training. Combining the layer-wise and epoch-wise approaches, [14], [27], [38] vary the precision adaptively per epoch and per layer at the same time using control mechanisms. While all the aforementioned studies employ leaner arithmetic for a fraction of the training process, none of them makes leaner arithmetic the common case of training.

Recent work [9] suggests that during mixed-precision FP16 training, the optimizer states can be reduced to 8 bits by using a block-wise quantization method. This observation is in line with our work, which applies quantization by extracting the largest exponent per block. Similarly, FAST [38] uses a block-floating-point-based layer-wise mixed-precision approach with 2- and 4-bit mantissas. Unlike our work, FAST requires fine-tuning several additional hyperparameters for its training algorithm, making it difficult to apply to other DNN models. Another block-floating-point-based work, FlexBlock [27], uses 4- and 8-bit mantissas with various block sizes and uses higher-precision block-floating-point formats only for weight-gradient calculations, which suffer more from quantization errors.
§ VI. CONCLUSION

Several low-precision training techniques and specialized numerical formats have been introduced over the past decade to increase the arithmetic density of DNN accelerators. One such format, Hybrid Block Floating-Point (HBFP), which allows the majority of a DNN's arithmetic operations (i.e., dot products) to be performed using fixed-point arithmetic, has been shown to achieve FP32 accuracy with 8-bit mantissas. Moreover, a smaller number of mantissa bits allows for exceptional improvements in hardware (e.g., up to ${17.5} \times$ gain over FP32 silicon area). In this paper, we perform a full-scale exploration of the HBFP design space for emerging models and datasets. We show that HBFP6 is the smallest HBFP format achieving FP32 accuracy for all block sizes. We propose the Accuracy Boosters technique to bring HBFP4 into training, using HBFP6 only in the last epoch, leveraging the insight that each epoch has a different effect on training. We show that the last stage of training requires more precision than the rest. Our method achieves up to ${38} \times$ power gain over FP32, while preserving or outperforming FP32 accuracy.
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/rqn2v1Ltgn0/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,177 @@
# Scaling Infrastructure to Support Multi-Trillion Parameter LLM Training
ISCA 2023 Submission #NaN - Confidential Draft - Do NOT Distribute!!

Abstract-This paper discusses efficient system designs for scaling Large Language Model (LLM) training up to 128 trillion parameters. We use a comprehensive analytical performance model to analyze how such models could be trained on current systems while maintaining 75% Model FLOPS Utilization (MFU). We first show how tensor offloading alone can be used to dramatically increase the size of trainable LLMs. We analyze performance bottlenecks when scaling on systems up to 16,384 GPUs and with models up to 128T parameters. Our findings suggest that current H100 GPUs with 80 GiB of HBM, augmented with ${512}\mathrm{{GiB}}$ of tensor offloading memory, can scale efficiently to ${11}\mathrm{T}$ parameters, and that reaching 128T parameters requires ${120}\mathrm{{GiB}}$ of HBM and 2 TiB of offloading memory, yielding 75%+ MFU, which is uncommon even when training much smaller LLMs today.
## I. INTRODUCTION

We wish to consider what software and system configurations might permit existing Large Language Models (LLMs), now at about 1 trillion parameters [9], to scale with greater efficiency to even larger model sizes. Our analysis is driven by the continued success and efficacy of LLMs in a variety of applications [1], [2], [5], [9], [12], [14], [19] and motivated by the observation that Model FLOPS Utilization (MFU)-a common metric of efficiency for assessing how well specialized Artificial Intelligence (AI) accelerators are utilized during model training-can be ${50}\%$ or lower [13].
A significant improvement to MFU will be necessary to increase model sizes by ${10} \times$ (to 10 trillion parameters) or more on architectures similar to current systems. With a space requirement of 20 bytes per parameter, storing just the model's weights and optimizer state for such a model would need more than ${200}\mathrm{\;{TB}}$ of memory. For a system based on the NVIDIA H100 [11] Graphics Processing Unit (GPU) with ${80}\mathrm{{GiB}}$ of high-bandwidth memory (HBM) each, we would need 2,500 GPUs and a fully model-parallel implementation to train such a model. No known model-parallelism technique at this scale would be able to provide anywhere near ${50}\%$ MFU.
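The arithmetic behind this estimate is simple enough to reproduce in a few lines (a back-of-the-envelope sketch; the 20 bytes/parameter figure comes from the text, and the GPU count ignores activation memory and any per-GPU overheads):

```python
params = 10e12            # a 10T-parameter model
bytes_per_param = 20      # weights + optimizer state, per the text
hbm_per_gpu = 80 * 2**30  # 80 GiB of HBM per H100

total_bytes = params * bytes_per_param  # 2.0e14 bytes = 200 TB
gpus = total_bytes / hbm_per_gpu        # ~2,328; ~2,500 with headroom
print(f"{total_bytes / 1e12:.0f} TB -> ~{gpus:,.0f} GPUs at 80 GiB each")
```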
Motivated by this example, we aim to establish the system limitations that prevent us from training multi-trillion parameter models on large systems built from clusters of 8 interconnected GPUs, similar to NVIDIA DGX and HGX. We start by presenting a methodology for choosing well-structured multi-trillion parameter LLMs. Then, using our own fast analytical performance model of transformer-based LLM training, we search a space of billions of system configurations and execution strategies. This paper explains a few of our findings, which may be summarized as follows.
1) Training a hundred-trillion parameter LLM is feasible but requires a secondary memory pool of up to $1\mathrm{{TiB}}$ per GPU with a bandwidth of ${100}\mathrm{{GB}}/\mathrm{s}$ in each direction.

2) Strong scaling for a 1T model stalls around 12,288 GPUs, as matrix multiplications become small and inefficient and can no longer overlap communication.

3) Scaling beyond 10T models requires more first-level memory, with HBM size scaling with model size.

4) Growing model and system size beyond 10T parameters and 10k GPUs demands a larger fast-network domain and more targeted software optimizations.

Overall, we find it will be critical to co-design the LLM, software, and hardware to attain high performance and efficiency.
## II. EXPERIMENTS METHODOLOGY

For performance estimation we use Calculon, a fast analytical model of LLM training performance that we developed. Calculon can estimate the time and resource usage for a given LLM, system configuration, and software execution strategy in about 1 millisecond, allowing us to explore large design spaces consisting of many billions of such configurations. Calculon models LLM training with tensor parallelism (TP), pipeline parallelism (PP), and data parallelism (DP), allowing searches to determine optimal parallelism splits. We perform experiments that vary system size, model size, memory capacity, bandwidth, and NVLink domain sizes, working with the FP8 data format supported by H100. For each system, we consider execution strategies with multiple state-of-the-art software optimizations [6], [9], [15]-[17] and pick the best-performing one. Given the large search spaces, we cannot present our experiments fully and instead focus on a few of the most important trends we have discovered.
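Calculon itself is not reproduced here, but the flavor of such an analytical model can be sketched in a few lines: estimate a training step's model FLOPs from the LLM's shape, divide by the delivered throughput of the system, and report MFU as delivered over peak. Every constant below (the ~6 FLOPs per parameter per token rule of thumb and the assumed per-GPU FP8 peak) is an illustrative assumption, not Calculon's internals:

```python
def step_time_estimate(params, tokens_per_batch, n_gpus,
                       peak_flops_per_gpu=2.0e15, mfu=0.75):
    """Analytical LLM training-step time (illustrative, not Calculon).

    Uses the common ~6 FLOPs per parameter per token approximation
    for forward + backward; `peak_flops_per_gpu` is an assumed FP8
    peak, and `mfu` is the fraction of that peak actually delivered.
    """
    model_flops = 6.0 * params * tokens_per_batch
    delivered_flops = n_gpus * peak_flops_per_gpu * mfu
    return model_flops / delivered_flops  # seconds per training step

# E.g., a 1T-parameter model with 3,072 sequences of 8,192 tokens:
print(f"~{step_time_estimate(1e12, 3072 * 8192, n_gpus=4096):.0f} s/step")
```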
Fig. 1. Node architecture used for system modeling.
Our analysis assumes a networked system of compute nodes whose node architecture is depicted in Fig. 1. It is similar to DGX or HGX in structure and connectivity. The only difference is the addition of offload memory attached to each GPU in addition to HBM. Such memory can be connected via Compute Express Link (CXL), or hosted by the Central Processing Unit (CPU) and made directly accessible from the GPU, similar to Grace-Hopper [10].
## III. MODEL CO-DESIGN DISCUSSION

We consider scaling LLMs from the perspective of finding roadblocks to efficient LLM training. An important parameter to consider while scaling LLMs is the ratio between the hidden dimension of the transformer block and the number of blocks (a.k.a. transformer layers). This is referred to as an LLM's aspect ratio. Some recent research claims the ideal aspect ratio is a constant 128 [4], while others claim that the aspect ratio should increase exponentially with the number of blocks [7]. Both of these analyses were performed on LLMs 2 to 5 orders of magnitude smaller than today's production LLMs. As we don't see consensus among LLM experts, we follow the apparent current practice suggested by Table I, which is to extrapolate aspect ratios linearly with the number of transformer blocks. Nevertheless, our methodology of analysis will work for any aspect-ratio scaling function.
TABLE I
ASPECT RATIOS OF CURRENT LLMS.
<table><tr><td>$\mathbf{{Name}}$</td><td>Hidden</td><td>#Blocks</td><td>Aspect Ratio</td></tr><tr><td>GPT2-1.5B [14]</td><td>1600</td><td>48</td><td>33.3</td></tr><tr><td>Jurassic-6.5B [8]</td><td>4096</td><td>32</td><td>128</td></tr><tr><td>PaLM-8B [2]</td><td>4096</td><td>32</td><td>128</td></tr><tr><td>GPT3-13B [1]</td><td>5140</td><td>40</td><td>128.5</td></tr><tr><td>Megatron-40B [9]</td><td>6144</td><td>40</td><td>153.6</td></tr><tr><td>PaLM-62B [2]</td><td>8192</td><td>64</td><td>128</td></tr><tr><td>Chinchilla-64B [3]</td><td>8192</td><td>80</td><td>102.4</td></tr><tr><td>GPT3-175B [1]</td><td>12288</td><td>96</td><td>128</td></tr><tr><td>Jurassic-175B [8]</td><td>13824</td><td>76</td><td>181.9</td></tr><tr><td>Megatron-309B [9]</td><td>16384</td><td>96</td><td>170.7</td></tr><tr><td>TuringNLG-530B [18]</td><td>20480</td><td>105</td><td>195</td></tr><tr><td>PaLM-540B [2]</td><td>18432</td><td>118</td><td>156</td></tr><tr><td>Megatron-1T [9]</td><td>25600</td><td>128</td><td>200</td></tr></table>
Fig. 2. Performance comparison of two 11T models.
One of the challenges in scaling LLMs is mapping the models onto the available computing hardware. Some models, such as GPT-3 [1] with 175 billion parameters across 96 blocks, are designed with many dimensions that are powers of two or multiples of powers of two, making them well-suited for typical system designs, which are commonly built in powers of two. However, other models are not as easy to map. Turing-NLG [18] has 530 billion parameters across 105 blocks, an odd number, which results in fewer possible mapping solutions. PaLM [2] has 540 billion parameters across 118 blocks, a prime number multiplied by 2, which results in even fewer possible mapping solutions.
Fig. 2 compares two similarly sized models of about 11 trillion parameters. One has a convenient power-of-two number of blocks (256) and the other has a prime number multiplied by two (254). When mapped onto 4,096 processors, the 256-block model yields 15,612,832 possible mapping solutions while the 254-block model yields only 842,080, i.e., ${18.5} \times$ fewer. The 256-block model yields ${36}\%$ higher execution performance: while the 254-block model yields a 54% MFU, the 256-block model yields ${75}\%$.
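A crude, hypothetical enumeration makes the effect visible: count only the (TP, PP, DP) splits where pipeline parallelism divides the block count evenly and tensor parallelism stays within the NVLink domain. The real search also covers microbatching, offloading, and recomputation choices, so the absolute counts will not match the figures above, but the ratio moves the same way:

```python
def count_parallel_mappings(n_gpus, n_blocks, max_tp=8):
    """Count (tp, pp, dp) splits with tp * pp * dp == n_gpus where
    pp divides n_blocks evenly and tp fits the NVLink domain."""
    count = 0
    for tp in range(1, max_tp + 1):
        if n_gpus % tp:
            continue
        rest = n_gpus // tp
        for pp in range(1, rest + 1):
            if rest % pp == 0 and n_blocks % pp == 0:
                count += 1  # dp = rest // pp is then determined
    return count

print(count_parallel_mappings(4096, 256))  # 36 splits for 256 blocks
print(count_parallel_mappings(4096, 254))  # 8 splits for 254 blocks
```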
Fig. 3. Model ratio and trillions of parameters for optimal scaling: linear scaling of the hidden size with the number of transformer blocks, in steps of 8,192 for hidden size and 32 for the number of blocks. Each cell contains the model size and the hidden-to-blocks ratio. Red represents narrower models, blue wider ones. Optimal choices are framed in white, with model size and ratio in bold.
To address this challenge, we propose scaling the number of blocks and attention heads with a step size that is a power of two, which makes it easier to scale such models over tensor and pipeline parallelism and provides better overall performance. Fig. 3 scales the number of blocks in steps of 32 for models up to 128 trillion parameters. These models all admit many millions of mapping solutions on various common system designs and across many system sizes.
|
| 60 |
+
|
| 61 |
+
The hidden step size shown in Fig. 3 is 8,192, however, when finding the most optimal (closest to ideal aspect ratio), we use a step size of 1,024 . For the remainder of this paper we use the model configurations found in Table II. All models have a sequence size of8,192, the feed forward size is fixed to $4 \times$ the hidden size, and the number of attention heads is equal to the number of blocks. For all experiments we limited the maximum batch size to 3,072.
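As a sanity check on Table II, the sketch below estimates parameter counts under the stated rules (feed-forward = 4× hidden), assuming the standard per-block weight cost of 12h² (4h² for the attention projections plus 8h² for the feed-forward) and ignoring embeddings and biases, so totals land slightly under the nominal names.

```python
# Back-of-the-envelope parameter count for the Table II configurations,
# assuming 12 * hidden^2 weights per block (an approximation that omits
# embeddings, biases, and norm parameters).
def params(hidden: int, num_blocks: int) -> int:
    per_block = 4 * hidden**2 + 8 * hidden**2  # attention + 4x feed-forward
    return per_block * num_blocks

for name, h, b in [("1T", 24576, 128), ("11T", 60416, 256), ("128T", 148480, 480)]:
    print(name, f"{params(h, b) / 1e12:.2f}T")  # ~0.93T, ~11.21T, ~126.99T
```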
TABLE II
TWELVE MULTI-TRILLION PARAMETER LLMS, FROM 1T TO 128T.

<table><tr><td><b>Name</b></td><td>Hidden</td><td>Attn Size</td><td>#Blocks</td><td>Aspect Ratio</td></tr><tr><td>1T</td><td>24,576</td><td>192</td><td>128</td><td>192</td></tr><tr><td>2T</td><td>32,768</td><td>205</td><td>160</td><td>204.8</td></tr><tr><td>4T</td><td>40,960</td><td>213</td><td>192</td><td>213.3</td></tr><tr><td>7T</td><td>50,176</td><td>224</td><td>224</td><td>224</td></tr><tr><td>11T</td><td>60,416</td><td>236</td><td>256</td><td>236</td></tr><tr><td>18T</td><td>70,656</td><td>245</td><td>288</td><td>245</td></tr><tr><td>26T</td><td>81,920</td><td>256</td><td>320</td><td>256</td></tr><tr><td>37T</td><td>94,208</td><td>268</td><td>352</td><td>267.6</td></tr><tr><td>53T</td><td>106,496</td><td>277</td><td>384</td><td>277.3</td></tr><tr><td>72T</td><td>119,808</td><td>288</td><td>416</td><td>288</td></tr><tr><td>96T</td><td>134,144</td><td>299</td><td>448</td><td>299.4</td></tr><tr><td>128T</td><td>148,480</td><td>309</td><td>480</td><td>309.3</td></tr></table>
## IV. TENSOR OFFLOADING FOR LLM SCALING

Fig. 4. Comparison of LLM scaling on 4,096 GPUs with and without offload memory. Such memory enables high training efficiency beyond 100T models.

While scaling out LLMs using standard DGX/HGX H100s with 8 NVLink-connected GPUs is possible, achieving high performance is not trivial. Fig. 4a shows the training efficiency while scaling up model size on a fixed system size of 4,096 GPUs. Even the smallest model, 1T, reaches only 60% efficiency, and efficiency decays rapidly until, at 18T, the model can no longer run. The main scalability issue is the lack of memory to store weights and activations during training, which in turn forces the use of activation recomputation and higher degrees of model parallelism. Deep pipeline parallelism combined with a lack of spare memory incurs an excessive time overhead in the form of a pipeline bubble, and tensor parallelism beyond the NVLink domain size of 8 increases communication time due to a lack of bandwidth.
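A first-order sketch of that bubble overhead, using the standard 1F1B estimate (a schematic of the effect, not Calculon's full pipeline model):

```python
# Pipeline bubble fraction under the common 1F1B estimate:
# bubble = (p - 1) / (m + p - 1) for p stages and m in-flight microbatches.
# When spare memory is scarce, m (or the interleaving level) cannot grow,
# so a deep pipeline leaves a large bubble.
def bubble_fraction(p: int, m: int) -> float:
    return (p - 1) / (m + p - 1)

print(round(bubble_fraction(32, 8), 3))    # 0.795: deep pipeline, few microbatches
print(round(bubble_fraction(32, 512), 3))  # 0.057: many microbatches amortize it
```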
These issues can be addressed by a secondary memory pool, where unused tensors from inactive transformer blocks can be transferred and retrieved as needed [17]. This could be implemented as CPU host memory, an array of PCIe-attached SSDs, or CXL-attached memory. Fig. 4b shows the training efficiency when using tensor offloading with a capacity of 1 TiB per GPU at infinite bandwidth. Thus, with enough offloading capacity and infinite offloading bandwidth, we could train models at least up to 128T parameters.
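The access pattern such a pool must sustain is simple and prefetch-friendly. Below is a sketch of the double-buffered loop we assume; fetch, evict, and compute are illustrative placeholders rather than a specific runtime's API.

```python
# Double-buffered tensor offloading: while block i computes out of HBM,
# block i+1 is prefetched from the offload pool and block i's updated
# tensors are written back afterwards. fetch/compute/evict are placeholders.
def run_blocks_with_offload(num_blocks, fetch, compute, evict):
    fetch(0)                       # warm up: stage the first block in HBM
    for i in range(num_blocks):
        if i + 1 < num_blocks:
            fetch(i + 1)           # prefetch the next block (overlapped DMA)
        compute(i)                 # compute on the resident block
        evict(i)                   # write the block back to the offload pool
```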
Fig. 5. Efficiency of offload memory compared to infinite offload bandwidth.

Fig. 5 inspects offloading capacities of 256 GiB, 512 GiB, 1 TiB, and 2 TiB. It compares the relative slowdown of using 50 GB/s and 100 GB/s of offloading bandwidth per direction against infinite bandwidth. We see that 50 GB/s incurs a significant slowdown for larger model sizes on a 4,096-GPU system. With 100 GB/s of offloading bandwidth, the majority of the systems perform nearly the same as with infinite bandwidth, which makes it an excellent target. While not a trivial amount of bandwidth, 100 GB/s could be implemented by 32 lanes of PCIe 5.0 or CXL; it is also well below the speed of the CPU-to-GPU link of the NVIDIA Grace-Hopper package [10]. Utilizing offload memory is not only efficient but within reach of current technology.
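As a quick arithmetic check on that claim (using approximate, not measured, link rates):

```python
# PCIe 5.0 runs at 32 GT/s per lane with 128b/130b encoding, i.e. roughly
# 3.9 GB/s per lane per direction before protocol overhead (approximate).
lanes = 32
gbps_per_lane = 32 * (128 / 130) / 8   # ~3.94 GB/s per lane
print(round(lanes * gbps_per_lane))    # ~126 GB/s per direction, above 100 GB/s
```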
Fig. 6. Training efficiency with model and system scaling using offloading memory with infinite bandwidth. Green dashed line indicates 75% MFU, red dashed line indicates 50% MFU.

Fig. 6 shows the training efficiency when using offloading running at 100 GB/s with capacities of 256 GiB, 512 GiB, 1 TiB, and 2 TiB across 4,096, 8,192, 12,288, and 16,384 GPUs. The major trends shown are:
- Small models on large systems lead to low efficiency.
- Large models on small systems lead to low efficiency.
- 256 GiB rarely produces good efficiency.
- For 8k, 12k, and 16k GPUs, 512 GiB is mostly sufficient.
- 1 TiB is nearly identical to 2 TiB.

## V. Model Strong Scaling
Fig. 7. Batch time and memory consumption breakdown for 1T model strong scaling from 4,096 to 16,384 GPUs.

In this section we analyze the strong scaling of the 1T parameter model from 4,096 to 16,384 GPUs, inspecting NVLink domain sizes of 8 and 16. Fig. 7 shows that training scales well up to 12,288 GPUs but suffers at 16,384 GPUs. NVLink size 8 is sufficient up to 12,288 GPUs, but NVLink size 16 is needed for higher efficiency at 16,384 GPUs. Adding extra processors requires assigning them to tensor, pipeline, or data parallelism, and none of these is free. We identified several reasons for the lack of scaling at 16,384 GPUs.
1) When increasing TP, the tensors may be divided into pieces too small to maintain high compute efficiency on the GPU.

2) When increasing TP, each message may become small enough to be latency dominated.

3) When attempting to overlap TP communication and computation, increasing TP reduces the computation size while the communication size remains the same. At particular FLOPs-to-bandwidth ratios the communication becomes exposed, leading to low efficiency (see the sketch after this list).

4) When overlapping TP communication and computation, the GPU must dedicate many cores to communication to sustain the high bandwidth of NVLink, reducing its computation speed. Adding a specialized direct memory access (DMA)-like engine for communication would eliminate this overhead, allowing optimal overlap.

5) Increasing PP either increases the pipeline bubble overhead or requires more memory for higher levels of interleaving to reduce the pipeline bubble.

6) Increasing DP requires more memory due to replication.

7) We constrain our models to a maximum batch size of 3,072 to preserve the convergence properties of prior studies. This means that the maximum available DP is 3,072; the rest must be either TP or PP.
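The sketch below illustrates reason 3 with a toy overlap model: per-GPU compute shrinks as 1/TP while the per-GPU all-reduce volume stays roughly constant, so beyond some TP the communication can no longer hide behind the matmul. The FLOP and byte counts are placeholders, not measured values.

```python
# Toy TP overlap model for reason 3: efficiency drops once the (roughly
# constant) communication time exceeds the shrinking compute time.
def overlap_efficiency(flops, comm_bytes, tp, peak_flops=1e15, link_bw=450e9):
    compute_t = (flops / tp) / peak_flops   # per-GPU matmul time, shrinks with TP
    comm_t = comm_bytes / link_bw           # per-GPU all-reduce time, ~constant
    return compute_t / max(compute_t, comm_t)

for tp in (2, 4, 8, 16, 32):                # placeholder layer: 2 TFLOP, 200 MB
    print(tp, round(overlap_efficiency(2e12, 2e8, tp), 3))
```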
## VI. Scaling Models Beyond 10T Parameters

In this section we test weak scaling by growing the model sizes to 128T parameters. Fig. 8 shows model scaling on 4,096 GPUs equipped with 80 GiB and 120 GiB of HBM and 256 GiB, 512 GiB, 1 TiB, and 2 TiB of offloading capacity. The figure demonstrates that while scaling model training on 4,096 GPUs works well with 80 GiB of HBM for models up to 11T parameters, the HBM size must increase to 120 GiB to scale further, even when extra offloading memory is available. This happens because, even when offloading is used, there must be enough HBM to hold two transformer blocks: the one used in computation and the one in flight for offloading and prefetching. During model scaling, the transformer block size grows mostly due to weights and activations. Unsurprisingly, offload memory capacity also needs to scale accordingly.
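A rough sketch of that residency constraint follows, assuming FP8 weights at one byte per parameter, 12h² weights per block sharded over the TP group, and a crude activation term; these are simplifying assumptions rather than Calculon's full accounting.

```python
# Two transformer blocks (resident + prefetched) must fit in HBM even with
# offloading. Simplified FP8 footprint, sharded across a TP group of 8.
GiB = 2**30

def block_bytes(hidden, tp=8, seq=8192, microbatch=1):
    weights = 12 * hidden * hidden // tp                 # FP8: ~1 byte/parameter
    activations = 20 * seq * hidden * microbatch // tp   # rough estimate
    return weights + activations

for name, h in [("11T", 60416), ("128T", 148480)]:
    print(name, f"~{2 * block_bytes(h) / GiB:.0f} GiB for two resident blocks")
```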
Fig. 8. Efficiency and memory consumption for LLM training on 4,096 GPUs. Green dashed line indicates 75% MFU, red dashed line indicates 50%. Memory consumption presented for the 120 GiB HBM and 2 TiB offload memory.

Our experiments indicate that growing the HBM size to 120 GiB and the offload memory to 2 TiB is enough to sustain further model scaling to 100T parameters. We can see that past 11T parameters, models occupy most of the available memory. This indicates that further efficiency improvements are possible, either by providing more memory or by increasing the size of the NVLink domain to reduce per-GPU weight space and increase the local microbatch size. These experiments show that the proposed LLMs can scale up to 128T parameters while maintaining an MFU above 75%, more than typically seen on current systems for much smaller LLMs.
## VII. CONCLUSION

Our co-design analysis reveals that well-structured multi-trillion parameter LLMs can train efficiently at 75%+ MFU when supplied with optimal settings and a secondary memory pool for tensor offloading. We analyzed strong scaling of models, finding optimal configuration strategies and quantitatively revealing fundamental limitations. We analyzed weak scaling of models, showing successful scaling up to 11T parameters with tensor offloading alone, and up to 128T parameters using 120 GiB of HBM and a 2 TiB offloading memory.
## REFERENCES

[1] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei, "Language models are few-shot learners," in Proceedings of the 34th International Conference on Neural Information Processing Systems, ser. NIPS'20. Red Hook, NY, USA: Curran Associates Inc., 2020.

[2] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel, "PaLM: Scaling language modeling with pathways," 2022.

[3] J. Hoffmann, S. Borgeaud, A. Mensch, E. Buchatskaya, T. Cai, E. Rutherford, D. de Las Casas, L. A. Hendricks, J. Welbl, A. Clark, T. Hennigan, E. Noland, K. Millican, G. van den Driessche, B. Damoc, A. Guy, S. Osindero, K. Simonyan, E. Elsen, J. W. Rae, O. Vinyals, and L. Sifre, "Training compute-optimal large language models," 2022.

[4] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei, "Scaling laws for neural language models," 2020.

[5] D. M. Katz, M. J. Bommarito, S. Gao, and P. Arredondo, "GPT-4 passes the bar exam," SSRN Electronic Journal, 2023. [Online]. Available: https://ssrn.com/abstract=4389233

[6] V. Korthikanti, J. Casper, S. Lym, L. McAfee, M. Andersch, M. Shoeybi, and B. Catanzaro, "Reducing activation recomputation in large transformer models," 2022. [Online]. Available: https://arxiv.org/abs/2205.05198

[7] Y. Levine, N. Wies, O. Sharir, H. Bata, and A. Shashua, "Limits to depth efficiencies of self-attention," in Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, Eds., vol. 33. Curran Associates, Inc., 2020, pp. 22640-22651. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2020/file/ff4dfdf5904e920ce52b48c1cef97829-Paper.pdf

[8] O. Lieber, O. Sharir, B. Lenz, and Y. Shoham, "Jurassic-1: Technical details and evaluation," AI21 Labs, Tech. Rep., Aug. 2021.

[9] D. Narayanan, M. Shoeybi, J. Casper, P. LeGresley, M. Patwary, V. Korthikanti, D. Vainbrand, P. Kashinkunti, J. Bernauer, B. Catanzaro, A. Phanishayee, and M. Zaharia, "Efficient large-scale language model training on GPU clusters using Megatron-LM," in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, ser. SC '21. New York, NY, USA: Association for Computing Machinery, 2021. [Online]. Available: https://doi.org/10.1145/3458817.3476209

[10] NVIDIA, "NVIDIA Grace Hopper Superchip architecture," 2022. [Online]. Available: https://resources.nvidia.com/en-us-grace-cpu/nvidia-grace-hopper

[11] NVIDIA, "NVIDIA H100 Tensor Core GPU architecture," 2022. [Online]. Available: https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf

[12] OpenAI, "GPT-4 technical report," 2023.

[13] D. Patterson, J. Gonzalez, Q. Le, C. Liang, L.-M. Munguia, D. Rothchild, D. So, M. Texier, and J. Dean, "Carbon emissions and large neural network training," 2021. [Online]. Available: https://arxiv.org/abs/2104.10350

[14] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners," 2019.

[15] S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He, "ZeRO: Memory optimizations toward training trillion parameter models," in Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, ser. SC '20. IEEE Press, 2020.

[16] J. Rasley, S. Rajbhandari, O. Ruwase, and Y. He, "DeepSpeed: System optimizations enable training deep learning models with over 100 billion parameters," in Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ser. KDD '20. New York, NY, USA: Association for Computing Machinery, 2020, pp. 3505-3506. [Online]. Available: https://doi.org/10.1145/3394486.3406703

[17] J. Ren, S. Rajbhandari, R. Y. Aminabadi, O. Ruwase, S. Yang, M. Zhang, D. Li, and Y. He, "ZeRO-Offload: Democratizing billion-scale model training," in 2021 USENIX Annual Technical Conference, USENIX ATC 2021, July 14-16, 2021, I. Calciu and G. Kuenning, Eds. USENIX Association, 2021, pp. 551-564. [Online]. Available: https://www.usenix.org/conference/atc21/presentation/ren-jie

[18] S. Smith, M. Patwary, B. Norick, P. LeGresley, S. Rajbhandari, J. Casper, Z. Liu, S. Prabhumoye, G. Zerveas, V. Korthikanti, E. Zhang, R. Child, R. Y. Aminabadi, J. Bernauer, X. Song, M. Shoeybi, Y. He, M. Houston, S. Tiwary, and B. Catanzaro, "Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model," 2022. [Online]. Available: https://arxiv.org/abs/2201.11990

[19] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, A. Rodriguez, A. Joulin, E. Grave, and G. Lample, "LLaMA: Open and efficient foundation language models," 2023.
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/rqn2v1Ltgn0/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/ymfPxccNUZ/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,151 @@
# Towards A Reconfigurable Systolic Array with Multi-Level Packing for Transformers
*Abstract* - Transformer-based models have achieved remarkable success in extensive natural language processing tasks. To handle the variable-length sentences of human language, prior works suffer from low hardware efficiency, due either to the shape mismatch between fixed-shape processing elements (PEs) and variable-shape workloads under data parallelism, or to large bubbles under pipeline parallelism. This ongoing work proposes a hybrid parallelism that mixes data parallelism for linear operators with pipeline parallelism for the attention. We develop a reconfigurable systolic array with multi-level packing to improve hardware efficiency. First, linear operators for different inputs can be packed along the array columns to improve spatial efficiency. Meanwhile, to boost temporal efficiency, we develop a head-level pipeline for attention with different stages packed on the array. We further skip the redundant computation in the masked attention by packing the computation of two heads along time. Packing decisions are explored with a dynamic programming based algorithm to maximize the overall throughput. Applied to GPT, our FPGA design achieves 1.16× higher normalized throughput and 1.94× better runtime MAC utilization over the state-of-the-art GPU performance for variable-length sequences from the MRPC, RTE and SQuADv2 datasets.
## I. INTRODUCTION

Transformer-based models have achieved remarkable triumphs in a wide range of deep learning tasks for natural language processing, such as machine translation [1], text classification [2] and generation [3], [4]. This extensive success is attributed to a task-agnostic model architecture whose number of encoder and decoder layers and vocabulary size keep increasing for better quality on various tasks. Such a trend, along with the unlimited text length these language models need to handle, results in huge amounts of computation and parameters. The full GPT-3 [5] holds 175 billion parameters and requires $3.14 \times 10^{23}$ floating-point operations (FLOPs) for training, which would cost over \$4.6M on a Tesla V100 cloud instance for a single training run. The progressively higher computational demand calls for the need to improve the efficiency of these models on devices.
The acceleration approaches in prior works fall into two paradigms. One approach exploits intra-operator data parallelism (Fig. 1a) on an operator-by-operator basis. [6] optimizes operator partitioning for Transformer inference on TPUv4 [7]. [8], [9] boost GPU performance with tensor cores, while [10], [11] focus on memory optimizations on GPU. Padding arises from variable-length inputs in a batch because popular deep learning frameworks [12], [13] can only handle rectangular shapes; the padded zeros introduce excessive overhead in both computation and memory. [14], [15] reduce the padding redundancy by reordering inputs during pre-processing, while [16] eliminates padding for linear blocks and fused attention on GPU by offsetting the variable-length inputs in memory. These works suffer from low efficiency either spatially, from the mismatch between fixed-shape processing elements (PEs) and variable-shape workloads, or temporally, from non-overlapped memory access latency, especially in data-parallel fused attention.
Fig. 1. Execution timeline for different parallel approaches. (a) Intra-operator data parallelism. (b) Sub-layer pipeline parallelism. (c) Hybrid parallelism. The time length of each block is only for illustration.
Another approach resorts to inter-operator pipeline parallelism (Fig. 1b), where consecutive operators are assigned to different PEs. [17]-[20] accelerate training of deep learning models with a micro-batch layer pipeline. [21] constructs a sequence-wise sub-layer pipeline with approximated attention and feed-forward network. Since the computation of attention and linear blocks is at least linear in the variable input length, pipeline parallelism at the sub-layer level inevitably results in severe pipeline bubbles when length varies widely across input sequences.
We make two key observations on the GPU performance of Transformer-based models with intra-operator data parallelism. First, the attention suffers from both low temporal and spatial efficiency even with a fused kernel, indicating that the attention could benefit more from inter-operator pipeline parallelism than from intra-operator data parallelism. Second, the highly optimized linear blocks obtain only 70% temporal efficiency and 25% spatial efficiency. Besides the shape mismatch, they are also limited by the capacity of shared memory and registers, because data parallelism leads to data replication and hence less data reuse.
To address the above problems, we propose a hybrid parallelism (Fig. 1c): data parallelism for linear blocks and pipeline parallelism for attention, the latter finer-grained than the sub-layer level to reduce pipeline bubbles. However, challenges arise in the architectural support for this hybrid parallelism. On one hand, inter-operator pipeline parallelism needs to split the PEs across pipeline stages; this split, along with workload decomposition, can also alleviate the shape mismatch. On the other hand, intra-operator data parallelism needs to unify the PEs and registers to maximize data reuse. To meet both requirements, we propose a runtime reconfigurable systolic array (RSA), where PEs across columns can work either together on a single operator or separately on multiple operators. Specifically, the RSA can be split for different input tokens in linear operators, or for different pipeline stages in the attention. We refer to this column-wise reconfigurable working pattern as column packing. Moreover, the masking in the decoder, which preserves the auto-regressive property by preventing leftward information flow, brings 50% redundancy to the attention, especially for long sequences, yet is neglected in prior works. We further propose mask packing to skip the redundant computation between two heads assigned to the same RSA columns.
Our contributions are summarized as follows:

- We develop a reconfigurable systolic array for hybrid parallelism, with data parallelism for linear blocks and pipeline parallelism for attention, to improve the hardware efficiency of Transformer-based models.

- We propose two-level packing, column packing and mask packing, to boost efficiency spatially and temporally for variable-length inputs. Packing decisions are explored with a dynamic programming based algorithm to maximize the overall throughput.

- Applied to GPT, our design on a U200 FPGA shows 1.16× higher normalized throughput and 1.94× better runtime MAC utilization over the state-of-the-art GPU performance for variable-length input sequences from the MRPC, RTE and SQuADv2 datasets.

In the following sections, we first describe the details of column packing and mask packing in Section II and the RSA architecture in Section III. We then explore the column packing decisions for hybrid parallelism in Section IV. Sections V and VI present experimental results and conclusions.
## II. Method

## A. Column Packing

1) Pack Linear Blocks: We exploit intra-operator data parallelism for each linear operator, namely an $M \times N \times K$ matrix multiplication (MM), where $M$, $N$, $K$ stand for input rows, output columns and hidden size. We make some observations on the MM shapes in a Transformer-based model. For a variable-length input, $M$ is equal to the input sequence length $L$; whether $N$ and $K$ are variable differs across MMs. In the first case, which is also the most common case in the linear blocks of Transformer-based models, $N$ and $K$ are fixed as a multiple of the head size $d_h$. The second case includes the two MMs in the attention with variable $N$ or $K$, whose shapes are $L \times L \times d_h$ and $L \times d_h \times L$. The shape mismatch between fixed-shape PEs and variable-shape MMs leads to low efficiency. Rather than suffering from multidimensional shape mismatch between PE and MM, we map the fixed shapes to the RSA rows and the variable shapes to the RSA columns and the temporal dimension, so that the shape mismatch is maximally alleviated by column packing. To be more specific, for an MM with variable-length inputs, we pack $N$ from different input sequences along RSA columns to maximize spatial efficiency. We also take advantage of split-k, as described in [9], to partially unroll the $K$ dimension to balance the parallel workloads along columns for temporal efficiency.
2) Pack Attention: We propose a coarse-grained head-level pipeline for the attention with six stages: $KQ$ load, MM $KQ^T$, $V$ load, softmax, MM $SV^T$, and final save. The two MMs are packed along RSA columns during the pipeline, where the former has variable $N$ and the latter variable $K$. Since two different variable dimensions are mapped to RSA columns, weight-stationary and output-stationary dataflows are respectively required. Moreover, the number of heads to run per stage is worth studying: packing more heads into an MM stage improves spatial efficiency locally within the stage, but can hurt global efficiency, since the larger pipeline granularity brings more bubbles. The pipeline stage partition is discussed further in Section IV.
## B. Mask Packing

Each token only needs the computation results from its preceding tokens in the input sequence, not from those after it. A Transformer decoder masks out the unnecessary ones to preserve the auto-regressive property. To eliminate the masking redundancy in $\mathrm{softmax}(\mathrm{mask}(KQ^T))V^T$, we propose mask packing, as in Fig. 2d. Rather than applying masking after the full computation of two $KQ^T$s, we skip the redundant computation and generate only a packed result matrix $S$. We use $S$ as the packed layout for the following softmax and $SV^T$ for memory efficiency. PEs therefore need to handle $KQ^T$ and $SV^T$ with fixed and variable reduction lengths, respectively, and the softmax module needs to handle vectors in the packed layout.
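One plausible packed layout (our illustration, not the exact RTL layout): head A's causal lower triangle is stored in place while head B's is stored transposed in the strict upper triangle, so two masked $KQ^T$ results share a single $L \times L$ buffer; B's diagonal would be handled separately.

```python
import numpy as np

# Pack two heads' causal (lower-triangular) score matrices into one L x L
# buffer: head A in the lower triangle, head B transposed into the strict
# upper triangle. Illustrative layout only; B's diagonal is stored apart.
def pack_two_heads(scores_a: np.ndarray, scores_b: np.ndarray) -> np.ndarray:
    packed = np.tril(scores_a)            # head A: valid entries, rows >= cols
    packed += np.triu(scores_b.T, k=1)    # head B: valid entries, transposed
    return packed
```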
## III. Hardware Architecture Design

To provide the underlying architecture for column packing and mask packing, we develop an RSA along with arbiter networks and a nonlinear vector module (NVM), as shown in Fig. 2b.

We develop a two-dimensional systolic array with PEs and coupled shift registers. Fig. 2b shows the circuit diagram of an RSA PE, which has three levels of reconfigurability from the control signals (gray). First, use_reg_row configures the RSA split along columns by selecting the input data path to the multiplier. If it is set to 1, the multiplier takes the value stored in the row register (orange) as input via the reconfigurable data path (blue) rather than the value forwarded from the left PE, so that two neighboring PEs can work on separate workloads. Second, the coupled shift register is for input buffering, and its buffer switch can be configured for mask packing. Third, use_forward_psum configures the dataflow of the RSA. If it is set to 1, the PE uses the partial sum forwarded from the upper PE, enabling weight-stationary dataflow; otherwise, the accumulation is performed locally as output-stationary dataflow. The reconfigurable dataflow serves the two MMs in attention, with variable $N$ and $K$ respectively, so that they can be packed along RSA columns.
Fig. 2. (a) System diagram. (b) Circuit diagram of an RSA PE and its coupled shift registers. The input data path and buffer switch can be configured for packing. (c) RSA split into halves with different dataflows for column packing. (d) Mask packing. We show an example where we skip the redundant computation for $KQ^T$ from two heads.
We use two arbiter networks for the interconnection between the RSA and on-chip buffers to meet the different communication patterns under column packing. For an MM whose $K$ dimension is partially unrolled across columns, we need to collectively reduce the partial sums from multiple columns. For the attention pipeline, the result of the first MM computed on one RSA partition is written to on-chip buffers and then fed to another RSA partition. These two patterns are realized by the arbiter networks. Moreover, to handle the packed layout for mask packing, our NVM takes advantage of a configurable reduction tree proposed in [22] for maximum and sum reduction with arbitrary length in softmax.

## IV. Scheduling
## A. Column Packing for a Single Operator

We first discuss the column packing decisions for an MM with variable-length inputs of shape $L_i \times N \times K$, where $L_i$ is the length of the $i$-th input in a batch. Mapping $N$ and part of $K$ to the spatial column dimension and $L_i$ to the temporal dimension, we enumerate all combinations of $N$ values and $K$ factors to find the pair with maximal spatial efficiency. For example, suppose we map an MM with $N = 4$ and $K = 8$ to an RSA with 16 columns and 4 rows. To maximize spatial efficiency, we unroll $K = 8$ along 2 columns in addition to the 4 rows. Still, only 8 columns are used ($N \times 2 = 8$, where 2 is $K$'s column unroll factor). So we split the RSA into two partitions, each holding 8 columns and serving part of $L_i$ over time, and split $L_i$ into two parts to balance the workload packed along columns.
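A small sketch of this search for the worked example above; it assumes $K$ is fully unrolled across the array rows times a column factor, which is one simple way to realize the enumeration described in the text.

```python
# Column packing for one MM on a cols x rows RSA: fully unroll K over
# rows * ku, pack copies of the operator along the remaining columns, and
# serve different chunks of L_i per copy. Matches the worked example.
def pack_mm(n, k, cols=16, rows=4):
    ku = max(k // rows, 1)            # K's column unroll factor (rows * ku = k)
    used = n * ku                     # columns one copy of the MM occupies
    copies = cols // used             # packed copies along the columns
    spatial_eff = copies * used / cols
    return ku, copies, spatial_eff

print(pack_mm(4, 8))  # (2, 2, 1.0): two 8-column partitions, fully utilized
```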
## B. Column Packing for Attention Pipeline

For the attention pipeline at head level, we aim to split the head sequence $\mathcal{H} = \{h_{ij}\}$, where $i$ is the sequence index in a batch and $j$ is the head index, into multiple stages while minimizing the overall latency. Within each stage, the intra-operator column packing of Section IV-A is applied; mask packing is also applied to each MM. We formulate the pipeline stage partition as a dynamic programming problem whose optimal substructure is given in Eq. 1. $p$ is a bit vector, where 1 means the $k$-th head in $\mathcal{H}$ is packed with its immediately preceding head and 0 means no packing; the stage partition can be inferred from $p$ with a simple union-find method. Column packing is constrained by the on-chip memory capacity: $M_{\max}$ is the maximum allowed on-chip memory pressure and $M_k$ is the memory pressure of the $k$-th head. Iterating over $h_k$ in $\mathcal{H}$, we find the maximal overall throughput $\mathbb{T}$ with the head either packed into the stage of its preceding head or not. If $h_k$ is packed, we record $p[k] = 1$ and $M_k$ stays held while exploring the column packing decision for the next head; otherwise, we consider the next head at a new stage with budget $M_{\max}$. The optimal stage partition maximally reduces the pipeline bubbles (a sketch of this recurrence follows Eq. 1).
$$
\mathbb{T}(k, p, M) = \max
\begin{cases}
\mathbb{T}(k-1,\; p[k]=1,\; M - M_k) & \text{if } M > M_k \\
\mathbb{T}(k-1,\; p[k]=0,\; M_{\max}) &
\end{cases}
\tag{1}
$$
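A minimal sketch of the recurrence in Eq. 1, written as a memoized latency minimization (maximizing throughput is equivalent to minimizing overall latency here). The per-stage cost model stage_time is a placeholder; in the real scheduler it comes from the column-packing cost model of Section IV-A.

```python
from functools import lru_cache

# mem[k] is the memory pressure M_k of the k-th head in H;
# stage_time(k, packed) is a placeholder for the packing cost model.
def best_partition_latency(mem, m_max, stage_time):
    @lru_cache(maxsize=None)
    def best(k, budget):
        if k < 0:
            return 0.0
        # p[k] = 0: head k opens a new stage, the budget resets to M_max
        t = best(k - 1, m_max) + stage_time(k, packed=False)
        # p[k] = 1: pack head k with its preceding head, if memory allows
        if budget > mem[k]:
            t = min(t, best(k - 1, budget - mem[k]) + stage_time(k, packed=True))
        return t
    return best(len(mem) - 1, m_max)
```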
## V. EXPERIMENT RESULTS

## A. Evaluation Setting

We implement our accelerator on a Xilinx U200 FPGA with an RSA of 4 rows and 1,024 columns, an NVM with vector length 32, and four DDR4 channels. The latency in cycle counts for the evaluation below is collected through RTL-level simulation with the Xilinx Vivado Suite.
We benchmark our design with six different settings of input sequence length. The first three are constructed with fixed-length inputs of length 64, 512, and 2,048 at batch size 8. The other three collect variable-length inputs from the MRPC, RTE and SQuADv2 [24] test sets, respectively, packed into batches of size 8. The three datasets have average/maximum sequence lengths of 14/40, 54/240, and 167/791, respectively. The former two datasets are from the GLUE [25] benchmark suite, with representative small and medium lengths, while the latter covers longer lengths. We run these datasets on GPT for evaluation.
## B. Performance with Step-wise Optimization

We apply a step-wise evaluation to show the effect of hybrid parallelism with column packing and mask packing. Intra-operator Data Parallel runs the model on the RSA with intra-operator data parallelism on an operator-by-operator basis. Layer Pipeline runs a two-stage layer pipeline, where the RSA is split into halves and each half runs a Transformer layer for different sequences. The other three settings apply hybrid parallelism incrementally with the different packing methods.

Fig. 3 shows the impact of the step-wise optimization on GPT. One can see that Layer Pipeline is limited by off-chip memory bandwidth on a single device and thus performs worse for longer sequences. The hybrid parallelism is effective in all cases, with 1.17× higher throughput on average than intra-operator data parallelism, while column packing and mask packing bring 1.21× and 1.26× performance boosts, respectively. Column packing gains more for short sequences, such as fixed 64, MRPC and RTE, through better spatial efficiency. Mask packing benefits long sequences more, bringing an additional 30% for fixed 2048 but only marginal gains in the other cases. This is because the computation of attention grows quadratically with sequence length: attention takes 50% of the total computation for fixed 2048, but less than 10% for fixed 64.
|
| 84 |
+
|
| 85 |
+

|
| 86 |
+
|
| 87 |
+
Fig. 3. GPT performance on RSA with step-wise optimization.
|
| 88 |
+
|
| 89 |
+
## C. End-to-End Performance
|
| 90 |
+
|
| 91 |
+
Table I compares our performance on GPT with other works on GPU and FPGA. Batch size 8 is used in all cases. We evaluate GPU performance with [16], the state-of-the-art GPU work for variable-length inputs. [21] is optimized for variable-length inputs on FPGA but does not report detailed throughput for each dataset, and [23] only reports performance for fixed-length inputs, as do other FPGA works. We therefore compare performance on fixed-length-128 inputs with [23] and on variable-length inputs with [16] across the three datasets. Throughput is normalized by the number of MAC units at 16-bit precision. Our design outperforms the GPU and FPGA works by ${1.16} \times$ and ${2.11} \times$ in normalized throughput, respectively, across fixed-length and variable-length inputs. The advantage comes from the better efficiency enabled by column packing and mask packing on our RSA, which yields ${1.94} \times$ and ${1.18} \times$ better MAC efficiency than the GPU and FPGA works. This demonstrates the advantage of our RSA architecture for Transformer-based models with variable-length inputs.
|
| 92 |
+
|
| 93 |
+
## VI. CONCLUSION
|
| 94 |
+
|
| 95 |
+
We propose a hybrid parallelism for Transformer-based models with variable-length inputs: data parallelism for linear operators and pipeline parallelism for attention. To support it, we develop a reconfigurable systolic array with multi-level packing. First, for a single linear operator, we pack the computation of different input sequences along the array columns for spatial efficiency. Second, to improve the temporal efficiency of the attention block, we develop a head-level pipeline with stages packed along the array columns. Moreover, we develop mask packing to skip the redundant computation that is masked out by Transformer decoder masking. Column packing decisions are explored with a dynamic programming based algorithm to maximize the overall throughput. Applied to GPT, our design on a Xilinx U200 FPGA outperforms the state-of-the-art GPU work for variable-length inputs by ${1.16} \times$ in normalized throughput and ${1.94} \times$ in runtime MAC utilization across the MRPC, RTE and SQuADv2 datasets.
|
| 96 |
+
|
| 97 |
+
<table><tr><td>Input Sequence</td><td colspan="3">Fixed 128</td><td colspan="2">MRPC</td><td colspan="2">RTE</td><td colspan="2">SQuADv2</td></tr><tr><td>Platform</td><td>[16]</td><td>[23]</td><td>Ours</td><td>[16]</td><td>Ours</td><td>[16]</td><td>Ours</td><td>[16]</td><td>Ours</td></tr><tr><td>Device</td><td>A100 GPU</td><td>ZCU102 FPGA</td><td>U200 FPGA</td><td>A100 GPU</td><td>U200 FPGA</td><td>A100 GPU</td><td>U200 FPGA</td><td>A100 GPU</td><td>U200 FPGA</td></tr><tr><td>Precision</td><td>FP16</td><td>INT8</td><td>INT16</td><td>FP16</td><td>INT16</td><td>FP16</td><td>INT16</td><td>FP16</td><td>INT16</td></tr><tr><td>Frequency (MHz)</td><td>1095</td><td>214</td><td>200</td><td>1095</td><td>200</td><td>1095</td><td>200</td><td>1095</td><td>200</td></tr><tr><td>Tensor Core/DSP</td><td>432</td><td>3287</td><td>4160</td><td>432</td><td>4160</td><td>432</td><td>4160</td><td>432</td><td>4160</td></tr><tr><td>Runtime Utilization (FLOPS/MAC)</td><td>0.60</td><td>0.79</td><td>0.93</td><td>0.21</td><td>0.66</td><td>0.34</td><td>0.70</td><td>0.54</td><td>0.75</td></tr><tr><td>Normalized Throughput (GFLOPS/MAC)</td><td>0.16</td><td>0.09</td><td>0.19</td><td>0.11</td><td>0.13</td><td>0.10</td><td>0.14</td><td>0.14</td><td>0.15</td></tr></table>
|
| 98 |
+
|
| 99 |
+
TABLE I: COMPARISON OF END-TO-END PERFORMANCE WITH GPU AND OTHER FPGA WORKS.
|
| 100 |
+
|
| 101 |
+
## REFERENCES

[1] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in neural information processing systems, vol. 30, 2017.
|
| 102 |
+
|
| 104 |
+
|
| 105 |
+
[2] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
|
| 106 |
+
|
| 107 |
+
[3] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., "Improving language understanding by generative pre-training," 2018.
|
| 108 |
+
|
| 109 |
+
[4] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, I. Sutskever et al., "Language models are unsupervised multitask learners," OpenAI blog, vol. 1, no. 8, p. 9, 2019.
|
| 110 |
+
|
| 111 |
+
[5] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in neural information processing systems, vol. 33, pp. 1877-1901, 2020.
|
| 112 |
+
|
| 113 |
+
[6] R. Pope, S. Douglas, A. Chowdhery, J. Devlin, J. Bradbury, A. Levskaya, J. Heek, K. Xiao, S. Agrawal, and J. Dean, "Efficiently scaling transformer inference," arXiv preprint arXiv:2211.05102, 2022.
|
| 114 |
+
|
| 115 |
+
[7] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers et al., "In-datacenter performance analysis of a tensor processing unit," in Proceedings of the 44th annual international symposium on computer architecture, 2017, pp. 1-12.
|
| 116 |
+
|
| 117 |
+
[8] S. Feng, B. Hou, H. Jin, W. Lin, J. Shao, R. Lai, Z. Ye, L. Zheng, C. H. Yu, Y. Yu et al., "Tensorir: An abstraction for automatic tensorized program optimization," in Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, 2023, pp. 804-817.
|
| 118 |
+
|
| 119 |
+
[9] A. Kerr, D. Merrill, J. Demouth, and J. Tran, "CUTLASS: Fast linear algebra in CUDA C++," NVIDIA Developer Blog, 2017.
|
| 120 |
+
|
| 121 |
+
[10] X. Wang, Y. Xiong, X. Qian, Y. Wei, L. Li, and M. Wang, "Lightseq2: Accelerated training for transformer-based models on gpus," arXiv preprint arXiv:2110.05722, 2021.
|
| 122 |
+
|
| 123 |
+
[11] T. Dao, D. Y. Fu, S. Ermon, A. Rudra, and C. Ré, "Flashattention: Fast and memory-efficient exact attention with io-awareness," arXiv preprint arXiv:2205.14135, 2022.
|
| 124 |
+
|
| 125 |
+
[12] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga et al., "Pytorch: An imperative style, high-performance deep learning library," Advances in neural information processing systems, vol. 32, 2019.
|
| 126 |
+
|
| 127 |
+
[13] M. Abadi, P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, M. Devin, S. Ghemawat, G. Irving, M. Isard et al., "Tensorflow: A system for large-scale machine learning," in OSDI, vol. 16, Savannah, GA, USA, 2016, pp. 265-283.
|
| 128 |
+
|
| 129 |
+
[14] J. Fang, Y. Yu, C. Zhao, and J. Zhou, "TurboTransformers: An Efficient GPU Serving System for Transformer Models," ser. PPoPP '21. New York, NY, USA: Association for Computing Machinery, 2021, p. 389-402. [Online]. Available: https://doi.org/10.1145/3437801.3441578
|
| 130 |
+
|
| 131 |
+
[15] J. Zeng, M. Li, Z. Wu, J. Liu, Y. Liu, D. Yu, and Y. Ma, "Boosting Distributed Training Performance of the Unpadded BERT Model," arXiv preprint arXiv:2208.08124, 2022.
|
| 132 |
+
|
| 133 |
+
[16] Y. Zhai, C. Jiang, L. Wang, X. Jia, S. Zhang, Z. Chen, X. Liu, and Y. Zhu, "ByteTransformer: A High-Performance Transformer Boosted for Variable-Length Inputs," 2023.
|
| 134 |
+
|
| 135 |
+
[17] S. Fan, Y. Rong, C. Meng, Z. Cao, S. Wang, Z. Zheng, C. Wu, G. Long, J. Yang, L. Xia et al., "Dapple: A pipelined data parallel approach for training large models," in Proceedings of the 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, 2021, pp. 431-445.
|
| 136 |
+
|
| 137 |
+
[18] Y. Huang, Y. Cheng, A. Bapna, O. Firat, D. Chen, M. Chen, H. Lee, J. Ngiam, Q. V. Le, Y. Wu et al., "Gpipe: Efficient training of giant neural networks using pipeline parallelism," Advances in neural information processing systems, vol. 32, 2019.
|
| 138 |
+
|
| 139 |
+
[19] D. Narayanan, A. Harlap, A. Phanishayee, V. Seshadri, N. R. Devanur, G. R. Ganger, P. B. Gibbons, and M. Zaharia, "Pipedream: Generalized pipeline parallelism for dnn training," in Proceedings of the 27th ACM Symposium on Operating Systems Principles, 2019, pp. 1-15.
|
| 140 |
+
|
| 141 |
+
[20] D. Narayanan, A. Phanishayee, K. Shi, X. Chen, and M. Zaharia, "Memory-efficient pipeline-parallel dnn training," in International Conference on Machine Learning. PMLR, 2021, pp. 7937-7947.
|
| 142 |
+
|
| 143 |
+
[21] H. Peng, S. Huang, S. Chen, B. Li, T. Geng, A. Li, W. Jiang, W. Wen, J. Bi, H. Liu, and C. Ding, "A Length Adaptive Algorithm-Hardware Co-Design of Transformer on FPGA through Sparse Attention and Dynamic Pipelining," in Proceedings of the 59th ACM/IEEE Design Automation Conference, ser. DAC '22. New York, NY, USA: Association for Computing Machinery, 2022, p. 1135-1140. [Online]. Available: https://doi.org/10.1145/3489517.3530585
|
| 144 |
+
|
| 145 |
+
[22] E. Qin, A. Samajdar, H. Kwon, V. Nadella, S. Srinivasan, D. Das, B. Kaul, and T. Krishna, "SIGMA: A Sparse and Irregular GEMM Accelerator with Flexible Interconnects for DNN Training," in 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA), 2020, pp. 58-70.
|
| 146 |
+
|
| 147 |
+
[23] Z. Liu, G. Li, and J. Cheng, "Hardware acceleration of fully quantized BERT for efficient natural language processing," in 2021 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2021, pp. 513-516.
|
| 148 |
+
|
| 149 |
+
[24] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, "Squad: 100,000+ questions for machine comprehension of text," arXiv preprint arXiv:1606.05250, 2016.
|
| 150 |
+
|
| 151 |
+
[25] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, and S. R. Bowman, "GLUE: A multi-task benchmark and analysis platform for natural language understanding," arXiv preprint arXiv:1804.07461, 2018.
|
papers/ISCA/ISCA 2023/ISCA 2023 Workshop/ISCA 2023 Workshop ASSYST/ymfPxccNUZ/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,127 @@
| 1 |
+
§ TOWARDS A RECONFIGURABLE SYSTOLIC ARRAY WITH MULTI-LEVEL PACKING FOR TRANSFORMERS
|
| 2 |
+
|
| 3 |
+
Abstract - Transformer-based models have achieved remarkable success in extensive tasks for natural language processing. To handle the variable-length sentences of human language, prior works suffer from low hardware efficiency, due either to the shape mismatch between fixed-shape PEs (processing elements) and variable-shape workloads under data parallelism, or to large bubbles under pipeline parallelism. This ongoing work proposes a hybrid parallelism that mixes data parallelism for linear operators with pipeline parallelism for the attention. We develop a reconfigurable systolic array with multi-level packing to improve hardware efficiency. First, linear operators for different inputs can be packed along the array columns to improve spatial efficiency. Meanwhile, to boost temporal efficiency, we develop a head-level pipeline for attention with different stages packed on the array. We further skip the redundant computation in the masked attention by packing the computation of two heads along time. Packing decisions are explored with a dynamic programming based algorithm to maximize the overall throughput. Applied to GPT, our FPGA design achieves ${1.16} \times$ higher normalized throughput and ${1.94} \times$ better runtime MAC utilization than the state-of-the-art GPU performance for variable-length sequences from the MRPC, RTE and SQuADv2 datasets.
|
| 4 |
+
|
| 5 |
+
§ I. INTRODUCTION
|
| 6 |
+
|
| 7 |
+
Transformer-based models have achieved remarkable triumphs in a wide range of deep learning tasks for natural language processing, such as machine translation [1], text classification [2] and generation [3], [4]. Their extensive success is attributed to a task-agnostic model architecture whose quality on various tasks improves with growing numbers of encoder and decoder layers and with vocabulary size. Such a trend, along with the unbounded text length these language models need to handle, results in huge amounts of computation and parameters. The full GPT-3 [5] holds 175 billion parameters and requires ${3.14} \times {10}^{23}$ floating-point operations (FLOPs) for training, which would cost over $\$ {4.6}\mathrm{M}$ on a Tesla V100 cloud instance for a single training run. The progressively higher computational demand calls for exploiting the efficiency of these models on hardware.
|
| 8 |
+
|
| 9 |
+
The acceleration approaches in prior works fall into two paradigms. One approach exploits intra-operator data parallelism (Fig. 1a) on an operator-by-operator basis. [6] optimizes operator partitioning for Transformer inference on TPUv4 [7]. [8], [9] boost GPU performance with tensor cores, while [10], [11] focus on memory optimizations on GPU. Padding arises for variable-length inputs in a batch because popular deep learning frameworks [12], [13] can only handle rectangular shapes, and the padded zeros introduce excessive overhead in both computation and memory. [14], [15] reduce the padding redundancy by reordering inputs during pre-processing, while [16] eliminates padding for linear blocks and fused attention on GPU by offsetting the variable-length inputs in memory. These works suffer from low efficiency, either spatially from the mismatch between fixed-shape processing elements (PEs) and variable-shape workloads, or temporally from non-overlapped memory access latency, especially in data-parallel fused attention.
|
| 10 |
+
|
| 11 |
+
|
| 12 |
+
|
| 13 |
+
Fig. 1. Execution timeline for different parallel approaches. (a) Intra-operator data parallelism. (b) Sub-layer pipeline parallelism. (c) Hybrid parallelism. The time length of each block is only for illustration.
|
| 14 |
+
|
| 15 |
+
Another approach resorts to inter-operator pipeline parallelism (Fig. 1b), where consecutive operators are assigned to different PEs. [17]-[20] accelerate the training of deep learning models with a micro-batch layer pipeline. [21] constructs a sequence-wise sub-layer pipeline with approximated attention and feed-forward network. Since the computation of attention and linear blocks grows at least linearly with the variable input length, pipeline parallelism at the sub-layer level inevitably incurs severe pipeline bubbles when the length variance across input sequences is large.
|
| 16 |
+
|
| 17 |
+
We make two key observations on the GPU performance of Transformer-based models with intra-operator data parallelism. First, the attention suffers from both low temporal and low spatial efficiency even with a fused kernel, indicating that the attention could benefit more from inter-operator pipeline parallelism than from intra-operator data parallelism. Second, the highly-optimized linear blocks only obtain 70% temporal efficiency and ${25}\%$ spatial efficiency. Besides the shape mismatch, they are also limited by the capacity of shared memory and registers, since data parallelism leads to data replication and hence less data reuse.
|
| 18 |
+
|
| 19 |
+
To address the above problems, we propose a hybrid parallelism (Fig. 1c): data parallelism for linear blocks and pipeline parallelism for attention, with the latter at a granularity finer than the sub-layer level to reduce pipeline bubbles. However, challenges arise in the architectural support for this hybrid parallelism. On one hand, inter-operator pipeline parallelism needs to split the PEs across pipeline stages; such a split, together with workload decomposition, also alleviates the shape mismatch. On the other hand, intra-operator data parallelism needs to unify the PEs and registers to maximize data reuse. To meet both requirements, we propose a runtime reconfigurable systolic array (RSA), in which PEs across columns can work either together on a single operator or separately on multiple operators. Specifically, the RSA can be split for different input tokens in linear operators, or for different pipeline stages in the attention. We call this column-wise reconfigurable working pattern column packing. Moreover, the masking in the decoder, which preserves the auto-regressive property by preventing leftward information flow, brings ${50}\%$ redundancy into attention, especially for long sequences, but is neglected in prior works. We further propose mask packing to skip the redundant computation between two heads assigned to the same RSA columns.
|
| 20 |
+
|
| 21 |
+
Our contributions are summarized as follows:
|
| 22 |
+
|
| 23 |
+
* We develop a reconfigurable systolic array for hybrid parallelism, data parallelism for linear blocks and pipeline parallelism for attention, to improve the hardware efficiency of Transformer-based models.
|
| 24 |
+
|
| 25 |
+
* We propose a two-level packing, column packing and mask packing, to boost efficiency spatially and temporally for variable-length inputs. Packing decisions are explored with a dynamic programming based algorithm to maximize the overall throughput.
|
| 26 |
+
|
| 27 |
+
* Applied to GPT, our design on U200 FPGA shows ${1.16} \times$ higher normalized throughput and ${1.94} \times$ better runtime MAC utilization over the state-of-the-art GPU performance for variable-length input sequences from MRPC, RTE and SQuADv2 datasets.
|
| 28 |
+
|
| 29 |
+
In the following sections, we first describe the details of column packing and mask packing in Section II and propose the RSA architecture in Section III. We then explore the column packing decisions for hybrid parallelism in Section IV. Section V and Section VI present experiment results and conclusions.
|
| 30 |
+
|
| 31 |
+
§ II. METHOD
|
| 32 |
+
|
| 33 |
+
§ A. COLUMN PACKING
|
| 34 |
+
|
| 35 |
+
1) Pack Linear Blocks: We exploit intra-operator data parallelism for each linear operator, namely an $M \times N \times K$ matrix multiplication (MM), where $M, N, K$ stand for input rows, output columns and hidden size. We make some observations on the MM shapes in a Transformer-based model. For a variable-length input, $M$ equals the input sequence length $L$ , while whether $N$ and $K$ are variable differs across MMs. In the first case, which is also the most common in the linear blocks of Transformer-based models, $N$ and $K$ are fixed as multiples of the head size ${d}_{h}$ . The second case comprises the two MMs in the attention with variable $N$ or $K$ , whose shapes are $L \times L \times {d}_{h}$ and $L \times {d}_{h} \times L$ . The shape mismatch between fixed-shape PEs and variable-shape MMs leads to low efficiency. Rather than suffering from multi-dimensional shape mismatch between PEs and MMs, we map the fixed shapes to the RSA rows and the variable shapes to the RSA columns and the temporal dimension, so that the mismatch is maximally alleviated by column packing. More specifically, for an MM with variable-length inputs, we pack $N$ from different input sequences along the RSA columns to maximize spatial efficiency. We also take advantage of split-k, as described in [9], to partially unroll the $K$ dimension and balance the parallel workloads along the columns for temporal efficiency.
|
| 36 |
+
|
| 37 |
+
2) Pack Attention: We propose a coarse-grained head-level pipeline for the attention with six stages: $K, Q$ load, MM $K{Q}^{T}$ , $V$ load, softmax, MM $S{V}^{T}$ , and final save. The two MMs are packed along the RSA columns during the pipeline, where the former has variable $N$ and the latter variable $K$ . Since two different variable dimensions are mapped to the RSA columns, weight-stationary and output-stationary dataflows are respectively required. Moreover, the number of heads to run per stage is worth studying: packing more heads into an MM stage improves spatial efficiency locally within the stage, but potentially hurts global efficiency, since the larger pipeline granularity brings more bubbles. The pipeline stage partition is discussed further in Section IV.
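The granularity trade-off can be sanity-checked with the classic linear-pipeline latency model (a back-of-the-envelope sketch with made-up stage latencies, not the paper's cost model): the bottleneck stage sets the admission interval, so packing more heads per MM stage shrinks the number of items flowing through the pipeline but inflates the bottleneck term.

```python
def pipeline_latency(stage_lat, n_items):
    """Classic linear-pipeline estimate: the first item traverses every
    stage, then one item drains per bottleneck-stage interval."""
    return sum(stage_lat) + (n_items - 1) * max(stage_lat)

# Hypothetical per-stage cycles for the six attention stages.
fine = [1, 4, 1, 2, 4, 1]      # one head per MM stage, 16 head-items
coarse = [1, 8, 1, 2, 8, 1]    # two heads packed per MM stage, 8 items
print(pipeline_latency(fine, 16), pipeline_latency(coarse, 8))  # 73 vs 77
```

In this toy setting the coarser packing loses on fill/drain bubbles even though its steady-state rate is identical, which is the tension the stage-partition algorithm of Section IV balances against the spatial-efficiency gain.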
|
| 38 |
+
|
| 39 |
+
§ B. MASK PACKING
|
| 40 |
+
|
| 41 |
+
Each token only needs the computation results from its preceding tokens in the input sequence, not from those after it. A Transformer decoder masks out the unnecessary entries to preserve the auto-regressive property. To eliminate this masking redundancy in softmax $\left( {\operatorname{mask}\left( {K{Q}^{T}}\right) }\right) {V}^{T}$ , we propose mask packing, as in Fig. 2d. Rather than applying masking after fully computing two $K{Q}^{T}$ s, we skip the redundant computation and generate a single packed result matrix $S$ . We keep $S$ in this packed layout for the following softmax and $S{V}^{T}$ for memory efficiency. The PEs therefore need to handle $K{Q}^{T}$ and $S{V}^{T}$ with fixed and variable reduction lengths, respectively, and the softmax module needs to handle vectors in the packed layout.
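The ${50}\%$ figure follows from the shape of the causal mask; the snippet below (illustrative only) counts the masked-out fraction of the $L \times L$ score matrix.

```python
import numpy as np

L = 2048                                      # illustrative sequence length
mask = np.tril(np.ones((L, L), dtype=bool))   # causal decoder mask:
                                              # token i keeps scores 0..i
redundant = 1.0 - mask.mean()                 # fraction of K Q^T masked out
print(f"{redundant:.2%} of the score matrix is masked out")   # ~49.98%
# Mask packing pairs the kept lower-triangular region of one head with
# that of a second head in one packed buffer S, so this near-50% of PE
# work is skipped rather than computed and discarded.
```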
|
| 42 |
+
|
| 43 |
+
§ III. HARDWARE ARCHITECTURE DESIGN
|
| 44 |
+
|
| 45 |
+
To provide the underlying architecture for column packing and mask packing, we develop an RSA along with arbiter networks and a nonlinear vector module (NVM), as shown in Fig. 2a.
|
| 46 |
+
|
| 47 |
+
We develop a two-dimensional systolic array with PEs and coupled shift registers. Fig. 2b shows the circuit diagram of an RSA PE, which has three levels of reconfigurability via the control signals (gray). First, use_reg_row configures the RSA split along columns by selecting the input data path to the multiplier. If it is set to 1, the multiplier takes the value stored in the row register (orange) through the reconfigurable data path (blue) rather than the value forwarded from the left PE, so the two neighboring PEs can work on separate workloads. Second, the coupled shift register serves input buffering, and its buffer switch can be configured for mask packing. Third, use_forward_psum configures the dataflow of the RSA. If it is set to 1, the PE uses the partial sum forwarded from the upper PE, enabling a weight-stationary dataflow; otherwise, accumulation is performed locally as an output-stationary dataflow. This reconfigurable dataflow serves the two MMs in attention with variable $N$ and $K$ , respectively, so that they can be packed along the RSA columns.
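The two control signals can be made concrete with a small behavioral model of one PE cycle. This is our own simplified sketch: operand routing, timing and the shift-register buffering are abstracted away, and the field names are illustrative rather than taken from the paper's RTL.

```python
class PE:
    """Behavioral sketch of one RSA PE (illustrative, not the actual RTL)."""

    def __init__(self, row_reg=0):
        self.row_reg = row_reg   # row register (orange in Fig. 2b)
        self.weight = 0          # locally held operand
        self.acc = 0             # local accumulator

    def step(self, in_left, in_top_psum, use_reg_row, use_forward_psum):
        # use_reg_row = 1: take the multiplier input from the row register
        # instead of the left neighbor, splitting the array along columns.
        a = self.row_reg if use_reg_row else in_left
        prod = a * self.weight
        if use_forward_psum:
            # Weight-stationary dataflow: pass accumulated sums downward.
            out_psum = in_top_psum + prod
        else:
            # Output-stationary dataflow: accumulate locally.
            self.acc += prod
            out_psum = self.acc
        # The operand still propagates rightward for the unsplit case.
        return in_left, out_psum
```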
|
| 48 |
+
|
| 49 |
+
|
| 50 |
+
|
| 51 |
+
Fig. 2. (a) System diagram. (b) Circuit diagram of an RSA PE and its coupled shift registers. The input data path and buffer switch can be configured for packing. (c) The RSA split into halves with different dataflows for column packing. (d) Mask packing: an example of skipping the redundant computation for $K{Q}^{T}$ across two heads.
|
| 52 |
+
|
| 53 |
+
We use two arbiter networks for the interconnection between the RSA and the on-chip buffers to support the different communication patterns under column packing. For an MM whose $K$ dimension is partially unrolled across columns, the partial sums from multiple columns must be reduced collectively. For the attention pipeline, the result of the first MM computed on one RSA partition is written to on-chip buffers and then fed to another RSA partition. Both patterns are realized by the arbiter networks. Moreover, to handle the packed layout of mask packing, our NVM adopts the configurable reduction tree proposed in [22] for maximum and sum reductions of arbitrary length in softmax.
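To make the packed-layout requirement concrete, the sketch below (a pure-numpy illustration of the functionality, not the NVM's reduction-tree hardware) applies softmax over segments of arbitrary length inside one packed vector, which is the max-then-sum reduction pattern the NVM must support.

```python
import numpy as np

def segmented_softmax(packed, seg_lens):
    """Softmax over a packed row layout: the packed vector interleaves
    rows of different heads, so each segment has its own length."""
    out, ofs = np.empty_like(packed, dtype=float), 0
    for n in seg_lens:
        seg = packed[ofs:ofs + n]
        e = np.exp(seg - seg.max())        # max reduction, then exp
        out[ofs:ofs + n] = e / e.sum()     # sum reduction and normalize
        ofs += n
    return out
```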
|
| 54 |
+
|
| 55 |
+
§ IV. SCHEDULING
|
| 56 |
+
|
| 57 |
+
§ A. COLUMN PACKING FOR A SINGLE OPERATOR
|
| 58 |
+
|
| 59 |
+
We first discuss the column packing decisions for an MM with variable-length inputs of shape ${L}_{i} \times N \times K$ , where ${L}_{i}$ is the length of the $i$th input in a batch. Mapping $N$ and part of $K$ to the spatial column dimension and ${L}_{i}$ to the temporal dimension, we enumerate all combinations of $N$ values and $K$ factors to find the pair with maximal spatial efficiency. As an example, suppose we map an MM with $N = 4$ and $K = 8$ onto an RSA with 16 columns and 4 rows. To maximize spatial efficiency, we unroll $K = 8$ along 2 columns in addition to the 4 rows. Still, only 8 columns are used ( $N \times 2 = 8$ , where 2 is $K$ ’s column unroll factor). We therefore split the RSA into two partitions, each holding 8 columns and serving part of ${L}_{i}$ over time, and split ${L}_{i}$ into two parts to balance the workload packed along the columns.
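The arithmetic of this example fits in a few lines. The sketch below is our simplification (it fixes the $K$-unroll choice to $\lceil K / \text{rows} \rceil$ instead of enumerating all factor pairs) and reproduces the $N = 4$ , $K = 8$ mapping on a 16-column, 4-row array.

```python
def pack_columns(N, K, n_cols, n_rows):
    """Simplified column-packing arithmetic for one L_i x N x K MM.

    K is unrolled across the n_rows rows and, when K > n_rows, across
    k_unroll additional columns (split-k); N maps onto columns; L_i maps
    onto time. Leftover columns are filled by splitting the RSA into
    partitions that each serve a slice of L_i.
    """
    k_unroll = max(1, -(-K // n_rows))        # ceil(K / n_rows)
    cols_per_copy = N * k_unroll              # columns one MM copy occupies
    n_partitions = max(1, n_cols // cols_per_copy)
    spatial_eff = min(1.0, n_partitions * cols_per_copy / n_cols)
    return k_unroll, n_partitions, spatial_eff

# Paper's example: N = 4, K = 8 on a 16-column, 4-row RSA.
# K spreads over 4 rows x 2 columns, one copy takes 8 columns, and two
# partitions (each serving half of L_i) fill all 16 columns.
print(pack_columns(4, 8, 16, 4))              # (2, 2, 1.0)
```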
|
| 60 |
+
|
| 61 |
+
§ B. COLUMN PACKING FOR ATTENTION PIPELINE
|
| 62 |
+
|
| 63 |
+
For the attention pipeline at head level, we aim to split the head sequence $\mathcal{H} = \left\{ {h}_{ij}\right\}$ , where $i$ is the sequence index in a batch and $j$ is the head index, into multiple stages while minimizing the overall latency. Within each stage, the intra-operator column packing of Section IV-A is applied, and mask packing is applied to each MM. We formulate the pipeline stage partition as a dynamic programming problem whose optimal sub-structure is given in Eq. 1. $p$ is a bit vector in which 1 means the $k$th head in $\mathcal{H}$ is packed with its last preceding head and 0 means no packing; the stage partition can be recovered from $p$ with a simple union-find pass. Column packing is constrained by the on-chip memory capacity: ${M}_{\max }$ is the maximum allowed on-chip memory pressure and ${M}_{k}$ is the memory pressure of the $k$th head. Iterating over the heads ${h}_{k}$ in $\mathcal{H}$ , we find the maximum overall throughput $\mathbb{T}$ by deciding, for each head, whether it is packed into the stage of its last preceding head. If ${h}_{k}$ is packed, we record $p\left\lbrack k\right\rbrack = 1$ and deduct ${M}_{k}$ from the remaining budget when exploring the packing decision for the next head; otherwise, we consider the next head at a new stage with a fresh budget ${M}_{\max }$ . The optimal stage partition maximally reduces the pipeline bubbles.
|
| 64 |
+
|
| 65 |
+
$$
|
| 66 |
+
\mathbb{T}\left( {k, p, M}\right) = \max \left\{ \begin{array}{ll} \mathbb{T}\left( {k - 1, p\left\lbrack k\right\rbrack = 1, M - {M}_{k}}\right) & \text{if } M > {M}_{k} \\ \mathbb{T}\left( {k - 1, p\left\lbrack k\right\rbrack = 0, {M}_{\max }}\right) & \end{array}\right.
|
| 67 |
+
$$
|
| 68 |
+
|
| 69 |
+
(1)
|
| 70 |
+
|
| 71 |
+
§ V. EXPERIMENT RESULTS
|
| 72 |
+
|
| 73 |
+
§ A. EVALUATION SETTING
|
| 74 |
+
|
| 75 |
+
We implement our accelerator on a Xilinx U200 FPGA with an RSA of 4 rows and 1024 columns, an NVM with vector length 32, and four DDR4 channels. The cycle-count latencies for the evaluation below are collected through RTL-level simulation with the Xilinx Vivado suite.
|
| 76 |
+
|
| 77 |
+
We benchmark our design with six different settings of input sequence length. The first three are constructed with fixed-length inputs of length 64, 512 and 2048 at batch size 8. The other three collect variable-length inputs from the MRPC, RTE and SQuADv2 [24] test sets, respectively, and pack them into batches of size 8. The three datasets have average/maximum sequence lengths of ${14}/{40}$ , ${54}/{240}$ and ${167}/{791}$ , respectively. The former two come from the GLUE [25] benchmark suite and represent short and medium lengths, while the latter covers longer sequences. We run these datasets on GPT for evaluation.
|
| 78 |
+
|
| 79 |
+
§ B. PERFORMANCE WITH STEP-WISE OPTIMIZATION
|
| 80 |
+
|
| 81 |
+
We apply a step-wise evaluation to show the effect of hybrid parallelism with column packing and mask packing. Intra-operator Data Parallel runs the model on the RSA with intra-operator data parallelism on an operator-by-operator basis. Layer Pipeline runs a two-stage layer pipeline, where the RSA is split into halves and each half runs a Transformer layer for different sequences. The other three settings apply hybrid parallelism incrementally with the different packing methods.
|
| 82 |
+
|
| 83 |
+
Fig. 3 shows the impact of step-wise optimization on GPT. Layer Pipeline is limited by off-chip memory bandwidth on a single device and thus performs worse for longer sequences. Hybrid parallelism is effective in all cases, with ${1.17} \times$ higher throughput on average than intra-operator data parallelism, while column packing and mask packing bring a ${1.21} \times$ and ${1.26} \times$ performance boost, respectively. Column packing benefits short sequences more, such as fixed 64, MRPC and RTE, through better spatial efficiency. Mask packing benefits long sequences more, bringing an additional ${30}\%$ for fixed 2048 but only marginal gains elsewhere. This is because the computation of attention grows quadratically with sequence length: attention takes ${50}\%$ of total computation for fixed 2048, but less than ${10}\%$ for fixed 64.
|
| 84 |
+
|
| 85 |
+
|
| 86 |
+
|
| 87 |
+
Fig. 3. GPT performance on RSA with step-wise optimization.
|
| 88 |
+
|
| 89 |
+
§ C. END-TO-END PERFORMANCE
|
| 90 |
+
|
| 91 |
+
Table I compares our performance on GPT with other works on GPU and FPGA. Batch size 8 is used in all cases. We evaluate GPU performance with [16], the state-of-the-art GPU work for variable-length inputs. [21] is optimized for variable-length inputs on FPGA but does not report detailed throughput for each dataset, and [23] only reports performance for fixed-length inputs, as do other FPGA works. We therefore compare performance on fixed-length-128 inputs with [23] and on variable-length inputs with [16] across the three datasets. Throughput is normalized by the number of MAC units at 16-bit precision. Our design outperforms the GPU and FPGA works by ${1.16} \times$ and ${2.11} \times$ in normalized throughput, respectively, across fixed-length and variable-length inputs. The advantage comes from the better efficiency enabled by column packing and mask packing on our RSA, which yields ${1.94} \times$ and ${1.18} \times$ better MAC efficiency than the GPU and FPGA works. This demonstrates the advantage of our RSA architecture for Transformer-based models with variable-length inputs.
|
| 92 |
+
|
| 93 |
+
§ VI. CONCLUSION
|
| 94 |
+
|
| 95 |
+
We propose a hybrid parallelism for Transformer-based models with variable-length inputs: data parallelism for linear operators and pipeline parallelism for attention. To support it, we develop a reconfigurable systolic array with multi-level packing. First, for a single linear operator, we pack the computation of different input sequences along the array columns for spatial efficiency. Second, to improve the temporal efficiency of the attention block, we develop a head-level pipeline with stages packed along the array columns. Moreover, we develop mask packing to skip the redundant computation that is masked out by Transformer decoder masking. Column packing decisions are explored with a dynamic programming based algorithm to maximize the overall throughput. Applied to GPT, our design on a Xilinx U200 FPGA outperforms the state-of-the-art GPU work for variable-length inputs by ${1.16} \times$ in normalized throughput and ${1.94} \times$ in runtime MAC utilization across the MRPC, RTE and SQuADv2 datasets.
|
| 96 |
+
|
| 97 |
+
Input Sequence | Fixed 128 | Fixed 128 | Fixed 128 | MRPC | MRPC | RTE | RTE | SQuADv2 | SQuADv2
Platform | [16] | [23] | Ours | [16] | Ours | [16] | Ours | [16] | Ours
Device | A100 GPU | ZCU102 FPGA | U200 FPGA | A100 GPU | U200 FPGA | A100 GPU | U200 FPGA | A100 GPU | U200 FPGA
Precision | FP16 | INT8 | INT16 | FP16 | INT16 | FP16 | INT16 | FP16 | INT16
Frequency (MHz) | 1095 | 214 | 200 | 1095 | 200 | 1095 | 200 | 1095 | 200
Tensor Core/DSP | 432 | 3287 | 4160 | 432 | 4160 | 432 | 4160 | 432 | 4160
Runtime Utilization (FLOPS/MAC) | 0.60 | 0.79 | 0.93 | 0.21 | 0.66 | 0.34 | 0.70 | 0.54 | 0.75
Normalized Throughput (GFLOPS/MAC) | 0.16 | 0.09 | 0.19 | 0.11 | 0.13 | 0.10 | 0.14 | 0.14 | 0.15
|
| 126 |
+
|
| 127 |
+
TABLE I: COMPARISON OF END-TO-END PERFORMANCE WITH GPU AND OTHER FPGA WORKS.
|
papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/4Xo8nv5DNS/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/4Xo8nv5DNS/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/N6_kWfbABl1/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,447 @@
| 1 |
+
# SURFBOARD: REPRODUCIBLE PERFORMANCE ANALYSIS FOR DISTRIBUTED MACHINE LEARNING WORKFLOWS
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
Large-scale HPC infrastructures are enablers for scientific research in many domains. The recent advances in machine learning (ML) have led to an ever increasing demand for computation power, as well as the design of complex operational workflows. Understanding the performance and efficiency of these workflows is key to productivity, knowledge and model sharing, and energy efficiency. Even though there have been efforts in studying and designing portability protocols, performance analysis of large-scale ML is still an expert-driven task, tightly locked-in to specific physical and software infrastructure. Much like in other domains, this hinders reproducibility of both results and overall workflow performance. To overcome this challenge, we propose the design of a container-based framework for reproducible performance analysis of ML workflows at scale. We validate our framework using a case-study on two different large-scale production systems running ML workflows. We show empirically that our containerized approach is portable and allows arbitrarily low-level performance evaluation when run on two different, production-based HPC clusters with hundreds of GPUs. We report our findings on widely-used open-source software stacks and datasets and offer practitioners insights into what types of analyses our framework enables. To benefit the community, we open-source our software and results.
|
| 10 |
+
|
| 11 |
+
## 1 Introduction
|
| 12 |
+
|
| 13 |
+
The rapid advancements in hardware performance in recent years have enabled the rise of machine learning (ML), and especially deep learning (DL) [34]. The field has gained immense traction and made important contributions to many other domains, such as medicine [56] or physics [11]. This has led to an abundance of trained models and of systems $\left\lbrack {1,8,{29},{41}}\right\rbrack$ to run such models. Dean et al. [15] show that the number of articles published per year in the field grows faster than compute power under Moore's law, reaching up to 100 articles per day at the end of 2018. Naturally, domain scientists and practitioners need to use such models and techniques to solve problems (more efficiently), and hence need to replicate the findings and setups of a large body of published ML work.
|
| 14 |
+
|
| 15 |
+
Reproducibility $\left\lbrack {{27},{46}}\right\rbrack$ is the Achilles' heel of computer science in general, and is much more difficult to achieve in large-scale computer systems $\left\lbrack {{19},{53}}\right\rbrack$ , owing to the sheer complexity of the physical infrastructure involved, interconnected through large-scale networks and many layers of software. ML scientists and practitioners not only need reproducible results, but also reproducible performance analysis. Understanding the performance of ML models and frameworks is key to achieving productivity, knowledge and model sharing, as well as energy efficiency. This is especially important since training has been shown to have a significant environmental impact [49] for several ML models.
|
| 16 |
+
|
| 17 |
+
Although reproducible results are generally difficult to achieve, seminal work $\left\lbrack {{47},{48}}\right\rbrack$ has been steering the community toward this goal. In this paper we instead explore the domain of reproducible performance analysis in large-scale distributed ML. This is a significant and challenging problem, exacerbated by two aspects. First, most of the tools and systems involved are locked in to specific infrastructure, such as HPC clusters and supercomputers. Second, large-scale infrastructure is intrinsically variable in hardware performance $\left\lbrack {{17},{37}}\right\rbrack$ , which in turn affects application performance. Although guidelines for reproducible performance evaluation exist $\left\lbrack {{40},{52}}\right\rbrack$ , it is unclear whether they are sufficient for ML performance evaluation.
|
| 18 |
+
|
| 19 |
+
Due to their high demand for compute power, ML and DL workloads are naturally suited for deployment in large-scale HPC clusters equipped with specialized hardware such as GPUs, FPGAs, or TPUs [15]. Being deployed on HPC infrastructure, ML frameworks such as pytorch [41] or Tensorflow [1] have evolved to run through specialized, tightly-coupled MPI [45] interfaces. Although several performance evaluation frameworks for MPI applications exist, like Tau [44], Scalasca [22], or VAMPIR [31], they are insufficient to assess the performance of ML workloads on HPC clusters, because ML and DL frameworks have many levels of complexity and use specialized hardware whose bottlenecks practitioners must understand as well. Special lower-level profilers like nvprof [10] or pyprof ${}^{1}$ are needed to gather lower-level metrics. Finally, all these metrics and measurements have to be combined at arbitrary levels of the software stack to understand the performance of specific components.
|
| 20 |
+
|
| 21 |
+
Moreover, reproducible performance analysis has become even more difficult due to the novel development of complex ML workflow pipelines. These workflows tend to continually expand and include increasingly complex models, pre-processing pipelines for data augmentation [13], diverse data formats and dimensionality, or even complex simulators $\left\lbrack {4,9}\right\rbrack$ . The hardware ecosystem used for training these complex systems is evolving and becoming more diverse and heterogeneous, making reproducible performance analysis difficult. Furthermore, the low-level kernels implementing key ML primitives, on which high-level frameworks depend, are also in continuous development and contribute to the complexity of these workflows. The previously-reported artificial intelligence reproducibility crisis [27] is growing at an accelerated pace and covers the whole spectrum: from numerical reproducibility to performance reproducibility. In this work we focus on addressing the latter.
|
| 22 |
+
|
| 23 |
+
---
|
| 24 |
+
|
| 25 |
+
${}^{1}$ https://github.com/NVIDIA/PyProf
|
| 26 |
+
|
| 27 |
+
---
|
| 28 |
+
|
| 29 |
+
Although several steps have been taken toward in-depth performance evaluation for ML workloads, they are not fully reproducible and do not support complex workflows. Building blocks for performance evaluation include visualization techniques [32], lower-level performance characterization $\left\lbrack {5,{28}}\right\rbrack$ , and benchmarking efforts $\left\lbrack {7,{30}}\right\rbrack$ . To enable practitioners to use such systems and benchmarks across a variety of infrastructure and with better reproducibility guarantees, in this paper we present a framework for reproducible performance analysis of ML workloads.
|
| 30 |
+
|
| 31 |
+
Our contribution is a containerized profiling framework, called SURFBoard, that operates on all modern container-enabled large-scale HPC infrastructure. In this paper we focus on performance analysis for computer vision DL workflows. However, our work is modular and highly configurable and can thus be adapted to more general ML pipelines. As a consequence, the user can extend it using other profiling tools next to our current toolkit: pytorch [41], NVIDIA DALI, Horovod [33], OpenMPI [20], TAU [44], NVProf [10]. Using this toolset, we show that we can perform performance analysis at arbitrary levels of the software stack. Users can easily answer questions such as: what is the training time scalability?, what parameters affect batch duration?, or what are the most time-consuming CUDA kernels per batch?
|
| 32 |
+
|
| 33 |
+
To help practitioners perform reproducible performance analysis of complex ML/DL workflows, we show that SURFBoard captures complex performance behavior on two different production GPU-enabled clusters. Our experiments implement the typical, real-world analyses practitioners use to search for bottlenecks and inefficiencies in their training workflows. To enable reproducible performance evaluation of ML workflows on large-scale HPC infrastructure, our contributions are:
|
| 34 |
+
|
| 35 |
+
1. We present the design and implementation of SURFBoard, a containerized profiling framework for ML workloads. To benefit the community, we open-source ${}^{2}$ our work as well as the visualization notebooks and collected performance datasets (Section 2).
|
| 36 |
+
|
| 37 |
+

|
| 38 |
+
|
| 39 |
+
Figure 1: SURFBoard in the typical ML Pipeline life-cycle.
|
| 40 |
+
|
| 41 |
+
2. To validate our work, we present a case study of performance analysis for DL workloads on two large-scale GPU-enabled clusters. We provide an in-depth experiment design showcasing the features of our profiling framework when running the typical performance analyses practitioners perform when benchmarking their DL workflows (Sections 3-4).
|
| 42 |
+
|
| 43 |
+
3. We present an in-depth performance analysis on the two clusters using real-world open-source frameworks, workloads, and datasets. We show the portability and reproducibility of our results, discuss the main findings of our experiments, and show how practitioners can use SURFBoard to identify bottlenecks and performance issues (Section 5).
|
| 44 |
+
|
| 45 |
+
## 2 SURFBoard: Containerized Profiling Workflow Design
|
| 46 |
+
|
| 47 |
+
In this section we describe in detail the design and implementation of our containerized profiling framework, SURFBoard. Our work can integrate with any ML or DL framework running on high-end HPC infrastructure. We show how we integrate our profiling workflow with state-of-the-art ML software stacks such as pytorch, OpenMPI, DALI, ${}^{3}$ or Horovod [43]; how ML workflows can be reproduced and ported, through containerization, to large-scale infrastructures that differ in software and hardware; and how users can perform parameter sweeps over important parameter spaces.
|
| 48 |
+
|
| 49 |
+
Figure 1 illustrates the typical life-cycle of an ML pipeline. The pipeline consists of training scripts and possibly container definitions for training a specific ML model, such as ResNet-50. Starting from the initial pipeline, profiling is performed at scale, using a representative dataset, and the pipeline is optimized until performance meets user-defined criteria. SURFBoard helps automate this part of the life-cycle. Subsequently, the optimized ML pipeline can be deployed to train ML models many times, on various datasets.
|
| 50 |
+
|
| 51 |
+
---
|
| 52 |
+
|
| 53 |
+
${}^{2}$ https://anonymous.4open.science/r/
|
| 54 |
+
|
| 55 |
+
1116c4aa-7342-45d2-b03a-e602e387cd3b/
|
| 56 |
+
|
| 57 |
+
${}^{3}$ https://github.com/NVIDIA/DALI
|
| 58 |
+
|
| 59 |
+
---
|
| 60 |
+
|
| 61 |
+

|
| 62 |
+
|
| 63 |
+
Figure 2: Software stack for profiling ML workflows.
|
| 64 |
+
|
| 65 |
+
In most cases, profiling starts with the user requesting a node allocation from a job orchestrator, e.g. SLURM, and the subsequent execution of the training algorithms. Developers and practitioners may want to perform a parameter sweep over the possible parameter space: input dataset, number of I/O workers, number of GPUs per node, batch size, gradient precision, etc. To enable this, we implemented a Sweep Orchestrator driven by a specified set of YAML configuration files. The orchestrator utilizes the application-specific Sweep Plugin to generate a set of command-line invocations of the training script inside a containerized environment, using MPI for inter-process communication.
|
| 66 |
+
|
| 67 |
+
To achieve portability and performance reproducibility over an extensive set of large-scale infrastructures, differing in both hardware and software deployments, we created a containerized environment that can run state-of-the-art ML training stacks on most (high-end) HPC infrastructures. One of the most challenging issues in performance reproducibility is achieving similar setups on different infrastructure; our containerized approach to ML training performance analysis addresses exactly this.
|
| 68 |
+
|
| 69 |
+
Figure 2 illustrates the software stack involved in executing profiling experiments with our workflow. The stack is roughly divided into two sections consisting of code running in a host environment and code executed inside a Docker or Singularity container. The container is configured to provide communication and profiling infrastructure in addition to the training capability of Pytorch and associated libraries for neural network training. The specific capabilities of the container are:
|
| 70 |
+
|
| 71 |
+
1. Training with Pytorch, optionally using the NVIDIA Data Loading Library, DALI. Compared to the Pytorch built-in dataloader library, Torchvision, DALI offers multiple advantages: input from folders of images or Tensorflow Records (TFRecords), various levels of GPU offloading of the preprocessing, and advanced profiling. We provide a custom-built version of DALI with NVTX annotations enabled, which allows GPU profilers to inspect the details of preprocessing-related computation. Note that our entire pipeline is configurable, and the user can add different training frameworks, like Tensorflow [1].
|
| 72 |
+
|
| 73 |
+
2. Communication over OpenMPI, which can be leveraged from Pytorch either through the Pytorch built-in native multi-node training module, Pytorch DDP, or through Horovod. We include Horovod specifically for its capability for explicit gradient quantization, i.e. casting the Pytorch-computed FP32 gradients to FP16 before invoking MPI to perform all-reduce for data-parallel training. We configure Horovod to perform all its operations with MPI even when transferring from GPU memory, which enables MPI profilers to inspect the traffic.
|
| 74 |
+
|
| 75 |
+
3. Profiling at multiple levels, targeting compute and communication. The MPI-profiler leveraged in this work is TAU [44], used for detailed communication profiling. NVProf and NSys are leveraged for detailed compute kernel profiling on the GPU, and Pyprof for linking GPU profiling data to the Python execution graph and neural network model.
|
| 76 |
+
|
| 77 |
+
We stress that the tools used in our work are mere examples of what practitioners can achieve with a containerized framework for reproducible performance evaluation. In fact, all three layers of our containerized design are highly configurable, and other types of tools could be integrated. For example, one can gather CPU performance counters using PAPI [55], or use VAMPIR [31] instead of TAU.
|
| 78 |
+
|
| 79 |
+
### 2.1 Sweep orchestrator
|
| 80 |
+
|
| 81 |
+
We denote by profiling sweep the set of profiling experiments that traverses all possible application parameter combinations. As this set can be relatively large and varies between neural networks, we provide experiment orchestration infrastructure based on Hydra ${}^{4}$ , designed to automatically generate the command-line invocations for a profiling sweep. Each experiment command follows the structure: mpirun <MPI args> singularity exec <singularity args> <profiler args> main.py <application args>
|
| 82 |
+
|
| 83 |
+
The orchestrator defines a base Python class Experiment, initialized with the complete set of possible experiment parameters, as well as necessary abstract functions for parameter combination filtering and command string generation. For non-numeric parameters, the orchestrator defines Enums which constrain the parameter values. The possible application parameters are defined in Table 1, and divided into four classes corresponding to the four types of CLI arguments in the command structure:
|
| 84 |
+
|
| 85 |
+
1. Scale and infrastructure parameters, which manifest in the MPI arguments, include the number of nodes, GPUs per node, CPUs per process, and the networking fabric utilized for communication (Infiniband or Ethernet)
|
| 86 |
+
|
| 87 |
+
2. Container parameters include, for example, the singularity container itself (SIF file), as well as the locations of training data and TFRecord index, if needed. These are mounted into the Singularity container at pre-defined mount points.
|
| 88 |
+
|
| 89 |
+
---
|
| 90 |
+
|
| 91 |
+
${}^{4}$ https://github.com/facebookresearch/hydra
|
| 92 |
+
|
| 93 |
+
---
|
| 94 |
+
|
| 95 |
+
<table><tr><td/><td>Possible Values in YAML Config</td><td>High-level Description</td></tr><tr><td>Ranks</td><td>Numeric</td><td>Total number of MPI processes executed</td></tr><tr><td>GPU per Node</td><td>Numeric</td><td>Processes allowed per node, each of which maps to one GPU</td></tr><tr><td>Profile Level</td><td>None, tau_exec, tau_python, tau_python_cuda, nsys, nvprof</td><td>No profiling; TAU for MPI, Python, and GPU profiling, respectively; NVIDIA tools (NSys, NVProf) for GPU profiling</td></tr><tr><td>Network Backend</td><td>ib, eth</td><td>Data transfer fabric (InfiniBand or Ethernet)</td></tr><tr><td>Repetitions</td><td>Numeric</td><td>Number of runs of each experiment configuration</td></tr><tr><td>Gradient Precision</td><td>fp16, fp32</td><td>Precision of the communicated gradients</td></tr><tr><td>Compute Precision</td><td>fp16, fp32, mixed</td><td>Precision of compute</td></tr><tr><td>Batch size per GPU</td><td>Numeric</td><td>Size of per-GPU batches</td></tr><tr><td rowspan="3">Data Loader</td><td>pytorch,</td><td>Use Torchvision dataloader</td></tr><tr><td>dali-gpu,</td><td>Perform all preprocessing on GPU</td></tr><tr><td>dali-cpu-to-gpu</td><td>Perform all preprocessing on CPU, then move data to GPU</td></tr><tr><td rowspan="2">Data Format</td><td>folder,</td><td>Use compressed images in specified folder</td></tr><tr><td>tfrecord</td><td>Use pre-packaged Tensorflow Records</td></tr><tr><td>Workers</td><td>Numeric</td><td>Number of preprocessing worker threads</td></tr><tr><td>Communication Backend</td><td>Horovod [43]</td><td>Gradient synchronization framework</td></tr></table>
|
| 96 |
+
|
| 97 |
+
Table 1: Typical parameters that DL practitioners consider in their performance evaluations and their high-level description.
|
| 98 |
+
|
| 99 |
+
3. Profiler parameters define the profiler to be utilized as well as any parametrization of the profiler. Profiler options are TAU, TAU-Python (with optional CUDA support), or NVProf/NSys CUDA profilers.
|
| 100 |
+
|
| 101 |
+
4. Training parameters configure the NN training flow itself: input data format (folder of images or TFRecords), data loader (Torchvision, DALI CPU, DALI GPU), execution precision (fp32, fp16, or mixed), gradient precision (fp32 or fp16), and distribution backend (Horovod).
|
| 102 |
+
|
| 103 |
+
### 2.2 Implementing New Training Workflows
|
| 104 |
+
|
| 105 |
+
We offer the user the possibility to define new types of experiments, enabling novel training workflows while retaining portable and reproducible performance evaluation. To plug a new training flow into the Orchestrator, the user must define a new class inheriting Experiment and implement the cmd() method, which produces a command from the relevant parameters, as well as the is_legal() method, which filters out parameter combinations that the target training flow cannot implement.
|
| 106 |
+
|
| 107 |
+
At run time, the orchestrator leverages Hydra to assemble a sweep configuration from YAML files as follows: a main configuration file defines which parameters have constant values across all runs in a sweep and which cycle through multiple values; the possible values of non-constant parameters are defined in a second YAML configuration file. Hydra assembles the parameter values, computes the possible combinations as the Cartesian product of the values of each parameter, checks the legality of each combination using is_legal(), and executes the command produced by cmd() from the specified parameter values. To define a new sweep, the user only needs to write the two YAML configuration files. A minimal sketch of such a plugin is shown below.
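As an illustration only, the following sketch pairs is_legal() with cmd() in a hypothetical plugin and runs the Cartesian-product sweep; the class fields, parameter names and file names are invented for the example and are not SURFBoard's actual API.

```python
from itertools import product

class Experiment:
    """Stand-in for the orchestrator's base class (simplified)."""
    def __init__(self, **params):
        self.__dict__.update(params)

class ResNet50Experiment(Experiment):
    """Hypothetical training-flow plugin."""

    def is_legal(self):
        # Filter out combinations this flow cannot run, e.g. GPU-side
        # preprocessing with no GPUs allocated per node.
        return not (self.data_loader == "dali-gpu" and self.gpus_per_node == 0)

    def cmd(self):
        # Follows the documented structure:
        # mpirun <MPI args> singularity exec <singularity args>
        #     <profiler args> main.py <application args>
        return (f"mpirun -np {self.ranks} "
                f"singularity exec {self.sif_file} "
                f"python main.py --batch-size {self.batch_size} "
                f"--data-loader {self.data_loader}")

# Cartesian-product sweep over non-constant parameters, as assembled by Hydra.
space = {"ranks": [8, 16], "gpus_per_node": [4],
         "data_loader": ["pytorch", "dali-gpu"],
         "batch_size": [64, 128], "sif_file": ["train.sif"]}
for values in product(*space.values()):
    exp = ResNet50Experiment(**dict(zip(space.keys(), values)))
    if exp.is_legal():
        print(exp.cmd())
```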
|
| 108 |
+
|
| 109 |
+
### 2.3 Open Source Commitment
|
| 110 |
+
|
| 111 |
+
Our work is implemented in Python. Development took approximately 6 person-months, most of which was spent debugging the various components of the deep software stack illustrated in Figure 2, setting up the NVTX profiling infrastructure, and ensuring portability of the containers and experiment orchestrator between systems.
|
| 112 |
+
|
| 113 |
+
The experimental data gathering and visualization took another 6 person-months. We release both the source code of our reproducible performance analysis framework, as well as all the performance data and visualization scripts. Since the start of the project in 2019, approximately 50,000 core hours were spent for debugging and initial framework calibration, while 300,000 core hours have been utilised on the Cartesius cluster and 50,000 core hours have been used on the LISA cluster (see Section 4 for an in-depth description of these clusters) for running profiling experiments. In the following sections of this paper we present our validation of SURFBoard using a case-study of large-scale training experiments on two production HPC infrastructures.
## 3 Case Study: Experiment Design

In this section we describe the design of the experiment we perform to validate SURFBoard. We seek to show empirically that our performance analysis framework adheres to reproducibility standards and can help ML practitioners answer valuable questions about the performance of ML training workflows. We describe the high-level goals of our experiment, which are typical questions an ML practitioner would ask when assessing the performance of an ML workflow, we focus on the methodology for analyzing these high-level goals, and we describe in detail the DL model used in our study.
### 3.1 Case Study High-level Goals

The goal of this profiling exercise is to evaluate compute and communication efficiency for data-parallel distributed training of ResNet50 on SURFSara infrastructure, and to quantify the contribution of each training pipeline stage (batch preprocessing, training, communication) to the total runtime, under various configurations of each stage. Furthermore, we wish to construct a performance model enabling performance extrapolation. We separate this goal into four sub-objectives:
1. Scalability. We aim to determine the effect of various training configurations on the scalability of training, up to the maximum sizes permitted by the hardware. We measure the scaling efficiency itself, but also how each configuration option affects the scaling efficiency at each scale.

2. Computation Efficiency. We measure each stage in the training process (forward, backward, and model update) as well as the total batch duration, and calculate the overall compute efficiency and memory bandwidth efficiency achieved by the GPU.

3. Preprocessing Computation. We compare CPU and GPU preprocessing via the total application runtime in order to determine the effect of the number of preprocessing workers and of preprocessing offload on runtime at various scales.

4. Adhering to Reproducibility Standards. We seek to determine whether our performance analysis framework can run and produce significant results on multiple types of infrastructure. We compare the results of our framework on two different large-scale production HPC systems, described in detail in Section 4.
### 3.2 Performance Analysis Method

1. Important Parameters. When performing DL performance analysis, practitioners usually focus on several important parameters. In this study, we consider the impact of the following: gradient and compute precision, batch size, data loader, and number of workers. These parameters are described in Table 1.

2. Scalability. To study scalability, we measure both the duration of the experiments at various scales and the scaling efficiency (SE) for $N$ GPUs. The duration of one experiment is measured using the data from TAU as the duration of the .TAU application timer. Since the application runs on several GPUs, the maximum duration over all GPUs is used as the final measure. The scaling efficiency is the ratio between the experiment duration using a baseline number of GPUs (e.g., one GPU) and the duration of the same experiment using $N$ GPUs:
$$
SE_N = \frac{t_{\text{baseline}}}{t_N}
$$
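A worked instance of this definition with made-up durations. Since every GPU processes the same 50 batches in our experiments, this reads as a weak-scaling setup: ideally $t_N$ stays constant and $SE_N$ stays at 1.

```python
# Made-up durations for one configuration: a 16-GPU baseline run and a
# 64-GPU run, each over the same 50 batches per GPU.
t_baseline = 480.0   # seconds with the baseline GPU count
t_n = 520.0          # seconds with N GPUs

se = t_baseline / t_n
print(f"SE_N = {se:.2%}")   # -> SE_N = 92.31%
```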
3. Efficiency. To study computational efficiency, we perform a deeper analysis by measuring the batch duration and the duration of the three stages of each training iteration: the forward pass, the backward pass, and the parameter update. During the forward pass, the DNN predicts the labels associated with each image in the input batch, and an error is calculated by comparing the known correct labels with the predicted ones. In the backward pass, the error gradients are calculated and propagated through each network layer. Finally, the gradients are used to update each network parameter to minimize the error.
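To make the stage boundaries concrete, the sketch below shows how a PyTorch training step can be delimited with NVTX ranges (via torch.cuda.nvtx) so that a CUDA profiler attributes GPU time to each stage. `model`, `loss_fn`, `optimizer`, and the input tensors are assumed to exist, and this is our illustration rather than the framework's exact instrumentation.

```python
import torch

def train_step(model, loss_fn, optimizer, images, labels):
    # Delimit the three training stages with NVTX ranges so that a CUDA
    # profiler (e.g., NVProf) can attribute GPU time to each stage.
    torch.cuda.nvtx.range_push("forward")
    outputs = model(images)
    loss = loss_fn(outputs, labels)
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("backward")
    optimizer.zero_grad()
    loss.backward()       # gradient computation (and, with Horovod, allreduce)
    torch.cuda.nvtx.range_pop()

    torch.cuda.nvtx.range_push("update")
    optimizer.step()      # apply the synchronized gradients to the parameters
    torch.cuda.nvtx.range_pop()
    return loss
```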
We use NVProf along with NVTX annotations to delimit these stages. We also calculate the overall compute efficiency of the GPU using the CUDA-kernel-level data from Pyprof. The number of floating-point operations per second is measured as the ratio of the sum of the FLOPs to the sum of the durations of all kernels. To obtain the compute efficiency of the GPU, we divide the measured value by the theoretical value for the given GPU. Similarly, the memory bandwidth efficiency is computed as the ratio of the measured bandwidth of the GPU (the total number of bytes in and out of the GPU over the sum of the kernel durations) to its theoretical bandwidth.
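The sketch below spells out this computation on made-up per-kernel records; the field names are ours, and the peak numbers are the K40m values cited later in Section 5.3.

```python
# Made-up kernel-level records of the kind Pyprof produces; the field
# names are illustrative.
kernels = [
    {"flops": 2.1e9, "bytes": 1.5e8, "duration_s": 3.0e-3},
    {"flops": 0.4e9, "bytes": 2.2e8, "duration_s": 1.1e-3},
]

total_time = sum(k["duration_s"] for k in kernels)
achieved_flops = sum(k["flops"] for k in kernels) / total_time   # FLOP/s
achieved_bw = sum(k["bytes"] for k in kernels) / total_time      # bytes/s

PEAK_FLOPS = 5.046e12   # theoretical fp32 peak of the NVIDIA Tesla K40m (FLOP/s)
PEAK_BW = 288.4e9       # theoretical memory bandwidth of the K40m (bytes/s)

print(f"compute efficiency:   {achieved_flops / PEAK_FLOPS:.2%}")
print(f"bandwidth efficiency: {achieved_bw / PEAK_BW:.2%}")
```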
4. Sensitivity Analysis. We study the impact of the configuration parameters on the scaling efficiency of the application using Taguchi Methods [50]. The goal of such methods is to reduce the number of experiments needed to determine which factor(s) impact a predetermined target variable the most. We do not use the method to design our experiment, but only to evaluate parameter importance given our experimental results. For a given experiment, we use the signal-to-noise ratio (SN), defined for the Taguchi Methods as follows:
$$
SN = -10 \log \left( \frac{1}{N} \sum_{i=1}^{N} \frac{1}{y_i^2} \right)
$$

$N$: number of repetitions of the given experiment; $y_i$: value of the target variable for repetition $i$ of the experiment.
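A short sketch of this larger-is-better SN ratio on made-up scaling efficiencies, using the base-10 logarithm that is conventional for Taguchi SN ratios (an assumption on our part, as the formula writes only $\log$).

```python
import math

def sn_ratio(y):
    # Larger-is-better Taguchi signal-to-noise ratio, in decibels.
    n = len(y)
    return -10.0 * math.log10(sum(1.0 / (yi * yi) for yi in y) / n)

reps = [0.93, 0.91, 0.94]                # target variable over N = 3 repetitions
print(f"SN = {sn_ratio(reps):.2f} dB")   # -> SN = -0.66 dB

# Parameter importance is then ranked by the range of the mean SN ratio
# across the levels of each parameter: the larger the range, the more
# impactful the parameter.
```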
### 3.3 DL Model Used in the Study

We perform our experiments on a state-of-the-art, industry-standard model and dataset: ResNet50 v1.5. We use training scripts implemented by NVIDIA as part of their state-of-the-art reference examples in PyTorch [39]. The model and training scripts are configured for image classification on the ImageNet dataset.

Compared to the original definition [23], ResNet50 v1.5 has stride $= 2$ in the $3 \times 3$ convolutions, rather than in the $1 \times 1$ convolutions, in the bottleneck blocks that require downsampling. This comes at a small increase in computational cost, but is beneficial in terms of accuracy. The modification was first introduced in a Lua Torch re-implementation of ResNet from Facebook [18], and has since been widely adopted. A more detailed overview of ResNet variants can be found in [24], where ResNet v1.5 is referred to as ResNet-B.

NVIDIA's scripts provide an implementation that is highly tuned both in terms of hyperparameters and final accuracy, and in terms of training-time performance. As such, it is more representative of the current state of the art than PyTorch's reference ImageNet training implementation [42].

With respect to performance, NVIDIA's implementation integrates tightly with the DALI library for data loading and preprocessing. DALI has multiple advantages over PyTorch's own dataloaders: it supports reading input data stored in the TensorFlow TFRecord format, which we leverage as part of our setup, and it provides partially GPU-accelerated JPEG decoding and end-to-end GPU-accelerated preprocessing for the ImageNet dataset. With respect to accuracy, the scripts implement all the strategies described in [24], which together push the top-1 ImageNet accuracy to around 78.4%.

It is important to note that the DL model described above is used as an example for the validation study and to showcase the capabilities of the framework and the types of analysis integrated within it. The framework is designed with extensibility as a core principle and will require some effort from ML practitioners and developers to integrate their own models. The focus of this research is the approach and the methodology rather than the results themselves on a specific DL model.
## 4 Experiment Setup

For our experiment we target two production-grade distributed infrastructures. The two large-scale HPC clusters we run our experiments on are LISA ${}^{5}$ and Cartesius, ${}^{6}$ specifically their GPU islands. Note that both the hardware and the software stacks of the two systems are highly different. We show empirically that the containerized performance analysis workflow we propose is portable and produces reproducible results that can be compared between the two. These two types of production-ready clusters are comparable to what ML and DL practitioners use in practice to deploy training workflows. We have not included more clusters because we prefer to focus on showcasing the methodology and approach behind SURFBoard rather than the results themselves; we encourage the HPC community to extend and validate this approach on more infrastructures.
<table><tr><td>Software / Library</td><td>Version</td></tr><tr><td>PyTorch</td><td>1.2.0</td></tr><tr><td>Python</td><td>3.6</td></tr><tr><td>CUDA</td><td>10.0</td></tr><tr><td>DALI</td><td>0.18.0</td></tr><tr><td>TAU</td><td>2.28.1</td></tr><tr><td>PyProf</td><td>3.6.0</td></tr><tr><td>CuDNN</td><td>7</td></tr><tr><td>OS</td><td>Ubuntu 18.04</td></tr></table>

Table 2: Software versions for the container environment.

<table><tr><td/><td>Cartesius</td><td>LISA</td></tr><tr><td>Nodes</td><td>8,16,32,48</td><td>1,4,8</td></tr><tr><td>GPU per Node</td><td>2</td><td>4</td></tr><tr><td>Gradient Precision</td><td>fp16, fp32</td><td>fp16, fp32</td></tr><tr><td>Compute Precision</td><td>fp32</td><td>fp32</td></tr><tr><td>Batch size per GPU</td><td>32, 64</td><td>32, 64</td></tr><tr><td>Data Loader</td><td>dali-gpu, dali-cpu-to-gpu</td><td>dali-gpu, dali-cpu-to-gpu</td></tr><tr><td>Workers</td><td>2,8</td><td>2,4</td></tr></table>

Table 3: Parameters considered for the experiment on both Cartesius and LISA systems.
### 4.1 Cartesius Hardware Specification

The Cartesius GPU island consists of 66 Bullx B515 processing nodes. Each node is equipped with a 16-core Intel E5-2450 v2 CPU (Ivy Bridge microarchitecture) operating at 2.5 GHz and 96 GB of memory. Each node is also equipped with two K40m GPUs and two Mellanox Connect-X3 InfiniBand adapters, with a maximum throughput of 56 Gbps each. For our experiments we utilized up to 48 nodes. Cartesius runs RedHat 4.8.5-39, Linux version 3.10.0-1127.8.2.el7.x86_64. We used CUDA-enabled OpenMPI 3.1.2 to transfer data buffers directly between GPUs over the InfiniBand network.
### 4.2 LISA Hardware Specification

The LISA cluster consists of 25 GPU-accelerated nodes, each equipped with Intel Xeon Bronze 3104 CPUs (12-core, 1.7 GHz), 256 GB of memory, and four GPU accelerators, either NVIDIA GeForce 1080Ti or NVIDIA Titan V GPUs. The nodes are connected through 40 Gbps Ethernet. LISA runs Debian GNU/Linux version 10 (buster). We used OpenMPI 3.1.4 for the multi-node scaling experiments.
---

${}^{5}$ https://userinfo.surfsara.nl/systems/lisa

${}^{6}$ https://userinfo.surfsara.nl/systems/cartesius

---
### 4.3 Software Environment inside the Container

Table 2 outlines the software environment inside the container for both LISA and Cartesius. This is one of the main advantages of using containers: the same software environment can be used on both HPC clusters, enabling reproducibility across many types of software and hardware infrastructure.
### 4.4 Achieving Empirical Reproducibility

We profile the communication of the application with TAU on both Cartesius and LISA, using combinations of the parameters presented in Table 3. In order to gather a statistically valid sample, we conduct at least 10 experiments for each combination of parameters, each of them run for a total of 50 batches. Since our analysis gathers metrics at the batch level, the resulting 500 batches per configuration achieve statistical significance and are in line with current reproducibility standards [37, 40, 52]. In our figures, we present the median of a given metric over the 10 experimental runs along with the 95% confidence interval for the communication data. Additionally, we conduct GPU profiling on Cartesius, gathering 10 experiments for each combination of parameters in Table 3 for a total of 25 batches using NVProf, and 25 batches using NVProf with kernel profiling enabled via Pyprof.
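As an illustration of this reporting style, the sketch below computes the median and a bootstrapped, nonparametric 95% confidence interval from made-up durations of 10 repeated runs; the exact CI procedure is our assumption, not necessarily the one used by the framework.

```python
import random
import statistics

# Made-up durations (seconds) of 10 repetitions of one configuration.
durations = [104.2, 101.8, 103.5, 105.1, 102.9, 103.0, 104.8, 102.2, 103.9, 103.3]

def bootstrap_median_ci(samples, n_boot=10_000, alpha=0.05, seed=0):
    # Resample with replacement, take the median of each resample, and
    # read the CI off the percentiles of the bootstrap distribution.
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    )
    lo = medians[int(n_boot * alpha / 2)]
    hi = medians[int(n_boot * (1 - alpha / 2)) - 1]
    return statistics.median(samples), (lo, hi)

med, (lo, hi) = bootstrap_median_ci(durations)
print(f"median = {med:.1f} s, 95% CI = [{lo:.1f}, {hi:.1f}] s")
```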
## 5 Results and Visualization

In this section, we showcase results and visualizations of data that can be produced using the framework presented in this paper. We present the data from higher (experiment duration and communication) to lower (GPU efficiencies and CUDA kernels) levels of the software stack. The experiments we performed are typical analyses performed by DL practitioners, and the conclusions we draw can help practitioners build training infrastructure that suits their workloads, identify bottlenecks, and determine which parameters matter in their setups. Moreover, this kind of analysis shows that SURFBoard helps practitioners analyze the performance of their DL workflows in a reproducible manner, across multiple types of infrastructure.

Lessons Learned. The main lessons learned from analyzing the empirical experiments performed in our study are the following:
1. We confirm that PyTorch, when coupled with Horovod, achieves good scalability (>90%) for ResNet-50-like workloads; see Figures 3 and 4.

2. On infrastructure like LISA and Cartesius, where resources are not shared, there is little overall performance variability, especially in the MPI collective operations; see, for example, the whiskers in Figures 5 and 6.

3. The computation throughput of ResNet-50-like workloads is neither memory-bound nor compute-bound. The bottlenecks lie in waiting for remote data from other GPUs to be transferred by the DL framework; see Section 5.3.

4. On some types of machines, the CPU-to-GPU ratio is important, as the GPUs need to be fed data quickly enough to achieve good performance. Our framework can be used by system designers to detect bottlenecks like these and identify areas of improvement. Users could perform similar analyses with their own workloads to determine an appropriate ratio of CPUs to GPUs; see Section 5.4.

5. The number of preprocessing workers matters more than the batch size on LISA; the opposite holds on architectures like Cartesius. Practitioners should perform similar analyses to decide which parameters are most important for their workloads and how performance could be improved using this knowledge; see Section 5.4.

6. SURFBoard can be deployed on two different large-scale HPC infrastructures. It can further help practitioners identify behavioral differences between large-scale infrastructures and the deployment parameters that cause them.
### 5.1 Scalability

Execution Time. We measure the duration of the experiments as the total runtime of the TAU application on a GPU involved in the computation. Experiment durations for Cartesius and LISA are presented in Figures 3 and 4, respectively. For both systems, the duration scales linearly with the number of GPUs, likely because the communication overhead becomes increasingly significant at higher GPU counts. On Cartesius, experiments using fp32 gradients all take longer than those with fp16 gradients; on LISA, the behavior is the opposite.

Since gradient casting requires additional CPU cycles, and the GPU-to-CPU throughput ratio on LISA is higher than on Cartesius, we hypothesize that this difference in behavior between the two systems is caused by a CPU-induced bottleneck on LISA. These results illustrate the utility of our framework to system designers: for example, our results suggest provisioning more CPUs on LISA as a relatively inexpensive way to increase DL performance. Alternatively, gradient casting could be offloaded to the GPU or another accelerator to relieve the bottleneck.


Figure 3: Duration and scaling efficiencies of 50-batch experiments on Cartesius with different configurations. The experiments depicted use 32 images per batch. The legend reads as follows: <number of workers>w, <preprocessing>, <gradient precision>. Note: the vertical axis does not start at 0, for better visibility.



Figure 4: Duration and scaling efficiencies of 50-batch experiments on LISA with different configurations. The experiments depicted use 32 images per batch. The legend reads as follows: <number of workers>w, <preprocessing>, <gradient precision>. Note: the vertical axis does not start at 0, for better visibility.

Scaling efficiency. We compute and present the scaling efficiencies of the application for Cartesius and LISA in Figures 3 and 4, respectively. The efficiencies are computed as the ratio of the duration of the baseline experiment to that of the given experiment. On Cartesius, the baseline is the experiment using 8 nodes (16 GPUs), whereas on LISA the baseline is the experiment using 1 node (4 GPUs). We used different baselines for computing scalability to show the flexibility of our framework and to model practitioners' behavior when scaling DL computations: for large-scale training, using few resources is too time-consuming and early scale-out is needed. Our experiments also show that using fp32 gradients results in lower scaling efficiency, while using CPU preprocessing yields a higher one, on both systems.
### 5.2 Communication

Our framework uses TAU to profile communication through several MPI call metrics. Figures 5 and 6 present the sum of the durations of MPI_Allreduce, MPI_Bcast, and MPI_Gather across all GPUs, all of which practitioners consider extremely important for DL performance. It is also possible to gather other metrics, such as the number of MPI calls and the total volume of messages sent across GPUs; see Figures 7, 8, 9, and 10. MPI_Allreduce is used during the model update phase to share the gradient of each weight of the neural network. ResNet50 has approximately 23 million parameters. The experiment runs on each GPU for 50 batches, and the gradients are stored using 2 or 4 bytes (half- or single-precision floating-point format). As a consequence, we expect the total volume of messages exchanged for MPI_Allreduce to be:
$$
\text{Volume} = N_{\text{weights}} \times \text{grad\_prec} \times N_{\text{batches}} \times N_{GPU},
$$
where grad_prec is the number of bytes used to store each gradient (4 bytes for fp32 gradients), $N_{\text{batches}}$ is the number of batches (50), and $N_{GPU}$ is the number of GPUs. The total volume of messages exchanged for MPI_Allreduce presented in Figures 9 and 10 is in line with the expected volume. Using similar analyses, practitioners can identify bottlenecks or anomalous behavior at the MPI collective operation and networking layers when performing large-scale training.
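For instance, plugging one Cartesius configuration into the formula (our arithmetic, using the rounded 23-million-parameter count):

```python
# Expected MPI_Allreduce volume for ResNet50 with fp32 gradients,
# 50 batches, and 16 GPUs (one Cartesius configuration).
n_weights = 23e6   # gradients allreduced per batch
grad_prec = 4      # bytes per gradient (fp32)
n_batches = 50
n_gpu = 16

volume = n_weights * grad_prec * n_batches * n_gpu
print(f"expected volume = {volume / 1e9:.1f} GB")   # -> 73.6 GB
```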


Figure 5: MPI call durations on LISA for 2 workers, CPU preprocessing, 32 images per batch, and fp32 gradients. Note: bars are median values over 10 runs, whiskers show the 95% confidence interval, the vertical axis is logarithmic, and lower is better.



Figure 6: MPI call durations on Cartesius for 2 workers, CPU preprocessing, 32 images per batch, and fp32 gradients. Note: bars are median values over 10 runs, whiskers show the 95% confidence interval, the vertical axis is logarithmic, and lower is better.
### 5.3 Compute and Memory Efficiency

Batch duration. Our framework combines NVProf with NVTX annotations to delimit the training stages (forward, backward, update) and obtain more detail about the training. The batch duration and training-stage durations are visualized in Figure 11. We observe that the duration of the batches scales with the number of nodes/GPUs. In particular, the backward phase of the training, which includes gradient synchronization over InfiniBand, takes longer for larger numbers of GPUs, whereas the forward and update phases stay constant. We also note the increased variability in batch duration at larger scales, caused by corresponding variability in the time required for gradient synchronization over the network fabric and by uncorrelated performance jitter between the GPU workers, which can have a variety of causes (OS scheduling, resource contention, garbage collection). Practitioners can use this type of analysis to decide which parts of the per-batch computation are bottlenecks or variable in performance.


Figure 7: Number of MPI messages exchanged across all GPUs on Cartesius for 2 workers, CPU preprocessing, 32 images per batch, and fp32 gradients. Note: bars are median values over 10 runs, whiskers show the 95% confidence interval, the vertical axis is logarithmic, and lower is better.



Figure 8: Number of MPI messages exchanged across all GPUs on LISA for 2 workers, CPU preprocessing, 32 images per batch, and fp32 gradients. Note: bars are median values over 10 runs, whiskers show the 95% confidence interval, the vertical axis is logarithmic, and lower is better.

GPU performance metrics. Using Pyprof, we measure kernel-level data and compute the utilized memory bandwidth and compute capacity of the GPUs on Cartesius. We present the results in Table 4, along with the efficiencies relative to the theoretical performance of the specific GPU model we ran the experiments on. The NVIDIA Tesla K40m has a peak memory bandwidth of 288.4 GB/s and a theoretical fp32 compute performance of 5.046 TFLOP/s. Table 4 shows that both the memory and the compute efficiency are low (below 18%). This is due to the mismatch between the model implementation and the hardware we tested on. Given that both measured efficiencies are below 20%, the application appears to be neither memory- nor compute-bound. Low GPU utilization (below 50%) is expected for deep learning frameworks, as noted in [12, 57]. Using this kind of analysis, practitioners can identify which parts of the GPU implementation represent bottlenecks.


Figure 9: Volume in bytes of MPI messages exchanged across all GPUs on Cartesius for 2 workers, CPU preprocessing, 32 images per batch, and fp32 gradients. Note: bars are median values over 10 runs, whiskers show the 95% confidence interval, the vertical axis is logarithmic, and lower is better.



Figure 10: Volume in bytes of MPI messages exchanged across all GPUs on LISA for 2 workers, CPU preprocessing, 32 images per batch, and fp32 gradients. Note: bars are median values over 10 runs, whiskers show the 95% confidence interval, the vertical axis is logarithmic, and lower is better.

CUDA kernels. Pyprof also allows retrieving a detailed kernel summary containing, among other metrics, the time elapsed, the number of bytes in and out of the GPU, and the number of floating-point operations performed during the execution of a given kernel. Table 5 presents the durations of the 10 most used kernels during a single batch. According to [16], the top four kernels in Table 5 correspond to backward-pass convolution, forward-pass convolution, the fully connected layer (forward and backward), and element-wise addition, respectively. These results are in accordance with expectations given the structure of the ResNet50 DNN. Using this type of analysis, practitioners can see which GPU kernels take the most computation time and identify potential bottlenecks.


Figure 11: Scaling of the training-stage durations on Cartesius. Configuration presented: 2 workers, dali-cpu-to-gpu, 32 images per batch, fp32 gradients, fp32 compute. Lower is better.

<table><tr><td>GB/s</td><td>Bandwidth efficiency</td><td>TFLOP/s</td><td>Compute efficiency</td></tr><tr><td>51.16</td><td>16.23 %</td><td>0.6964</td><td>17.77 %</td></tr></table>

Table 4: NVIDIA Tesla K40m GPU performance measures on Cartesius for 8 nodes, 2 workers, CPU preprocessing, 32 images per batch, and fp32 gradients. Both efficiencies are computed with respect to the theoretical performance of the GPU.
### 5.4 Parameter Sensitivity

To get a more detailed understanding of the effect of each parameter considered in this work, we study the impact of the number of GPUs, the number of workers, the type of data loader, the batch size, and the gradient precision on the scaling efficiency using Taguchi Methods, as described in Section 3.2. To do so, we compute the range of the SN ratio for each of the parameters based on the set of experiment configurations performed. Since the goal is to determine the ranking of the parameters for each system, we normalize the range of the SN ratio and present the order of importance in Figure 12a for Cartesius and Figure 12b for LISA. The larger the range of the SN ratio, the more impactful a parameter is.

Disregarding the number of GPUs, which is obviously the most impactful parameter for scaling efficiency, on both systems the gradient precision is the second-most important parameter. Because the gradients are sent between nodes/GPUs, lower-precision gradients result in a lower overall volume of data exchanged via MPI and therefore better scalability of the application. The least impactful parameter on both systems is the data loader. Interestingly, batch size and number of workers switch places in their effect on scaling efficiency on the two systems under analysis (note the different coloring for number of workers and batch size in Figures 12a and 12b). Conceptually, batch size should be the more important parameter for scalability, as it directly correlates with the time spent computing and decreases the relative importance of communication in the total application time. This expectation is confirmed on Cartesius. On LISA, the relative under-provisioning of CPU compute with respect to GPU compute may cause the difference in the relative importance of the number of preprocessing workers, although it must be noted that the absolute differences between configurations in this respect are small (see Figure 4). Using this type of analysis, practitioners can study in depth which parameters are most important for their DL training workflows and decide which other analyses to zoom in on to identify possible bottlenecks and opportunities for improvement.
<table><tr><td>Kernel name</td><td>Time (s)</td><td>$\%$ total</td></tr><tr><td>cudnn::detail::wgrad_alg0_engine</td><td>0.2597</td><td>15.37</td></tr><tr><td>cudnn::detail::implicit_convolve_sgemm</td><td>0.2395</td><td>14.17</td></tr><tr><td>sgemm_sm35_ldg_nt_64x16x64x16x16</td><td>0.1679</td><td>9.935</td></tr><tr><td>elementwise_kernel</td><td>0.1522</td><td>9.009</td></tr><tr><td>sgemm_largek_lds64</td><td>0.1332</td><td>7.882</td></tr><tr><td>cudnn::detail::dgrad_engine</td><td>0.1181</td><td>6.989</td></tr><tr><td>sgemm_sm35_ldg_nn_64x16x64x16x16</td><td>0.1023</td><td>6.055</td></tr><tr><td>cudnn::detail::dgrad_alg1_engine</td><td>0.0933</td><td>5.521</td></tr><tr><td>cudnn::detail::bn_bw_1C11_kernel_new</td><td>0.0913</td><td>5.402</td></tr><tr><td>cudnn::detail::bn_fw_tr_1C11_kernel_NCHW</td><td>0.0415</td><td>2.457</td></tr></table>

Table 5: Duration and percent of total runtime of the 10 most used kernels during a single training batch on Cartesius for 2 workers, CPU preprocessing, 32 images per batch, gradient fp16.
## 6 Limitations

We discuss the limitations and threats to validity of our work. The scope and goal of this work is to provide practitioners with a framework for the reproducible performance analysis of ML workloads, not to present an in-depth performance analysis and tuning of a given workload. Instead, we provide a case study of one workload on two different clusters, showcasing the reproducibility of our work and the performance differences between the two clusters. SURFBoard is extensible and can be tuned to accept novel workloads, ML frameworks, and container orchestration tools. With this in mind, we identify the following limitations of our work.

1. Single Model. We only validated our framework on the de facto standard computer vision workload and dataset. This workload is widely used in the ML community, and we chose it so that practitioners can compare the results we gathered against their own. However, adding other models and datasets is feasible. We are working toward adding new models and open-sourcing our workflows and results.

2. PyTorch. Our case study and results are obtained only with PyTorch, which is widely used by practitioners. However, swapping PyTorch for TensorFlow in our workflows is only an implementation effort. All other steps and components can be re-used from our PyTorch proof-of-concept.



Figure 12: Ranking of the experimental parameters from most to least important for maximizing the scaling efficiency of the application on both systems. Note that what matters here is the ordering of the parameters, not their magnitudes, and that the two figures are not comparable in absolute values.

3. Difficulty of Extensibility. We have built SURFBoard with extensibility in mind, allowing each component to be replaced by others. However, we have not studied in depth, with independent programmers and practitioners, how easy extensibility is to achieve. We plan to study this in the future through user studies that can help us improve our framework.
## 7 Related Work

In this section we discuss work related to our approach. We identify four main categories of related work: (i) performance reproducibility in large-scale systems; (ii) ML performance analysis; (iii) ML workflows and orchestration; and (iv) ML/DL benchmarks. We discuss each in detail and contrast it with our own approach.
Performance reproducibility in large-scale systems. Performance reproducibility in large-scale systems is an elusive goal, with no community-wide agreed methodology for achieving it, although several authors have addressed the topic. In HPC, Hoefler and Belli [26] propose a set of 12 principles for credible performance evaluation. In cloud computing, which is much more variable than HPC environments [37], Papadopoulos et al. [40] propose a set of methodological principles for reproducible performance evaluation. However, these seem insufficient, as Uta et al. [52] identify multiple types of variability that strongly affect the reproducibility of results. In our work, we leverage the findings of such studies and perform sanity checks on our results. We also adhere to reproducible methods for obtaining performance results, such as performing many repeated experiments (on different days), computing nonparametric confidence intervals for medians, and analyzing variability.

ML performance analysis. Due to its inherent computational demands, ML performance analysis is currently of utmost importance. As outlined by Amodei and Hernandez [3], the amount of compute used to train the largest AI models increased exponentially with a 3.4-month doubling time, far outpacing Moore's law and resulting in a 300,000x increase between 2012 and 2018. Furthermore, Hernandez and Brown [25] estimate that algorithmic efficiency improved by a factor of 25x in the same period, leading to a 7.5-million-fold increase in the effective training compute available to the largest AI experiments. Dakkak et al. [14] propose the MLModelScope toolkit, which includes performance analysis along with model evaluation in a reproducible, containerized fashion; however, the toolkit focuses on non-distributed training only. Modern ML models are trained in a distributed fashion, using a variety of communication interconnects (e.g., NVLink, InfiniBand, OmniPath, Ethernet, PCIe) and employing different parallelization strategies. Awan et al. [6] aim to measure these characteristics and propose improvements to communication patterns, with visualization tools for HPC GPU-based clusters proposed by Kousha et al. [32]. Distributed ML training performance and communication are also analyzed in [28] and [5]; however, there is no complete framework that allows reproducible performance analysis of ML workloads on modern distributed systems. With SURFBoard, we offer a common ground for all these types of analyses to be performed in comparable settings and in a reproducible fashion.

ML workflows and orchestration. As depicted in Figure 1, machine learning workflows are composed of elements spanning a broad software stack, ranging from efficient GPU kernel execution, CPU-GPU work partitioning, and efficient storage access to multi-node orchestration. The interaction between these elements often leads to complex systems producing results that are very challenging to reproduce, both from a numerical (i.e., model accuracy) perspective [27] and from a performance perspective [7]. Typical training workflows, such as the computer vision one presented in this work, include stages such as data preprocessing and augmentation [13], hyperparameter tuning [2, 36], or model interpretation [51]. The high computational complexity of the training process often requires the workflow to be executed in a distributed fashion, adding dependencies on distribution mechanisms such as Horovod [43] or PyTorch Distributed [35] and on orchestration tools such as Kubernetes or SLURM. SURFBoard offers practitioners an easy-to-use and configurable framework for gathering performance data from all the components of such training workflows.

ML/DL benchmarks. Benchmarking large-scale system behavior under diverse workloads like HPC and big data is a well-studied topic. Recently, with the highly increased interest in ML and DL workloads, several benchmarks [7, 14, 21, 30, 38, 54] emerged to cover this need. However, as the field is still young, none of them has emerged as a community- or industry-wide de facto benchmark. It is also unclear at the moment how easily these benchmarks can be ported to all possible infrastructure that runs ML/DL code. Our approach helps in this sense by being able to build reproducible instances of these benchmarks that run on many types of large-scale infrastructure. Moreover, our containerized approach ensures a level playing field (i.e., common software infrastructure) that cuts down on technology- and software-induced performance differences.

Overall, SURFBoard presents a more holistic approach to achieving performance reproducibility in large-scale systems when running ML workloads. Even though SURFBoard is related to and contains technologies from each of the aforementioned categories, it is more than the sum of its components, as it is the first enabler of reproducible ML performance analysis at scale.
## 8 Conclusion

Many large-scale software systems suffer from poor performance reproducibility. Machine learning training workflows are no exception, as their performance analysis is largely an expert-driven task, tightly coupled to the underlying physical and software ecosystems. This hinders productivity, knowledge sharing, and, ultimately, energy efficiency.

We presented our approach to supporting reproducible performance analysis for machine learning workflows through a containerized framework. This framework can run on many container-ready types of infrastructure, such as HPC clusters and even clouds. Moreover, it gathers performance results at arbitrary levels of the software stack and is extensible, so that more experienced users can add custom analyses.

We validated our framework through an empirical evaluation on two GPU-enabled, large-scale production HPC clusters with different software stacks. Our analysis shows that our framework is portable and gathers performance data ranging from high-level MPI metrics down to FLOP efficiency for CUDA kernels, as well as kernel-level data for each processing batch. For future work, we plan to extend our framework with more types of analysis tools, implement additional workloads from state-of-the-art benchmarks, and evaluate them on more types of large-scale infrastructure.
## References
[1] Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. Tensorflow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pages 265-283, 2016.

[2] Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2623-2631, 2019.

[3] Dario Amodei and Danny Hernandez. AI and compute. https://blog.openai.com/ai-and-compute, 2018.

[4] Kai Arulkumaran, Antoine Cully, and Julian Togelius. Alphastar. Proceedings of the Genetic and Evolutionary Computation Conference Companion, Jul 2019.

[5] Ammar Ahmad Awan, Jeroen Bédorf, Ching-Hsiang Chu, Hari Subramoni, and Dhabaleswar K Panda. Scalable distributed dnn training using tensorflow and cuda-aware mpi: Characterization, designs, and performance evaluation. In 2019 19th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), pages 498-507. IEEE, 2019.

[6] Ammar Ahmad Awan, Arpan Jain, Ching-Hsiang Chu, Hari Subramoni, and Dhabaleswar K Panda. Communication profiling and characterization of deep-learning workloads on clusters with high-performance interconnects. IEEE Micro, 40(1):35-43, 2019.

[7] Tal Ben-Nun, Maciej Besta, Simon Huber, Alexandros Nikolaos Ziogas, Daniel Peter, and Torsten Hoefler. A modular benchmarking infrastructure for high-performance and reproducible deep learning. In 2019 IEEE International Parallel and Distributed Processing Symposium (IPDPS), pages 66-77. IEEE, 2019.

[8] James Bergstra, Frédéric Bastien, Olivier Breuleux, Pascal Lamblin, Razvan Pascanu, Olivier Delalleau, Guillaume Desjardins, David Warde-Farley, Ian Goodfellow, Arnaud Bergeron, et al. Theano: Deep learning on gpus with python. In NIPS 2011, BigLearning Workshop, Granada, Spain, volume 3, pages 1-48. Citeseer, 2011.

[9] Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Debiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme, Chris Hesse, et al. Dota 2 with large scale deep reinforcement learning. arXiv preprint arXiv:1912.06680, 2019.

[10] Thomas Bradley. Gpu performance analysis and optimisation. NVIDIA Corporation, 2012.

[11] Juan Carrasquilla and Roger G Melko. Machine learning phases of matter. Nature Physics, 13(5):431-434, 2017.

[12] Cody Coleman, Daniel Kang, Deepak Narayanan, Luigi Nardi, Tian Zhao, Jian Zhang, Peter Bailis, Kunle Olukotun, Chris Ré, and Matei Zaharia. Analysis of dawnbench, a time-to-accuracy machine learning performance benchmark. ACM SIGOPS Operating Systems Review, 53(1):14-25, 2019.

[13] Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation policies from data. arXiv preprint arXiv:1805.09501, 2018.

[14] Abdul Dakkak, Cheng Li, Jinjun Xiong, and Wen-mei Hwu. Mlmodelscope: A distributed platform for model evaluation and benchmarking at scale. arXiv preprint arXiv:2002.08295, 2020.

[15] Jeff Dean, David Patterson, and Cliff Young. A new golden age in computer architecture: Empowering the machine-learning revolution. IEEE Micro, 38(2):21-29, 2018.

[16] Shi Dong and David Kaeli. Dnnmark: A deep neural network benchmark suite for gpus. In Proceedings of the General Purpose GPUs, pages 63-72. 2017.

[17] Dmitry Duplyakin, Alexandru Uta, Aleksander Maricq, and Robert Ricci. In datacenter performance, the only constant is change. In 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, CCGRID 2020, Melbourne, Australia, May 11-14, 2020, pages 370-379. IEEE, 2020.

[18] Facebook. fb.resnet.torch. https://github.com/facebookarchive/fb.resnet.torch.

[19] Dror G Feitelson. From repeatability to reproducibility and corroboration. ACM SIGOPS Operating Systems Review, 49(1):3-11, 2015.

[20] Edgar Gabriel, Graham E Fagg, George Bosilca, Thara Angskun, Jack J Dongarra, Jeffrey M Squyres, Vishal Sahay, Prabhanjan Kambadur, Brian Barrett, Andrew Lumsdaine, et al. Open mpi: Goals, concept, and design of a next generation mpi implementation. In European Parallel Virtual Machine/Message Passing Interface Users' Group Meeting, pages 97-104. Springer, 2004.

[21] Wanling Gao, Chunjie Luo, Lei Wang, Xingwang Xiong, Jianan Chen, Tianshu Hao, Zihan Jiang, Fanda Fan, Mengjia Du, Yunyou Huang, et al. Aibench: towards scalable and comprehensive datacenter ai benchmarking. In International Symposium on Benchmarking, Measuring and Optimization, pages 3-9. Springer, 2018.

[22] Markus Geimer, Felix Wolf, Brian JN Wylie, Erika Ábrahám, Daniel Becker, and Bernd Mohr. The scalasca performance toolset architecture. Concurrency and Computation: Practice and Experience, 22(6):702-719, 2010.

[23] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770-778, 2016.

[24] Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. Bag of tricks for image classification with convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 558-567, 2019.

[25] Danny Hernandez and Tom B Brown. Measuring the algorithmic efficiency of neural networks. arXiv preprint arXiv:2005.04305, 2020.

[26] Torsten Hoefler and Roberto Belli. Scientific benchmarking of parallel computing systems: twelve ways to tell the masses when reporting performance results. In Proceedings of the international conference for high performance computing, networking, storage and analysis, pages 1-12, 2015.

[27] Matthew Hutson. Artificial intelligence faces reproducibility crisis, 2018.

[28] Arpan Jain, Ammar Ahmad Awan, Quentin Anthony, Hari Subramoni, and Dhabaleswar K Panda. Performance characterization of dnn training using tensorflow and pytorch on modern clusters. In 2019 IEEE International Conference on Cluster Computing (CLUSTER), pages 1-11. IEEE, 2019.

[29] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, pages 675-678, 2014.

[30] Zihan Jiang, Wanling Gao, Lei Wang, Xingwang Xiong, Yuchen Zhang, Xu Wen, Chunjie Luo, Hainan Ye, Xiaoyi Lu, Yunquan Zhang, et al. Hpc ai500: a benchmark suite for hpc ai systems. In International Symposium on Benchmarking, Measuring and Optimization, pages 10-22. Springer, 2018.

[31] Andreas Knüpfer, Holger Brunst, Jens Doleschal, Matthias Jurenz, Matthias Lieber, Holger Mickler, Matthias S Müller, and Wolfgang E Nagel. The vampir performance analysis tool-set. In Tools for high performance computing, pages 139-155. Springer, 2008.

[32] Pouya Kousha, Bharath Ramesh, Kaushik Kandadi Suresh, Ching-Hsiang Chu, Arpan Jain, Nick Sarkauskas, Hari Subramoni, and Dhabaleswar K Panda. Designing a profiling and visualization tool for scalable and in-depth analysis of high-performance gpu clusters. In 2019 IEEE 26th International Conference on High Performance Computing, Data, and Analytics (HiPC), pages 93-102. IEEE, 2019.

[33] Thorsten Kurth, Mikhail Smorkalov, Peter Mendygral, Srinivas Sridharan, and Amrita Mathuriya. Tensorflow at scale: Performance and productivity analysis of distributed training with horovod, mlsl, and cray pe ml. Concurrency and Computation: Practice and Experience, 31(16):e4989, 2019.

[34] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.

[35] Shen Li, Yanli Zhao, Rohan Varma, Omkar Salpekar, Pieter Noordhuis, Teng Li, Adam Paszke, Jeff Smith, Brian Vaughan, Pritam Damania, et al. Pytorch distributed: Experiences on accelerating data parallel training. arXiv preprint arXiv:2006.15704, 2020.

[36] Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E Gonzalez, and Ion Stoica. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118, 2018.

[37] Aleksander Maricq, Dmitry Duplyakin, Ivo Jimenez, Carlos Maltzahn, Ryan Stutsman, and Robert Ricci. Taming performance variability. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), pages 409-425, 2018.

[38] Peter Mattson, Vijay Janapa Reddi, Christine Cheng, Cody Coleman, Greg Diamos, David Kanter, Paulius Micikevicius, David Patterson, Guenther Schmuelling, Hanlin Tang, et al. Mlperf: An industry standard benchmark suite for machine learning performance. IEEE Micro, 40(2):8-16, 2020.

[39] NVIDIA. DeepLearningExamples. https://github.com/NVIDIA/DeepLearningExamples.

[40] Alessandro Vittorio Papadopoulos, Laurens Versluis, André Bauer, Nikolas Herbst, Jóakim Von Kistowski, Ahmed Ali-Eldin, Cristina Abad, José Nelson Amaral, Petr Tůma, and Alexandru Iosup. Methodological principles for reproducible performance evaluation in cloud computing. IEEE Transactions on Software Engineering, 2019.

[41] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. In Advances in neural information processing systems, pages 8026-8037, 2019.

[42] Pytorch. Pytorch imagenet training. https://github.com/pytorch/examples/tree/master/imagenet.

[43] Alexander Sergeev and Mike Del Balso. Horovod: fast and easy distributed deep learning in tensorflow. arXiv preprint arXiv:1802.05799, 2018.

[44] Sameer S Shende and Allen D Malony. The tau parallel performance system. The International Journal of High Performance Computing Applications, 20(2):287-311, 2006.

[45] Marc Snir, William Gropp, Steve Otto, Steven Huss-Lederman, Jack Dongarra, and David Walker. MPI-the Complete Reference: the MPI core, volume 1. MIT Press, 1998.

[46] Victoria Stodden, Peixuan Guo, and Zhaokun Ma. Toward reproducible computational research: an empirical analysis of data and code policy adoption by journals. PloS one, 8(6):e67111, 2013.

[47] Victoria Stodden, Friedrich Leisch, and Roger D Peng. Implementing reproducible research. CRC Press, 2014.

[48] Victoria Stodden, Marcia McNutt, David H Bailey, Ewa Deelman, Yolanda Gil, Brooks Hanson, Michael A Heroux, John PA Ioannidis, and Michela Taufer. Enhancing reproducibility for computational methods. Science, 354(6317):1240-1241, 2016.

[49] Emma Strubell, Ananya Ganesh, and Andrew McCallum. Energy and policy considerations for deep learning in nlp. arXiv preprint arXiv:1906.02243, 2019.

[50] Genichi Taguchi. Quality engineering (taguchi methods) for the development of electronic circuit technology. IEEE Transactions on Reliability, 44(2):225-229, 1995.

[51] Erico Tjoa and Cuntai Guan. A survey on explainable artificial intelligence (xai): towards medical xai. arXiv preprint arXiv:1907.07374, 2019.

[52] Alexandru Uta, Alexandru Custura, Dmitry Duplyakin, Ivo Jimenez, Jan Rellermeyer, Carlos Maltzahn, Robert Ricci, and Alexandru Iosup. Is big data performance reproducible in modern cloud networks? In 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20), pages 513-527, 2020.

[53] Jan Vitek and Tomas Kalibera. Repeatability, reproducibility and rigor in systems research. In 2011 Proceedings of the Ninth ACM International Conference on Embedded Software (EMSOFT), pages 33-38. IEEE, 2011.

[54] Yu Emma Wang, Gu-Yeon Wei, and David Brooks. Benchmarking tpu, gpu, and cpu platforms for deep learning. arXiv preprint arXiv:1907.10701, 2019.

[55] Vincent M Weaver, Matt Johnson, Kiran Kasichayanula, James Ralph, Piotr Luszczek, Dan Terpstra, and Shirley Moore. Measuring energy and power with papi. In 2012 41st International Conference on Parallel Processing Workshops, pages 262-268. IEEE, 2012.

[56] Derek Wong and Stephen Yip. Machine learning classifies cancer, 2018.

[57] Hongyu Zhu, Mohamed Akrout, Bojian Zheng, Andrew Pelegris, Amar Phanishayee, Bianca Schroeder, and Gennady Pekhimenko. Tbd: Benchmarking and analyzing deep neural network training. arXiv preprint arXiv:1803.06905, 2018.
papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/N6_kWfbABl1/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,454 @@
§ SURFBOARD: REPRODUCIBLE PERFORMANCE ANALYSIS FOR DISTRIBUTED MACHINE LEARNING WORKFLOWS
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Large-scale HPC infrastructures are enablers for scientific research in many domains. The recent advances in machine learning (ML) have led to an ever increasing demand for computation power, as well as the design of complex operational workflows. Understanding the performance and efficiency of these workflows is key to productivity, knowledge and model sharing, and energy efficiency. Even though there have been efforts in studying and designing portability protocols, performance analysis of large-scale ML is still an expert-driven task, tightly locked-in to specific physical and software infrastructure. Much like in other domains, this hinders reproducibility of both results and overall workflow performance. To overcome this challenge, we propose the design of a container-based framework for reproducible performance analysis of ML workflows at scale. We validate our framework using a case-study on two different large-scale production systems running ML workflows. We show empirically that our containerized approach is portable and allows arbitrarily low-level performance evaluation when run on two different, production-based HPC clusters with hundreds of GPUs. We report our findings on widely-used open-source software stacks and datasets and offer practitioners insights into what types of analyses our framework enables. To benefit the community, we open-source our software and results.
|
| 10 |
+
|
| 11 |
+
§ 1 INTRODUCTION
|
| 12 |
+
|
| 13 |
+
The rapid advancements in hardware performance in recent years have enabled the growth of machine learning (ML), and especially deep learning (DL) [34]. This field has gained immense traction and interest, and has made important contributions to many other domains, such as medicine [56] or physics [11]. This has led to an abundance of trained models and of systems [1, 8, 29, 41] to run such models. Dean et al. [15] show that the number of articles published per year in the field grows faster than compute power per Moore's law, reaching up to 100 articles per day at the end of 2018. Naturally, domain scientists and practitioners need to use such models and techniques to solve problems (more efficiently), and hence need to replicate the findings and setups of large amounts of published ML work.
|
| 14 |
+
|
| 15 |
+
Reproducibility [27, 46] is the Achilles' heel of computer science in general, and is much more difficult to achieve in large-scale computer systems [19, 53]. This is because of the sheer complexity of the physical infrastructure involved, interconnected through large-scale networks and many layers of software. ML scientists and practitioners not only need reproducible results, but also reproducible performance analysis. Understanding the performance of ML models and frameworks is key to achieving productivity, knowledge and model sharing, as well as energy efficiency. This is especially important since training has been shown to have significant environmental impact [49] for several ML models.
|
| 16 |
+
|
| 17 |
+
Although reproducible results are generally difficult to achieve, seminal work [47, 48] has been steering the community toward this goal. In this paper, we instead explore the domain of reproducible performance analysis in large-scale distributed ML. This is a significant and challenging problem, exacerbated by two aspects. First, most of the tools and systems involved are locked in to specific infrastructure, such as HPC clusters and supercomputers. Second, large-scale infrastructure is intrinsically variable in hardware performance [17, 37], which subsequently affects application performance. Although guidelines for reproducible performance evaluation exist [40, 52], it is unclear whether these are sufficient for ML performance evaluation.
|
| 18 |
+
|
| 19 |
+
Due to their high demand for compute power, ML and DL workloads are naturally suited for deployment in large-scale HPC clusters equipped with specialized hardware, such as GPUs, FPGAs, or TPUs [15]. Being deployed on HPC infrastructure, ML frameworks such as PyTorch [41] or TensorFlow [1] have evolved to run through specialized, tightly coupled MPI [45] interfaces. Although several performance evaluation frameworks for MPI applications exist, like TAU [44], Scalasca [22], or VAMPIR [31], these are insufficient to assess the performance of ML workloads on HPC clusters. This is because ML and DL frameworks have many levels of complexity and use specialized hardware for which practitioners have to understand bottlenecks as well. Special lower-level profilers, like nvprof [10] or PyProf,^1 are needed to gather lower-level metrics. Finally, all these metrics and measurements have to be combined at arbitrary levels in the software stack to understand the performance of specific components.
|
| 20 |
+
|
| 21 |
+
Moreover, reproducible performance analysis has become even more difficult due to the development of complex ML workflow pipelines. These workflows tend to continually expand and include increasingly complex models, pre-processing pipelines for data augmentation [13], diverse data formats and dimensionalities, or even complex simulators [4, 9]. The hardware ecosystem used for training these complex systems is evolving and becoming more diverse and heterogeneous, making reproducible performance analysis difficult. Furthermore, the low-level kernels implementing key ML primitives, on which high-level frameworks depend, are also in continuous development and contribute to the complexity of these workflows. The previously reported artificial-intelligence reproducibility crisis [27] is growing at an accelerated pace and covers the whole spectrum, from numerical reproducibility to performance reproducibility. In this work we focus on addressing the latter.
|
| 22 |
+
|
| 23 |
+
^1 https://github.com/NVIDIA/PyProf
|
| 24 |
+
|
| 25 |
+
Although several steps have been taken toward achieving in-depth performance evaluation for ML workloads, these are not fully reproducible and do not support complex workflows. Building blocks for performance evaluation include visualization techniques [32], lower-level performance characterization [5, 28], and benchmarking efforts [7, 30]. To enable practitioners to use such systems and benchmarks over a variety of infrastructure, and with better reproducibility guarantees, in this paper we present a framework for reproducible performance analysis of ML workloads.
|
| 26 |
+
|
| 27 |
+
Our contribution is a containerized profiling framework, called SURFBoard, that operates on all modern container-enabled large-scale HPC infrastructure. In this paper we focus on performance analysis for computer vision DL workflows. However, our work is modular and highly configurable, and can thus be adapted to more general ML pipelines. As a consequence, the user can extend it with other profiling tools next to our current toolkit: PyTorch [41], NVIDIA DALI, Horovod [33], OpenMPI [20], TAU [44], NVProf [10]. Using this toolset, we show that we can perform performance analysis at arbitrary levels of the software stack. Users can easily answer questions such as: what is the training-time scalability?, what parameters affect batch duration?, or what are the most time-consuming CUDA kernels per batch?
|
| 28 |
+
|
| 29 |
+
In helping practitioners perform reproducible performance analysis on complex ML/DL workflows, we show that SURFBoard is able to capture complex performance behavior on two different production GPU-enabled clusters. Our experiments focus on implementing typical, real-world analyses that practitioners use to search for bottlenecks and inefficiencies in their training workflows. To enable reproducible performance evaluation of ML workflows on large-scale HPC infrastructure, our contributions are:
|
| 30 |
+
|
| 31 |
+
1. We present the design and implementation of SURFBoard, a containerized profiling framework for ML workloads. To benefit the community, we open source^2 our work as well as the visualization notebooks and collected performance datasets (Section 2).
|
| 32 |
+
|
| 33 |
+
|
| 34 |
+
|
| 35 |
+
Figure 1: SURFBoard in the typical ML Pipeline life-cycle.
|
| 36 |
+
|
| 37 |
+
2. To validate our work, we present a case study of performance analysis for DL workloads on two large-scale GPU-enabled clusters. We provide an in-depth experiment design showcasing the features of our profiling framework when running the typical performance analyses that practitioners perform when benchmarking their DL workflows (Sections 3-4).
|
| 38 |
+
|
| 39 |
+
3. We present an in-depth performance analysis on the two clusters using real-world open-source frameworks, workloads, and datasets. We show the portability and reproducibility of our results, discuss the main findings of our experiments, and show how practitioners can use SURFBoard to identify bottlenecks and performance issues (Section 5).
|
| 40 |
+
|
| 41 |
+
§ 2 SURFBOARD: CONTAINERIZED PROFILING WORKFLOW DESIGN
|
| 42 |
+
|
| 43 |
+
In this section we describe in detail the design and implementation of our containerized profiling framework, SURFBoard. Our work is able to integrate with any ML or DL framework running on high-end HPC infrastructure. We show how we integrate our profiling workflow with state-of-the-art ML software stacks, such as PyTorch, OpenMPI, DALI,^3 or Horovod [43]. We show how ML workflows can be reproduced and ported, through containerization, to large-scale infrastructures that differ in software and hardware, and how users can perform sweeps over important parameter spaces.
|
| 44 |
+
|
| 45 |
+
Figure 1 illustrates the typical life-cycle of an ML pipeline. The pipeline consists of training scripts and possibly container definitions for training a specific ML model, such as ResNet-50. Starting from the initial pipeline, profiling is performed at scale, using some representative dataset, and the pipeline is optimized until performance meets some user-defined criteria. SURFBoard helps automate this part of the life-cycle. Subsequently, the optimized ML pipeline can be deployed to train ML models many times, on various datasets.
|
| 46 |
+
|
| 47 |
+
^2 https://anonymous.4open.science/r/1116c4aa-7342-45d2-b03a-e602e387cd3b/
|
| 50 |
+
|
| 51 |
+
^3 https://github.com/NVIDIA/DALI
|
| 52 |
+
|
| 53 |
+
|
| 54 |
+
|
| 55 |
+
Figure 2: Software stack for profiling ML workflows.
|
| 56 |
+
|
| 57 |
+
In most cases, profiling starts with the user requesting a node allocation from a job orchestrator, e.g., SLURM, and subsequently executing the required training scripts. Developers and practitioners may want to perform a sweep over the possible parameter space: input dataset, number of I/O workers, number of GPUs per node, batch size, gradient precision, etc. To enable this, we implemented a Sweep Orchestrator driven by a set of YAML configuration files. The orchestrator utilizes an application-specific Sweep Plugin to generate a set of command-line invocations of the training script inside a containerized environment, using MPI for inter-process communication.
|
| 58 |
+
|
| 59 |
+
To achieve portability and performance reproducibility over an extensive set of large-scale infrastructures, differing in both hardware and software deployments, we created a containerized environment that can run state-of-the-art ML training stacks on most (high-end) HPC infrastructures. One of the most challenging issues in performance reproducibility is achieving similar setups on different infrastructure; our containerized approach to ML training performance analysis solves this.
|
| 60 |
+
|
| 61 |
+
Figure 2 illustrates the software stack involved in executing profiling experiments with our workflow. The stack is roughly divided into two sections, consisting of code running in the host environment and code executed inside a Docker or Singularity container. The container is configured to provide communication and profiling infrastructure in addition to the training capability of PyTorch and associated libraries for neural network training. The specific capabilities of the container are:
|
| 62 |
+
|
| 63 |
+
1. Training with PyTorch, optionally using the NVIDIA Data Loading Library, DALI. Compared to PyTorch's built-in dataloader library, Torchvision, DALI offers multiple advantages: input from folders of images or TensorFlow Records (TFRecords), various levels of GPU offloading of the preprocessing, and advanced profiling. We provide a custom-built version of DALI with NVTX annotations enabled, which allows GPU profilers to inspect the details of preprocessing-related computation. It is important to note here that our entire pipeline is configurable, and the user can add different training frameworks, like TensorFlow [1].
|
| 64 |
+
|
| 65 |
+
2. Communication over OpenMPI, which can be leveraged from PyTorch either through PyTorch's built-in native multi-node training module, PyTorch DDP, or through Horovod. We include Horovod specifically for its capability for explicit gradient quantization, i.e., casting the PyTorch-computed FP32 gradients to FP16 before invoking MPI to perform the all-reduce for data-parallel training (see the sketch after this list). We configure Horovod to perform all its operations with MPI even when transferring from GPU memory, which enables MPI profilers to inspect the traffic.
|
| 66 |
+
|
| 67 |
+
3. Profiling at multiple levels, targeting compute and communication. The MPI profiler leveraged in this work is TAU [44], used for detailed communication profiling. NVProf and NSys are leveraged for detailed compute-kernel profiling on the GPU, and PyProf for linking GPU profiling data to the Python execution graph and neural network model.
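To make the Horovod configuration in item 2 concrete, the following is a minimal sketch of fp16 gradient quantization in a PyTorch training script. The model and learning rate are placeholders, and routing GPU tensors through MPI rather than NCCL is typically selected when Horovod is built (e.g., via its HOROVOD_GPU_ALLREDUCE build option).

```python
import torch
import horovod.torch as hvd

hvd.init()                                  # one MPI rank per GPU
torch.cuda.set_device(hvd.local_rank())

model = torch.nn.Linear(2048, 1000).cuda()  # placeholder for ResNet-50
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Cast FP32 gradients to FP16 before the all-reduce, as described above.
optimizer = hvd.DistributedOptimizer(
    optimizer,
    named_parameters=model.named_parameters(),
    compression=hvd.Compression.fp16,
)
# Start all ranks from identical weights.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
```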
|
| 68 |
+
|
| 69 |
+
We stress that the tools used in our work are merely examples of what practitioners can achieve with a containerized framework for reproducible performance evaluation. In fact, all three layers of our containerized design are highly configurable, and other types of tools can be integrated. For example, one can gather CPU performance counters using PAPI [55], or use VAMPIR [31] instead of TAU.
|
| 70 |
+
|
| 71 |
+
§ 2.1 SWEEP ORCHESTRATOR
|
| 72 |
+
|
| 73 |
+
We denote a profiling sweep as the set of profiling experiments that traverses all possible application parameter combinations. As this set can be relatively large and varies between neural networks, we provide experiment orchestration infrastructure based on Hydra,^4 designed to enable the automatic generation of command-line invocations for a profiling sweep. Each experiment command follows the structure:

    mpirun <MPI args> singularity exec <singularity args> <profiler args> main.py <application args>
|
| 74 |
+
|
| 75 |
+
The orchestrator defines a base Python class, Experiment, initialized with the complete set of possible experiment parameters, as well as the necessary abstract functions for parameter combination filtering and command string generation (a sketch follows the list below). For non-numeric parameters, the orchestrator defines Enums which constrain the parameter values. The possible application parameters are defined in Table 1 and divided into four classes corresponding to the four types of CLI arguments in the command structure:
|
| 76 |
+
|
| 77 |
+
1. Scale and infrastructure parameters, which manifest in the MPI arguments, include the number of nodes, GPUs per node, CPUs per process, and the networking fabric utilized for communication (InfiniBand or Ethernet).
|
| 78 |
+
|
| 79 |
+
2. Container parameters include, for example, the Singularity container image itself (a SIF file), as well as the locations of the training data and the TFRecord index, if needed. These are mounted into the Singularity container at pre-defined mount points.
|
| 80 |
+
|
| 81 |
+
^4 https://github.com/facebookresearch/hydra
|
| 82 |
+
|
| 83 |
+
| Parameter | Possible Values in YAML Config | High-level Description |
|---|---|---|
| Ranks | Numeric | Total number of MPI processes executed |
| GPUs per Node | Numeric | Processes allowed per node, each of which maps to one GPU |
| Profile Level | none, tau_exec, tau_python, tau_python_cuda, nsys, nvprof | No profiling; use TAU for MPI, Python, and GPU profiling, respectively; use NVIDIA tools for GPU profiling |
| Network Backend | ib, eth | Data transfer fabric (InfiniBand or Ethernet) |
| Repetitions | Numeric | Number of runs of each experiment configuration |
| Gradient Precision | fp16, fp32 | Precision of the communicated gradients |
| Compute Precision | fp16, fp32, mixed | Precision of compute |
| Batch Size per GPU | Numeric | Size of per-GPU batches |
| Data Loader | pytorch | Use Torchvision dataloader |
| | dali-gpu | Perform all preprocessing on GPU |
| | dali-cpu-to-gpu | Perform all preprocessing on CPU, then move data to GPU |
| Data Format | folder | Use compressed images in specified folder |
| | tfrecord | Use pre-packaged TensorFlow Records |
| Workers | Numeric | Number of preprocessing worker threads |
| Communication Backend | Horovod [43] | Gradient synchronization framework |

Table 1: Typical parameters that DL practitioners consider in their performance evaluations and their high-level description.
|
| 135 |
+
|
| 136 |
+
3. Profiler parameters define the profiler to be utilized as well as any parametrization of the profiler. Profiler options are TAU, TAU-Python (with optional CUDA support), or NVProf/NSys CUDA profilers.
|
| 137 |
+
|
| 138 |
+
4. Training parameters configure the NN training flow itself: input data format (folder of images or TFRecords), data loader (Torchvision, DALI CPU, DALI GPU), compute precision (fp32, fp16, or mixed), gradient precision (fp32 or fp16), and distribution backend (Horovod).
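As a minimal sketch of the orchestrator's core abstraction (all class, field, and Enum names below are illustrative, not the exact implementation):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum

class DataLoader(Enum):
    # non-numeric parameters are constrained by Enums
    PYTORCH = "pytorch"
    DALI_GPU = "dali-gpu"
    DALI_CPU_TO_GPU = "dali-cpu-to-gpu"

@dataclass
class Params:
    ranks: int
    batch_size: int
    workers: int
    gradient_precision: str   # "fp16" | "fp32"
    data_loader: DataLoader

class Experiment(ABC):
    """Base class from which new training flows inherit."""

    def __init__(self, params: Params):
        self.params = params

    @abstractmethod
    def is_legal(self) -> bool:
        """Filter out parameter combinations the training flow cannot run."""

    @abstractmethod
    def cmd(self) -> str:
        """Assemble: mpirun <...> singularity exec <...> main.py <...>"""
```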
|
| 139 |
+
|
| 140 |
+
§ 2.2 IMPLEMENTING NEW TRAINING WORKFLOWS
|
| 141 |
+
|
| 142 |
+
We offer the user the possibility to define new types of experiments to enable novel training workflows while retaining the capability of achieving portable and reproducible performance evaluation. To plug a new training flow into the orchestrator, the user must define a new class inheriting from Experiment and implement the cmd() method, which produces a command from the relevant parameters, as well as the is_legal() method, which filters out parameter combinations that the target training flow cannot implement.
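Continuing the hypothetical sketch above, a new training flow could plug in roughly as follows (the legality rule and CLI flags are purely illustrative):

```python
class ResNet50Experiment(Experiment):
    def is_legal(self) -> bool:
        p = self.params
        # Purely illustrative rule: cap CPU workers for GPU-side preprocessing.
        return not (p.data_loader is DataLoader.DALI_GPU and p.workers > 8)

    def cmd(self) -> str:
        p = self.params
        return (
            f"mpirun -np {p.ranks} "
            "singularity exec --nv train.sif "
            "python main.py "
            f"--batch-size {p.batch_size} --workers {p.workers} "
            f"--data-loader {p.data_loader.value} "
            f"--grad-precision {p.gradient_precision}"
        )
```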
|
| 143 |
+
|
| 144 |
+
At run-time, the orchestrator leverages Hydra to assemble a sweep configuration from YAML files as follows: a main configuration file defines which parameters have constant values across all runs in a sweep and which cycle through multiple values. The possible values of non-constant parameters are defined in a second YAML configuration file. Hydra assembles the information about parameter values, calculates possible parameter value combinations as the Cartesian product of possible values for each parameter, checks the legality of the parameter combination using is_legal( ), and executes the command produced by cmd( ) from the specified parameter values. To define a new sweep, the user only needs to define the two YAML configuration files.
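The semantics of this assembly can be sketched as follows, continuing the hypothetical classes above (Hydra performs this wiring in the actual framework; the sweep space shown is an invented example of what the second YAML file encodes):

```python
import itertools
import subprocess

# Hypothetical sweep space: one value list per non-constant parameter.
space = {
    "ranks": [16, 32, 64, 96],
    "batch_size": [32, 64],
    "workers": [2, 8],
    "gradient_precision": ["fp16", "fp32"],
    "data_loader": [DataLoader.DALI_GPU, DataLoader.DALI_CPU_TO_GPU],
}

# Cartesian product over all parameter value lists.
for combo in itertools.product(*space.values()):
    exp = ResNet50Experiment(Params(**dict(zip(space, combo))))
    if not exp.is_legal():        # skip infeasible combinations
        continue
    subprocess.run(exp.cmd(), shell=True, check=True)
```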
|
| 145 |
+
|
| 146 |
+
§ 2.3 OPEN SOURCE COMMITMENT
|
| 147 |
+
|
| 148 |
+
Our work is implemented in Python. Development took approximately 6 person-months, most of which was spent debugging the various components of the deep software stack illustrated in Figure 2, setting up the NVTX profiling infrastructure, and ensuring portability of the containers and experiment orchestrator between various systems.
|
| 149 |
+
|
| 150 |
+
The experimental data gathering and visualization took another 6 person-months. We release both the source code of our reproducible performance analysis framework and all the performance data and visualization scripts. Since the start of the project in 2019, approximately 50,000 core-hours were spent on debugging and initial framework calibration, while 300,000 core-hours were utilized on the Cartesius cluster and 50,000 core-hours on the LISA cluster (see Section 4 for an in-depth description of these clusters) for running profiling experiments. In the following sections of this paper we present our validation of SURFBoard using a case study of large-scale training experiments on two production HPC infrastructures.
|
| 151 |
+
|
| 152 |
+
§ 3 CASE STUDY: EXPERIMENT DESIGN
|
| 153 |
+
|
| 154 |
+
In this section we describe in detail the design of the experiment we perform to validate SURFBoard. We seek to show empirically that our performance analysis framework adheres to reproducibility standards and is able to help ML practitioners answer valuable questions about the performance of ML training workflows. We describe the high-level goals of our experiment, which are typical questions an ML practitioner would ask when assessing the performance of an ML workflow. We focus on the methodology for performing the performance analysis of the high-level goals, and we describe in detail the DL model used in our study.
|
| 155 |
+
|
| 156 |
+
§ 3.1 CASE STUDY HIGH-LEVEL GOALS
|
| 157 |
+
|
| 158 |
+
The goal of this profiling exercise is to evaluate compute and communication efficiency for data-parallel distributed training of ResNet50 on SURFSara infrastructure, and to quantify the contribution of each training pipeline stage (batch preprocessing, training, communication) to the total runtime, under various configurations of each stage. Furthermore, we wish to construct a performance model enabling performance extrapolation. We separate this goal into four sub-objectives:
|
| 159 |
+
|
| 160 |
+
1. Scalability. We aim to determine the effect of various training configurations on the scalability of training, up to the maximum sizes permitted by the hardware. We measure the scaling efficiency itself, but also how each configuration option affects the scaling efficiency at each scale.
|
| 161 |
+
|
| 162 |
+
2. Computation Efficiency. We measure each stage in the training process (forward, backward, and model update) as well as the total batch duration, and calculate the overall compute efficiency and the overall memory bandwidth efficiency achieved by the GPU.
|
| 163 |
+
|
| 164 |
+
3. Preprocessing Computation. We compare CPU and GPU preprocessing via total application run-time in order to determine the effect of the number of preprocessing workers and of preprocessing offload on run-time at various scales.
|
| 165 |
+
|
| 166 |
+
4. Adhering to Reproducibility Standards. We seek to determine whether our performance analysis framework is able to run and achieve significant results on multiple types of infrastructure. We compare the results of our framework on two different large-scale production HPC systems. The practical details of these systems are detailed in Section 4.
|
| 167 |
+
|
| 168 |
+
§ 3.2 PERFORMANCE ANALYSIS METHOD
|
| 169 |
+
|
| 170 |
+
1. Important Parameters. When performing DL performance analysis, practitioners usually focus on several important parameters. In this study, we consider the impact of the following parameters: gradient and compute precision, size of the batches, data loader, and number of workers. These parameters are described and explained in Table 1.
|
| 171 |
+
|
| 172 |
+
2. Scalability. To study scalability we measure both the duration of the experiments at various scales and the scaling efficiency (SE) for $N$ GPUs. The duration of one experiment is measured, using the data from TAU, as the duration of the top-level .TAU application timer. Since the application runs on several GPUs, the maximum duration over all GPUs is used as the final measure. The scaling efficiency is measured as the ratio between the experiment duration using a baseline number of GPUs (e.g., one GPU) and the duration of the same experiment using $N$ GPUs:
|
| 173 |
+
|
| 174 |
+
$$
SE_N = \frac{t_{\text{baseline}}}{t_N}
$$
|
| 177 |
+
|
| 178 |
+
3. Efficiency. To study the computational efficiency, we perform a deeper analysis by measuring the batch duration and the duration of the three stages of each training iteration: forward pass, backward pass, and parameter update. During the forward pass, the DNN predicts the labels associated with each image in the input batch, and an error is calculated by comparing the known correct labels with the predicted ones. In the backward pass, the error gradients are calculated and propagated through each network layer. Finally, the gradients are used to update each network parameter so as to minimize the error.
|
| 179 |
+
|
| 180 |
+
We use NVProf along with NVTX annotations to delimit the previous stages. We also calculate the overall compute efficiency of the GPU using the CUDA-kernel-level data from PyProf. The number of floating-point operations per second is measured as the ratio of the sum of the FLOPs over the sum of the durations of each kernel. To obtain the compute efficiency of the GPU, we divide the measured value by the theoretical value of the given GPU. Similarly, the memory bandwidth efficiency is computed as the ratio of the measured bandwidth of the GPU (total number of bytes in and out of the GPU over the sum of the durations of each kernel) to its theoretical bandwidth.
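As a minimal sketch of this computation, assuming per-kernel records with hypothetical field names extracted from PyProf output (the peak figures are the K40m numbers used in Section 5.3; the kernel values are made up for illustration):

```python
# Per-kernel records: duration, floating-point ops, and bytes moved.
kernels = [
    {"duration_s": 0.2597, "flops": 1.8e11, "bytes": 1.3e10},
    {"duration_s": 0.2395, "flops": 1.6e11, "bytes": 1.2e10},
]

PEAK_FLOPS = 5.046e12   # NVIDIA Tesla K40m, fp32, FLOP/s
PEAK_BW = 288.4e9       # NVIDIA Tesla K40m, bytes/s

total_time = sum(k["duration_s"] for k in kernels)
achieved_flops = sum(k["flops"] for k in kernels) / total_time
achieved_bw = sum(k["bytes"] for k in kernels) / total_time

print(f"compute efficiency:   {achieved_flops / PEAK_FLOPS:.2%}")
print(f"bandwidth efficiency: {achieved_bw / PEAK_BW:.2%}")
```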
|
| 181 |
+
|
| 182 |
+
4. Sensitivity Analysis. We study the impact of the configuration parameters on the scaling efficiency of the application using Taguchi Methods [50]. The goal of such methods is to reduce the number of experiments needed to determine which factor(s) impact a predetermined target variable the most. We do not use the method to design our experiment, but only to evaluate parameter importance given our experimental results. To this end, for a given experiment, we use the signal-to-noise ratio (SN), defined as follows for the Taguchi Methods:
|
| 183 |
+
|
| 184 |
+
$$
SN = -10 \log \left( \frac{1}{N} \sum_{i=1}^{N} \frac{1}{y_i^2} \right)
$$
|
| 187 |
+
|
| 188 |
+
where $N$ is the number of repetitions of the given experiment and $y_i$ is the target variable value for repetition $i$ of the experiment.
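A small sketch of this larger-is-better SN ratio and of the range-based parameter ranking used in Section 5.4 follows; the data and the use of the base-10 logarithm are assumptions for illustration.

```python
import math
from collections import defaultdict

def sn_ratio(ys):
    """SN = -10 log10( (1/N) * sum(1 / y_i^2) ) over repetitions y_1..y_N."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

# Scaling efficiencies per (parameter, level); values are illustrative.
runs = {
    ("gradient_precision", "fp16"): [0.97, 0.96, 0.98],
    ("gradient_precision", "fp32"): [0.91, 0.90, 0.92],
    ("workers", 2): [0.93, 0.94, 0.95],
    ("workers", 8): [0.95, 0.96, 0.94],
}

sn_by_param = defaultdict(list)
for (param, _level), ys in runs.items():
    sn_by_param[param].append(sn_ratio(ys))

# The larger the SN range across a parameter's levels, the more impactful it is.
for param, sns in sorted(sn_by_param.items(),
                         key=lambda kv: max(kv[1]) - min(kv[1]), reverse=True):
    print(param, round(max(sns) - min(sns), 3))
```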
|
| 189 |
+
|
| 190 |
+
§ 3.3 DL MODEL USED IN THE STUDY
|
| 191 |
+
|
| 192 |
+
We perform our experiments on a state-of-the-art, industry-standard model and dataset: ResNet50 v1.5. We use training scripts implemented by NVIDIA as part of their state-of-the-art reference examples in PyTorch [39]. The model and training scripts are configured for image classification with the ImageNet dataset.
|
| 193 |
+
|
| 194 |
+
Compared to the original definition [23], ResNet50 v1.5 has stride 2 in the $3 \times 3$ convolutions, rather than in the $1 \times 1$ convolutions, in the bottleneck blocks that require downsampling. This comes at a small increase in computational cost, but is beneficial in terms of accuracy. This modification was first introduced in a Lua Torch re-implementation of ResNet from Facebook [18], and has since been widely adopted. A more detailed overview of ResNet variants can be found in [24], where ResNet v1.5 is referred to as ResNet-B.
|
| 195 |
+
|
| 196 |
+
NVIDIA's scripts provide an implementation that is highly tuned both in terms of hyper-parameters and final accuracy, as well as training-time performance. As such, it is more representative of the current state of the art than PyTorch's reference ImageNet training implementation [42].
|
| 197 |
+
|
| 198 |
+
With respect to performance, NVIDIA's implementation tightly integrates with the DALI library for data loading and pre-processing. DALI has multiple advantages over PyTorch's own dataloaders: it supports reading input data stored in the TensorFlow TFRecord format, which we leverage as part of our setup, and it provides partially GPU-accelerated JPEG decoding and end-to-end GPU-accelerated preprocessing for the ImageNet dataset. With respect to accuracy, the scripts implement all the strategies described in [24], which together push the top-1 ImageNet accuracy to around 78.4%.
|
| 199 |
+
|
| 200 |
+
It is important to note that the DL model described above is used as an example for our validation study and to showcase the capabilities of the framework and the types of analysis integrated within it. The framework is designed with extensibility as a core principle and will require some effort from ML practitioners and developers to integrate their own models. The focus of this research is to highlight the approach and the methodology rather than the results themselves on a specific DL model.
|
| 201 |
+
|
| 202 |
+
§ 4 EXPERIMENT SETUP
|
| 203 |
+
|
| 204 |
+
For our experiment we target two production-grade distributed infrastructures. The two large-scale HPC clusters we run our experiments on are LISA^5 and Cartesius,^6 specifically their GPU islands. Note that both the hardware and software stacks of the two systems are highly different. We show empirically that the containerized performance analysis workflow we propose is portable and produces reproducible results that can be compared between the two. Note that these two types of production-ready clusters are comparable to what ML and DL practitioners use in practice to deploy training workflows. We have not included more clusters because we focus on showcasing the methodology and approach of SURFBoard rather than the results themselves. We encourage the HPC community to expand and validate this approach on more infrastructures.
|
| 205 |
+
|
| 206 |
+
| Software / Library | Version |
|---|---|
| PyTorch | 1.2.0 |
| Python | 3.6 |
| CUDA | 10.0 |
| DALI | 0.18.0 |
| TAU | 2.28.1 |
| PyProf | 3.6.0 |
| CuDNN | 7 |
| OS | Ubuntu 18.04 |

Table 2: Software versions for the container environment.
|
| 237 |
+
|
| 238 |
+
| Parameter | Cartesius | LISA |
|---|---|---|
| Nodes | 8, 16, 32, 48 | 1, 4, 8 |
| GPUs per Node | 2 | 4 |
| Gradient Precision | fp16, fp32 | fp16, fp32 |
| Compute Precision | fp32 | fp32 |
| Batch Size per GPU | 32, 64 | 32, 64 |
| Data Loader | dali-gpu, dali-cpu-to-gpu | dali-gpu, dali-cpu-to-gpu |
| Workers | 2, 8 | 2, 4 |

Table 3: Parameters considered for the experiment on both the Cartesius and LISA systems.
|
| 266 |
+
|
| 267 |
+
§ 4.1 CARTESIUS HARDWARE SPECIFICATION
|
| 268 |
+
|
| 269 |
+
The Cartesius GPU island consists of 66 Bullx B515 processing nodes. Each node is equipped with a 16-core Intel E5-2450 v2 CPU (Ivy Bridge microarchitecture) operating at 2.5 GHz, and 96 GB of memory. Each node is also equipped with two K40m GPUs and two Mellanox ConnectX-3 InfiniBand adapters, with a maximum throughput of 56 Gbps each. For our experiments we utilized up to 48 nodes. Cartesius is maintained with RedHat 4.8.5-39, Linux kernel version 3.10.0-1127.8.2.el7.x86_64. We used CUDA-enabled OpenMPI/3.1.2 to transfer data buffers directly between GPUs over the InfiniBand network.
|
| 270 |
+
|
| 271 |
+
§ 4.2 LISA HARDWARE SPECIFICATION
|
| 272 |
+
|
| 273 |
+
The LISA cluster consists of 25 GPU-accelerated nodes, each equipped with Intel Xeon Bronze 3104 CPUs (12-core, 1.7 GHz), 256 GB of memory, and four GPU accelerators, either NVIDIA GeForce 1080Ti or NVIDIA Titan V GPUs. The nodes are connected through 40 Gbps Ethernet. LISA is maintained with Debian GNU/Linux version 10 (buster). We used OpenMPI/3.1.4 for the multi-node scaling experiments.
|
| 274 |
+
|
| 275 |
+
^5 https://userinfo.surfsara.nl/systems/lisa
|
| 276 |
+
|
| 277 |
+
^6 https://userinfo.surfsara.nl/systems/cartesius
|
| 278 |
+
|
| 279 |
+
§ 4.3 SOFTWARE ENVIRONMENT INSIDE THE CONTAINER
|
| 280 |
+
|
| 281 |
+
Table 2 outlines the software environment inside the container for both LISA and Cartesius. This is one of the main advantages of using containers: the same software environment can be used on both HPC clusters, enabling reproducibility over many types of software and hardware infrastructure.
|
| 282 |
+
|
| 283 |
+
§ 4.4 ACHIEVING EMPIRICAL REPRODUCIBILITY
|
| 284 |
+
|
| 285 |
+
We profile the communication of the application with TAU on both Cartesius and LISA using combinations of the parameters presented in Table 3. In order to gather a statistically valid sample, we conduct at least 10 experiments for each combination of parameters, each of them run for a total of 50 batches. Since our analysis is performed by gathering metrics at batch level, the resulting 500 batches per configuration achieve statistical significance and are in line with current reproducibility standards [37, 40, 52]. In our figures, we present the median of a given metric over the 10 experimental runs along with the 95% confidence interval for the communication data. Additionally, we conduct GPU profiling on Cartesius, gathering 10 experiments for each combination of parameters in Table 3 for a total of 25 batches using NVProf, and 25 batches using NVProf with kernel profiling enabled via PyProf.
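One common way to obtain such a nonparametric confidence interval for the median is bootstrap resampling; the following is a minimal sketch under that assumption (the durations are made-up examples, not measured data):

```python
import random
import statistics

def median_ci(samples, n_boot=10_000, alpha=0.05, seed=0):
    """Nonparametric bootstrap 95% CI for the median of repeated runs."""
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(samples, k=len(samples)))
        for _ in range(n_boot)
    )
    lo = medians[int(n_boot * alpha / 2)]
    hi = medians[int(n_boot * (1 - alpha / 2))]
    return statistics.median(samples), (lo, hi)

# e.g., MPI_Allreduce durations (seconds) over 10 repeated runs
durations = [12.1, 11.8, 12.4, 12.0, 12.2, 11.9, 12.3, 12.1, 12.0, 12.5]
med, (lo, hi) = median_ci(durations)
print(f"median {med:.2f}s, 95% CI [{lo:.2f}, {hi:.2f}]")
```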
|
| 286 |
+
|
| 287 |
+
§ 5 RESULTS AND VISUALIZATION
|
| 288 |
+
|
| 289 |
+
In this section, we showcase results and visualizations of data that can be produced using the framework presented in this paper. We present the data from higher (experiment duration and communication) to lower (GPU efficiencies and CUDA kernels) levels of the software stack. The experiments we performed are typical analyses performed by DL practitioners, and the conclusions we draw can help practitioners build training infrastructure that is suitable to their workloads, identify bottlenecks, and identify the important parameters in their setups. Moreover, this kind of analysis shows that SURFBoard is useful in helping practitioners analyze the performance of their DL workflows in a reproducible manner, across multiple types of infrastructure.
|
| 290 |
+
|
| 291 |
+
Lessons Learned. The main lessons learned from analyzing the empirical experiments performed in our study are the following:
|
| 292 |
+
|
| 293 |
+
1. We confirm that PyTorch, when coupled with Horovod, achieves good scalability (>90%) for ResNet-50-like workloads; see Figures 3 and 4.
|
| 294 |
+
|
| 295 |
+
2. On infrastructure like LISA and Cartesius, where resources are not shared, there is not much overall performance variability, especially in the MPI collective operations; see (for example, the whiskers in) Figures 5 and 6.
|
| 296 |
+
|
| 297 |
+
3. The computation throughput of ResNet-50-like workloads is neither memory-bound nor compute-bound. The bottlenecks lie in waiting for remote data from other GPUs to be transferred by the DL framework; see Section 5.3.
|
| 298 |
+
|
| 299 |
+
4. On some types of machines, the CPU-to-GPU ratio is important, as the GPUs need to be fed data quickly enough to achieve good performance. Our framework can be used by system designers to detect bottlenecks like these and areas of improvement. Users could perform similar steps of analysis using their own workloads to determine an appropriate ratio of CPUs to GPUs, see Section 5.4.
|
| 300 |
+
|
| 301 |
+
5. The number of preprocessing workers matters more than the batch size on LISA; however, the opposite is true on architectures like Cartesius. Practitioners should perform similar analyses to decide which parameters are most important in their workloads and how performance could be improved using this knowledge; see Section 5.4.
|
| 302 |
+
|
| 303 |
+
6. SURFBoard can be deployed on two different large-scale HPC infrastructures. It can further help practitioners identify behavioral differences between large-scale infrastructures and the deployment parameters that cause them.
|
| 304 |
+
|
| 305 |
+
§ 5.1 SCALABILITY
|
| 306 |
+
|
| 307 |
+
Execution Time. We measure the duration of the experiments as the total runtime of the TAU-instrumented application on a GPU involved in the computation. Experiment durations for both Cartesius and LISA are presented in Figures 3 and 4, respectively. For both systems, the duration scales linearly with the number of GPUs. This is likely due to the communication overhead becoming increasingly significant for higher numbers of GPUs. On Cartesius, experiments using fp32 gradients all take longer than the ones with fp16 gradients; the behavior is the opposite on LISA.
|
| 308 |
+
|
| 309 |
+
Since gradient casting requires additional CPU cycles, and the GPU-to-CPU throughput ratio on LISA is higher than on Cartesius, we hypothesize that this difference in behavior between the two systems is caused by a CPU-induced bottleneck on LISA. These results illustrate the utility of our framework to system designers; e.g., our results would indicate provisioning more CPUs on LISA as a relatively inexpensive way to increase DL performance. Alternatively, gradient casting could be offloaded to the GPU or some other accelerator to relieve the bottleneck.
|
| 310 |
+
|
| 311 |
+
|
| 312 |
+
|
| 313 |
+
Figure 3: Duration and scaling efficiencies of 50-batch experiments on Cartesius with different configurations. The experiments depicted use 32 images per batch. The legend reads as follows: <number of workers>w, <preprocessing>, <gradient precision>. Note: the vertical axis does not start at 0, for better visibility.
|
| 314 |
+
|
| 315 |
+
|
| 316 |
+
|
| 317 |
+
Figure 4: Duration and scaling efficiencies of 50-batch experiments on LISA with different configurations. The experiments depicted use 32 images per batch. The legend reads as follows: <number of workers>w, <preprocessing>, <gradient precision>. Note: the vertical axis does not start at 0, for better visibility.
|
| 318 |
+
|
| 319 |
+
Scaling efficiency. We compute and present the scaling efficiencies of the application for both Cartesius and LISA in Figures 3 and 4, respectively. The efficiencies are computed as the ratio of the duration of the given experiment to that of the baseline experiment. For the experiments on Cartesius, the baseline was taken as the experiment using 8 nodes (16 GPUs), whereas the baseline on LISA was taken as the experiment using 1 node (4 GPUs). We used different baselines for computing scalability to show the flexibility of our framework, and to model practitioners' behavior when scaling DL computations: for large-scale training, using few resources is too time-consuming and early scale-out is needed. Our experiments also show that using fp32 gradients results in lower scaling efficiency, while using CPU preprocessing gives higher scaling efficiency on both systems.
|
| 320 |
+
|
| 321 |
+
§ 5.2 COMMUNICATION
|
| 322 |
+
|
| 323 |
+
Our framework uses TAU to profile communication through several MPI call metrics. Figures 5 and 6 present the sum of the durations of MPI_Allreduce, MPI_Bcast, and MPI_Gather across all GPUs. All of these are considered extremely important for DL performance by practitioners. It is also possible to gather other metrics, such as the number of MPI calls and the total volume of messages sent across GPUs; see Figures 7, 8, 9, and 10. MPI_Allreduce is used during the model update phase to share the gradients of each weight of the neural network. ResNet50 has approximately 23 million parameters. The experiment runs on each GPU for 50 batches, and the gradients are stored using 2 or 4 bytes (half- or full-precision floating-point format). As a consequence, we expect the total volume of messages exchanged for MPI_Allreduce to be:
|
| 324 |
+
|
| 325 |
+
$$
\text{Volume} = N_{\text{weights}} \times \text{grad\_prec} \times N_{\text{batches}} \times N_{\text{GPU}},
$$
|
| 328 |
+
|
| 329 |
+
where grad_prec is the number of bytes used to store each gradient (4 bytes for fp32 gradients), $N_{\text{batches}}$ is the number of batches (50), and $N_{\text{GPU}}$ is the number of GPUs. The total volume of messages exchanged for MPI_Allreduce, presented in Figures 9 and 10, is in line with the expected volume. Using similar analyses, practitioners can identify bottlenecks or misbehavior at the MPI collective operation and networking layers when performing large-scale training.
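As a small worked example of this expected volume, using the numbers above:

```python
# Expected MPI_Allreduce volume for our setup: ResNet50 has ~23M parameters,
# each experiment runs 50 batches, and fp32 gradients take 4 bytes each.
N_WEIGHTS = 23_000_000
GRAD_PREC = 4          # bytes per gradient for fp32 (2 for fp16)
N_BATCHES = 50

for n_gpu in (16, 32, 64, 96):
    volume = N_WEIGHTS * GRAD_PREC * N_BATCHES * n_gpu
    print(f"{n_gpu:3d} GPUs: {volume / 1e9:.1f} GB expected across all ranks")
```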
|
| 330 |
+
|
| 331 |
+
|
| 332 |
+
|
| 333 |
+
Figure 5: MPI call durations on LISA for 2 workers, CPU preprocessing, 32 images per batch, fp32 gradients. Note: bars are the median values over 10 runs, whiskers the 95% confidence interval; the vertical axis is logarithmic; lower is better.
|
| 334 |
+
|
| 335 |
+
|
| 336 |
+
|
| 337 |
+
Figure 6: MPI call durations on Cartesius for 2 workers, CPU preprocessing, 32 images per batch, fp32 gradients. Note: bars are the median values over 10 runs, whiskers the 95% confidence interval; the vertical axis is logarithmic; lower is better.
|
| 338 |
+
|
| 339 |
+
§ 5.3 COMPUTE AND MEMORY EFFICIENCY
|
| 340 |
+
|
| 341 |
+
Batch duration. Our framework combines NVProf with NVTX annotations to delimit the training stages (forward, backward, update) and obtain more details about the training. The batch duration and training-stage durations can be visualized in Figure 11. We observe that the duration of the batches scales with the number of nodes/GPUs. In particular, the backward phase of the training, which includes gradient synchronization over InfiniBand, takes longer for larger numbers of GPUs, whereas the forward and update phases stay constant. We also note the increased variability in batch duration at larger scales, caused by corresponding variability in the time required for gradient synchronization over the network fabric and by the effect of uncorrelated performance jitter between the GPU workers, which can have a variety of causes: OS scheduling, resource contention, garbage collection. Practitioners can use this type of analysis to decide which parts of the per-batch computation are bottlenecks or variable in performance.
|
| 342 |
+
|
| 343 |
+
|
| 344 |
+
|
| 345 |
+
Figure 7: Number of MPI messages exchanged across all GPUs on Cartesius for 2 workers, CPU preprocessing, 32 images per batch, fp32 gradients. Note: bars are the median values over 10 runs, whiskers the 95% confidence interval; the vertical axis is logarithmic; lower is better.
|
| 346 |
+
|
| 347 |
+
|
| 348 |
+
|
| 349 |
+
Figure 8: Number of MPI messages exchanged across all GPUs on LISA for 2 workers, CPU preprocessing, 32 images per batch, fp32 gradients. Note: bars are the median values over 10 runs, whiskers the 95% confidence interval; the vertical axis is logarithmic; lower is better.
|
| 350 |
+
|
| 351 |
+
GPU performance metrics. Using PyProf, we measure kernel-level data and compute the utilized memory bandwidth and compute capacity of the GPUs on Cartesius. We present the results in Table 4, along with the efficiencies relative to the theoretical performance of the specific GPU model we run experiments on. The NVIDIA Tesla K40m has a peak memory bandwidth of 288.4 GB/s and a theoretical fp32 compute performance of 5.046 TFLOP/s. Table 4 shows that both memory and compute efficiency are low (below 18%). This is due to the mismatch between the model implementation and the hardware we tested on. Given the low computed memory bandwidth and compute efficiencies, each below 20%, the application appears to be neither memory- nor compute-bound. Low GPU utilization (below 50%) is expected for deep learning frameworks, as noted in [12, 57]. Using this kind of analysis, practitioners can identify which parts of the GPU implementation represent bottlenecks.
|
| 352 |
+
|
| 353 |
+
|
| 354 |
+
|
| 355 |
+
Figure 9: Volume in bytes of MPI messages exchanged across all GPUs on Cartesius for 2 workers, CPU preprocessing, 32 images per batch, fp32 gradients. Note: bars are the median values over 10 runs, whiskers the 95% confidence interval; the vertical axis is logarithmic; lower is better.
|
| 356 |
+
|
| 357 |
+
|
| 358 |
+
|
| 359 |
+
Figure 10: Volume in bytes of MPI messages exchanged across all GPUs on LISA for 2 workers, CPU preprocessing, 32 images per batch, fp32 gradients. Note: bars are the median values over 10 runs, whiskers the 95% confidence interval; the vertical axis is logarithmic; lower is better.
|
| 360 |
+
|
| 361 |
+
CUDA kernels. PyProf also allows retrieving a detailed kernel summary containing, among other metrics, the time elapsed, the number of bytes in and out of the GPU, and the number of floating-point operations performed during the execution of a given kernel. Table 5 presents the duration of the 10 most used kernels during a single batch. According to [16], the top four kernels in Table 5 correspond to the backward-pass convolution, the forward-pass convolution, the fully connected layer (forward and backward), and element-wise addition, respectively. These results are in accordance with expectations given the structure of the ResNet50 DNN. Using this type of analysis, practitioners can visualize which GPU kernels take the most GPU computation time and identify potential bottlenecks.
|
| 362 |
+
|
| 363 |
+
|
| 364 |
+
|
| 365 |
+
Figure 11: Scaling of training-stage durations on Cartesius. Configuration presented: 2 workers, dali-cpu-to-gpu, 32 images per batch, fp32 gradients, fp32 compute. Lower is better.
|
| 366 |
+
|
| 367 |
+
| GB/s | Bandwidth efficiency | TFLOP/s | Compute efficiency |
|---|---|---|---|
| 51.16 | 16.23% | 0.6964 | 17.77% |

Table 4: NVIDIA Tesla K40m GPU performance measures on Cartesius for 8 nodes, 2 workers, CPU preprocessing, 32 images per batch, gradient fp32. Both efficiencies are computed with respect to the theoretical performance of the GPU.
|
| 377 |
+
|
| 378 |
+
§ 5.4 PARAMETER SENSITIVITY
|
| 379 |
+
|
| 380 |
+
To get a more detailed understanding of the effect of each parameter we considered in this work, we study the impact of the number of GPUs, number of workers, type of dataloader, batch size, and gradient precision on the scaling efficiency using Taguchi Methods, as described in Section 3.2. To do so, we compute the range of the SN ratio for each parameter based on the set of experiment configurations performed. Since the goal is to determine the ranking of the parameters for each system, we normalize the range of the SN ratio and present the order of importance in Figure 12a for Cartesius and Figure 12b for LISA. The larger the range of the SN ratio, the more impactful a parameter is. Disregarding the number of GPUs, which is obviously the most impactful parameter for scaling efficiency, on both systems the gradient precision is the second-most important parameter. Because the gradients are sent between nodes/GPUs, lower-precision gradients result in a lower overall volume of data exchanged via MPI and therefore better scalability of the application. The least impactful parameter on both systems is the dataloader. Interestingly, batch size and number of workers switch places in their effect on scaling efficiency on the two systems under analysis (note the different coloring scheme for number of workers and batch size in Figures 12a and 12b). Conceptually, batch size should be the more important parameter for scalability, as it directly correlates with the time spent computing and decreases the relative importance of communication in the total application time. This expectation is confirmed on Cartesius. On LISA, the relative under-provisioning of CPU compute with respect to GPU compute may cause the difference in the relative importance of the number of preprocessing workers to scalability, although it must be noted that the absolute differences between configurations in this respect are small (see Figure 4). Using this type of analysis, practitioners can perform in-depth studies of which parameters are most important for their DL training workflows and decide which other analyses to zoom in on to identify possible bottlenecks and areas of improvement.
|
| 381 |
+
|
| 382 |
+
| Kernel name | Time (s) | % total |
|---|---|---|
| cudnn::detail::wgrad_alg0_engine | 0.2597 | 15.37 |
| cudnn::detail::implicit_convolve_sgemm | 0.2395 | 14.17 |
| sgemm_sm35_ldg_nt_64x16x64x16x16 | 0.1679 | 9.935 |
| elementwise_kernel | 0.1522 | 9.009 |
| sgemm_largek_lds64 | 0.1332 | 7.882 |
| cudnn::detail::dgrad_engine | 0.1181 | 6.989 |
| sgemm_sm35_ldg_nn_64x16x64x16x16 | 0.1023 | 6.055 |
| cudnn::detail::dgrad_alg1_engine | 0.0933 | 5.521 |
| cudnn::detail::bn_bw_1C11_kernel_new | 0.0913 | 5.402 |
| cudnn::detail::bn_fw_tr_1C11_kernel_NCHW | 0.0415 | 2.457 |

Table 5: Duration and percent of total runtime of the 10 most used kernels during a single training batch on Cartesius for 2 workers, CPU preprocessing, 32 images per batch, gradient fp16.
|
| 419 |
+
|
| 420 |
+
§ 6 LIMITATIONS
|
| 421 |
+
|
| 422 |
+
We discuss the limitations and threats to validity of our work. The scope and goal of this work is to provide practitioners with a framework for the reproducible performance analysis of ML workloads, not to present an in-depth performance analysis and tuning of a given workload. Instead, we provide a case study on one workload and two different clusters, showcasing the reproducibility of our work and the differences in performance obtained on the two clusters. SURFBoard is extensible and can be tuned to accept novel workloads, ML frameworks, and container orchestration tools. To this end, we identify the following limitations of our work.
|
| 423 |
+
|
| 424 |
+
1. Single Model. We only validated our framework on the de-facto standard computer vision workload and dataset. This is a widely used workload in the ML community, and we chose it because the results we gathered can be compared by practitioners against their own. However, adding other models and datasets is feasible. We are working toward adding new models and open-sourcing our workflows and results.
|
| 425 |
+
|
| 426 |
+
2. PyTorch. Our case study and results are obtained only with PyTorch, which is widely used by practitioners. However, swapping PyTorch for TensorFlow in our workflows is only an implementation effort. All other steps and components can be re-used from our PyTorch proof-of-concept.
|
| 427 |
+
|
| 428 |
+
|
| 429 |
+
|
| 430 |
+
Figure 12: Ranking of the experimental parameters from most to least important for maximizing the scaling efficiency of the application on both systems. Note that what matters here is the ordering of the parameters, not their magnitudes, and that the two figures are not comparable in absolute values.
|
| 431 |
+
|
| 432 |
+
3. Difficulty of Extensibility. We have built SURFBoard with extensibility in mind, allowing each component to be replaced by others. However, we have not studied in depth, with independent programmers and practitioners, how easy it is to extend in practice. We plan to study this in the future by performing user studies that can help us improve our framework.
|
| 433 |
+
|
| 434 |
+
§ 7 RELATED WORK
|
| 435 |
+
|
| 436 |
+
In this section we discuss work related to our approach. We identify four main categories of related work: (i) performance reproducibility in large-scale systems; (ii) ML performance analysis; (iii) ML workflows and orchestration; and (iv) ML/DL benchmarks. We discuss each in detail and contrast with our own approach.
|
| 437 |
+
|
| 438 |
+
Performance reproducibility in large-scale systems. Performance reproducibility in large-scale systems is an elusive goal. There are no community-wide agreed-upon methodologies to achieve it, although several authors have addressed the topic. In HPC, Hoefler and Belli [26] propose a set of 12 principles for credible performance evaluation. In cloud computing, which is much more variable than HPC environments [37], Papadopoulos et al. [40] propose a set of methodological principles for reproducible performance evaluation. However, these seem insufficient, as Uta et al. [52] identify multiple types of variability which highly affect the reproducibility of results. In our work, we leverage the findings of such work and perform sanity checks on our results. We also adhere to reproducible methods for obtaining performance results, such as performing many repeated experiments (on different days), computing nonparametric confidence intervals for medians, and analyzing variability.
|
| 439 |
+
|
| 440 |
+
ML performance analysis. Due to its inherent computational requirements, ML performance analysis is currently of utmost importance. As outlined by Amodei and Hernandez [3], the amount of compute used to train the largest AI models increased exponentially with a 3.4-month doubling time, far outpacing Moore's law and resulting in a 300,000x increase between 2012 and 2018. Furthermore, Hernandez and Brown [25] estimate that algorithmic efficiency improved by a factor of 25x in the same period, leading to a 7.5-million-fold increase in the effective training compute available to the largest AI experiments. Dakkak et al. [14] propose the MLModelScope toolkit, which includes performance analysis along with model evaluation in a reproducible, containerized fashion. However, the toolkit targets non-distributed training only. Modern ML models are trained in a distributed fashion, using a variety of communication interconnects (e.g., NVLink, InfiniBand, OmniPath, Ethernet, PCIe) and employing different parallelization strategies. Awan et al. [6] aim to measure these characteristics and propose improvements to communication patterns, with visualization tools for HPC GPU-based clusters proposed by Kousha [32]. Distributed ML training performance and communication are also analyzed in [28] and [5]; however, there is no complete framework that allows reproducible performance analysis of ML workloads on modern distributed systems. With SURFBoard, we offer common ground for all these types of analyses to be performed in comparable settings and in a reproducible fashion.
|
| 441 |
+
|
| 442 |
+
ML workflows and orchestration. As depicted in Figure 1, machine learning workflows are composed of elements spanning a broad software stack, from efficient GPU kernel execution, CPU-GPU work partitioning, and efficient storage access to multi-node orchestration. The interaction between these elements often leads to complex systems producing results that are very challenging to reproduce, both from a numerical (i.e., model accuracy) perspective [27] and from a performance perspective [7]. Typical training workflows, such as the computer vision one presented in this work, include stages such as data preprocessing and augmentation [13], hyperparameter tuning [2, 36], or model interpretation [51]. The high computational complexity of the training process often requires the workflow to be executed in a distributed fashion, adding dependencies on distribution mechanisms such as Horovod [43] or PyTorch Distributed [35] and on orchestration tools such as Kubernetes or SLURM. SURFBoard offers practitioners an easy-to-use and configurable framework for gathering performance data from all the components of such training workflows.
|
| 443 |
+
|
| 444 |
+
ML/DL benchmarks. Benchmarking large-scale system behavior under diverse workloads like HPC and big data is a well-studied topic. Recently, with the highly increased interest in ML and DL workloads, several benchmarks [7, 14, 21, 30, 38, 54] have emerged to cover this need. However, as the field is still young, none of them has emerged as a community- or industry-wide de-facto benchmark. It is also unclear at the moment how easily these benchmarks can be ported to all possible infrastructure that runs ML/DL code. Our approach helps in this sense by enabling reproducible instances of these benchmarks to run on many types of large-scale infrastructure. Moreover, our containerized approach ensures an even playing field (i.e., common software infrastructure) to cut down on technology- and software-induced performance differences.
|
| 445 |
+
|
| 446 |
+
Overall, SURFBoard presents a more holistic approach to achieving performance reproducibility in large-scale systems running ML workloads. Even though SURFBoard is related to, and contains technologies from, each of the aforementioned categories, it is more than the sum of its components, as it is the first enabler of reproducible ML performance analysis at scale.
|
| 447 |
+
|
| 448 |
+
§ 8 CONCLUSION
Many large-scale software systems suffer from poor performance reproducibility. Machine learning training workflows are no exception, as their performance analysis is largely an expert-driven task, tightly coupled to the underlying physical and software ecosystems. This hinders productivity, knowledge sharing, and, ultimately, energy efficiency.

We presented our approach to supporting reproducible performance analysis for machine learning workflows through a containerized framework. This framework is able to run on many container-ready types of infrastructure, such as HPC clusters and clouds. Moreover, it is able to gather performance results at arbitrary levels in the software stack and is extensible, allowing more experienced users to add custom analyses.

We validated our framework through an empirical evaluation on two GPU-enabled, large-scale production HPC clusters with different software stacks. Our analysis shows that our framework is portable and is able to gather performance data ranging from high-level MPI metrics down to FLOP efficiency for CUDA kernels, as well as kernel-level data for each processing batch. For future work, we plan to extend our framework with more types of analysis tools, implement more workloads from state-of-the-art benchmarks, and evaluate them on more types of large-scale infrastructure.

papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/VdWaMgaTKtX/Initial_manuscript_md/Initial_manuscript.md
ADDED
@@ -0,0 +1,529 @@
# Serverless Computing: From An Application Developer’s Perspective

Anonymous authors

Paper under double-blind review

## Abstract

In the past few years, serverless computing has gained significant popularity and has become a go-to choice for deploying cloud applications and micro-services. Serverless computing, with its unique 'pay as you go' pricing model and key performance benefits over other cloud services, offers an easy and intuitive programming model to build cloud applications. In this model, a developer focuses on writing the code of the application while infrastructure management is left to the cloud provider, who is responsible for the underlying resources, security, isolation, and scaling of the application. Recently, a number of commercial and open-source serverless platforms have emerged, offering a wide range of features to application developers. In this paper, first, we present measurement studies demystifying various features and performance aspects of commercial and open-source serverless platforms that can help developers with deploying and configuring their serverless applications. Second, we discuss the distinct performance and cost benefits of serverless computing and present a set of potential applications that can leverage the performance, cost, or both aspects of serverless computing. Finally, we discuss future research directions for serverless computing and suggest building tools and technologies that would not only make serverless usage efficient but also accelerate serverless adoption.

## 1 Introduction

Serverless computing has emerged as a new paradigm that makes the cloud-based application development model simple and hassle-free. In the serverless model, an application developer focuses on writing code and producing new features without worrying about infrastructure management, which is left to the cloud provider. Serverless computing was first introduced by Amazon in 2014 as Amazon Lambda [1], and since then, other commercial cloud providers have introduced their own serverless platforms, such as Google Cloud Function (GCF) [18] from Google, Azure Function [12] from Microsoft, and IBM Cloud Function [19] from IBM. There are also several open-source projects like Apache OpenWhisk, Knative, OpenLambda, Fission, and others.

At the time of the inception of the Internet, applications were built and deployed using dedicated hardware acting as servers, which needed a high degree of maintenance and often led to under-utilization of resources [48, 49]. Moreover, adding/removing physical resources to scale to varying demand, and debugging an application, was a cumbersome task. Under-utilization of resources and the higher cost of maintenance led to the invention of new technologies like virtualization and container-based approaches. These approaches not only increased resource utilization but also made it easy to develop, deploy, and manage applications. Tools such as [48, 49, 61, 98] were built to help users orchestrate resources and manage the application. Although virtualization and container-based approaches lead to higher utilization of resources and ease of building applications, developers still have to manage and scale the underlying infrastructure of an application, i.e. virtual machines (VMs) or containers, despite the availability of a number of approaches that perform reactive or predictive scaling [36, 54, 73, 80, 87, 103]. To abstract away the complexities of infrastructure management and application scaling, serverless computing emerged as a new paradigm to build, deploy, and manage cloud applications. The serverless computing model allows a developer to focus on writing code in a high-level language (as shown in Table 1) and producing new features of the application, while leaving various logistical aspects like server configuration, management, and maintenance to the serverless platform [100].

Even though serverless computing has been around for only a few years, this field has produced a significant volume of research. This research addresses various aspects of serverless computing, from benchmarking/improving the performance of various serverless platforms/applications and porting new applications into a serverless model, to suggesting altogether new serverless platforms. As serverless computing is still an evolving field, there is a significant need for systematization of the knowledge (SoK), particularly from the perspective of an application developer. We believe that for an application developer, an ideal SoK paper should address three main aspects: 1) the current state of serverless platforms, e.g. performance and features, 2) what makes serverless computing ideal for certain classes of applications, and 3) future research directions for helping a developer leverage the full potential of serverless computing with her limited control over the serverless platform.

Previous SoK papers are written from the perspective of the service provider. Castro et al. [45] present an overview of serverless computing and discuss the serverless architecture, development, and deployment model. Hellerstein et al. and Jonas et al. [40, 60, 64] also provide an overview of serverless computing, and discuss potential challenges that a serverless provider should address for the popularization of serverless computing. Similarly, in [89], challenges and potential research directions for serverless computing are discussed. Eismann et al. [52] perform a systematic review of serverless applications and provide useful insights into the current usage of serverless platforms. Eyk et al. [97] give perspectives on how serverless computing can evolve and identify adoption challenges. Lynn et al. [72] give an overview of various features provided by popular serverless platforms. The aforementioned works take the perspective of a service provider and discuss the challenges and optimizations that it should introduce to improve and popularize the serverless platform.

Unlike previous work, in this paper, we take a closer look at the three aforementioned aspects of serverless computing from an application developer's perspective. We assess previous work related to measurements, performance improvement and porting of applications into the serverless computing model, and augment this with our own experimental results and insights. In this paper, we make the following contributions to the SoK:
- We categorize the decisions that an application developer can make during the life-cycle of an application into two categories, one-time decisions and online decisions, and discuss their performance and cost implications.

- We show that the quick provisioning time, on-demand scaling, and true "pay as you go" pricing model are key factors for serverless adoption for various classes of applications and discuss potential challenges.
- For future research directions, we propose research on building tools and strategies to tune serverless functions, decompose serverless applications, and use serverless in conjunction with other cloud services that application developers can leverage to reduce cost.
The rest of the paper is organized as follows. We first describe the serverless computing model and its important features (Section 2). Next, we look at various measurement studies that investigate different aspects of commercial and open-source serverless platforms (Section 3). Then we present an economic model of serverless computing, compare it with traditional Infrastructure-as-a-Service (IaaS), and identify suitable classes of applications that can leverage serverless computing for its performance/cost (Sections 4 & 5). Lastly, we discuss future challenges and research directions to make serverless adoption efficient and easy (Section 6).

## 2 Background

Serverless computing was initially introduced to handle less frequent and background tasks, such as triggering an action when an infrequent update happens to a database. However, the ease of development, deployment, and management of an application and the evolution of commercial and open-source serverless platforms have intrigued the research community to study the feasibility of the serverless computing model for a variety of applications [57, 76, 103, 104]. Moreover, there are systems whose aim is to help developers port their applications to a serverless programming model [93].
In a serverless computing model, a developer implements the application logic in the form of stateless functions (henceforth referred to as serverless functions) in a high-level language supported by the serverless platform (popular platforms are shown in Table 1). The code is then packaged together with its dependencies and submitted to the serverless platform. A developer can associate different triggers with each function, so that a trigger causes the execution of the function in a sandbox environment (mostly containers) with specified resources, i.e. memory, CPU-power, etc. The output of the serverless function is then returned as the response to the trigger. The serverless computing model is different from traditional dedicated servers or VMs in that these functions are launched only when the trigger is activated, while in the traditional model, the application is always running (hence the term "serverless").

Serverless computing abstracts away the complexities of server management in two ways. First, a developer only writes the logic of an application in a high-level language, without worrying about the underlying resources or having to configure servers. Second, in case the demand for an application increases, a serverless platform scales up the instances of the application without any additional configuration or cost and has the ability to scale back to zero (discussed in Section 3.3). On the contrary, in IaaS, an application developer not only has to specify the additional scaling policies but there can be an additional cost for deploying such autoscaling services.
Another important feature of the serverless computing model is that serverless platforms follow the "pay as you go" pricing model. This means a user will only pay for the time a serverless function is running. This model charges a user for the execution time of the serverless function based on the resources configured for the function. A user will not be charged for deploying the function or for idle times. Even though all of the cloud providers follow a similar pricing model, the price for the unit time (100 ms or 1 ms) of execution can vary significantly from one cloud provider to another. In Table 1, we show some of the key features of popular serverless platforms.

In the serverless computing model, the abstraction of infrastructure management comes at the cost of little to no control over the execution environment (and underlying infrastructure) of the serverless functions. A user can control limited configurable parameters, namely memory, CPU-power, and location. Since the introduction of serverless platforms, there has been a large body of research that aims to demystify the underlying infrastructure, resource provisioning, and eviction policies of commercial serverless platforms. In addition, these works have also looked at different aspects of performance, namely cold starts, concurrency, elasticity, and network and I/O bandwidth shares. These studies help the research and developer community find a suitable serverless platform for their application and also inspire future research. In this paper, we describe and classify various measurement studies in detail and also look at the implications (dependent parameters) of various choices (control parameters) a developer can make (shown in Table 2).

| | AWS Lambda | Google Cloud Function | IBM Cloud Function | Microsoft Azure Function |
| --- | --- | --- | --- | --- |
| Memory (MB) | {128 … 10,240} | 128 × i, i ∈ {1, 2, 4, 8, 16, 32} | {256 … 2048} | up to 1536 |
| Runtime Language | Node.js, Python, Java, C#, Go, PowerShell, Ruby | Node.js, Python, Go | Node.js, Python, Java, C#, Swift, PHP, Docker | C#, F#, Node.js, PHP, TypeScript, Batch, Bash, PowerShell, Java |
| Billing | Execution time based on memory | Execution time based on memory & CPU-power | Execution time based on memory | Execution time based on memory used |
| Billing Interval | 100 ms | 100 ms | 100 ms | 1 ms |
| Configurable Resource | memory | memory & CPU-power | memory | n/a |

Table 1: Serverless platforms

Even though the serverless computing model has provided much-needed agility to cloud-based application development, there are still challenges that need to be addressed to make serverless adoption easy and efficient. In this paper (Section 6), we identify such challenges and present our ideas to tackle these problems, backed by other measurement studies and our own preliminary results on commercial and open-source serverless platforms.

## 3 Measurement Studies

Serverless platforms are largely black boxes for application developers, who submit the code of their application (with a few configurations) and in turn, the code gets executed upon the specified triggers. A user has little to no control over the execution environment, underlying resource provisioning policies, hardware, and isolation, and can influence the performance of their serverless application only through a limited set of configurations. In what follows, we categorize the decisions a developer can make for their serverless applications to get the desired performance or optimize their cost.

One-Time Decisions: These are the decisions that a developer can make before developing and deploying an application, and include selecting the serverless platform, programming language, and location of deployment. These decisions can be dictated by the features that a serverless platform offers such as underlying infrastructure, pricing model, elasticity, or performance metrics - for example, certain languages may have lower cold-start latency or the location of deployment can affect the latency to access the application. We believe changing any of these aspects would incur significant development and deployment cost, hence a developer can make such a decision only once in the life-cycle of the application.
Online Decisions: A developer has more freedom to change other parameters without a serious effort, including resources (memory, CPU) and concurrency limit. As we show later in this section, these parameters can affect the performance and cost of a serverless application. A developer can employ a more proactive technique to configure her serverless function based on the desired performance metric. Configuring these parameters is also important as serverless platforms provide no Service Level Agreement (SLA), i.e. guarantee on the performance of the serverless function, and a developer's only recourse to get the desired performance is through the careful configuration of these parameters. Later in Section 6, we discuss the challenges of designing proactive approaches by employing feedback control systems. These systems would continually monitor the performance of a serverless application and make these online decisions for the application, as shown in Figure 1.
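To make the feedback-control idea concrete, the following is a minimal sketch of such a loop in Python. The latency target, step sizes, and the `update_function_memory` call are hypothetical placeholders rather than a real platform API.

```python
# A minimal sketch of the feedback loop in Figure 1: after each measurement
# window, raise the memory configuration when the observed p95 latency misses
# the target, and lower it when there is ample headroom (less memory also
# means a lower per-unit price). All constants are illustrative.

TARGET_P95_MS = 200
STEP_MB, MIN_MB, MAX_MB = 128, 128, 3008

def next_memory(memory_mb: int, observed_p95_ms: float) -> int:
    if observed_p95_ms > TARGET_P95_MS:
        return min(memory_mb + STEP_MB, MAX_MB)   # buy more CPU share
    if observed_p95_ms < 0.5 * TARGET_P95_MS:
        return max(memory_mb - STEP_MB, MIN_MB)   # reclaim cost headroom
    return memory_mb

# In a real controller this would run once per monitoring window, e.g.:
# update_function_memory("my-func", next_memory(current_mb, measured_p95))
```
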
There have been several measurement studies conducted by academic researchers and independent developers that have attempted to demystify different aspects of commercial and open-source serverless platforms. These studies help a developer make one-time decisions by identifying the underlying resources, i.e. operating system, CPUs, virtualization technique, and by benchmarking various performance aspects of serverless platforms. Moreover, these studies also look at the effect of configurable parameters (online decisions) on the performance and cost of serverless functions establishing the need to configure these parameters carefully.

Figure 1: Feedback control systems to configure serverless functions

In Table 2, we present a classification of previous measurement studies. In this classification, we correlate the decisions (both one-time and online) that a developer or a researcher can make in terms of picking the serverless platform, scripting language, and configurations, with different performance aspects, such as cold-start delay, runtime, cost, etc. Every cell in the table indicates the peer-reviewed studies that have looked at the relationship between the controlled variable (decision) and the dependent parameters (performance). In what follows, we describe in greater detail the findings of these measurement studies and explain the effect of choices on different performance aspects.

### 3.1 Cold Starts

Cold start is the time from receiving a request until the sandbox environment is ready for the execution of a serverless function to begin. Cold start can comprise the time to start the sandbox environment, load code dependencies, and copy the application code.${}^{1}$

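As a rough illustration of how this delay can be observed from the outside, the sketch below times a likely-cold invocation against an immediately following warm one; the endpoint URL is a hypothetical HTTP-triggered function.

```python
# A simple cold-vs-warm latency probe: time one invocation after the function
# has been idle (likely cold), then an immediate second one (warm), and report
# the difference as a rough cold-start estimate.
import time
import urllib.request

FUNC_URL = "https://example.com/hello"  # hypothetical HTTP-triggered function

def timed_call() -> float:
    t0 = time.perf_counter()
    urllib.request.urlopen(FUNC_URL, timeout=60).read()
    return (time.perf_counter() - t0) * 1000

cold_ms, warm_ms = timed_call(), timed_call()
print(f"cold ~{cold_ms:.0f} ms, warm ~{warm_ms:.0f} ms, "
      f"cold-start overhead ~{cold_ms - warm_ms:.0f} ms")
```
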
This is perhaps the most studied aspect of serverless platforms. There have been several peer-reviewed studies attempting to quantify and remedy the effect of cold starts. The cold start comes from the fact that if a function has not been invoked for an amount of time set by the platform (called instance recycling time, and discussed later in this section), the platform destroys the sandbox environment (i.e. container) to free up the resources. On a subsequent new request, the platform will (re-)initialize the sandbox environment and execute the function, hence incurring an extra delay. Studies have found that cold starts can be affected by various online and one-time decisions.

- Choice of language: These studies show that scripting languages (Python, Ruby, JavaScript) usually have significantly lower (up to 100x) cold-start delays than compiled runtimes (Java, .NET, etc.) [77, 100].

- Serverless provider: Studies have shown that different providers can have different cold-start delays depending on their underlying infrastructure or resource provisioning strategy [69, 77, 79, 100].

- Resources: Cold start is also impacted by the resources available to the function, i.e. memory/CPU [77, 100]. This is likely because more resources lead to a faster setup of the execution environment [100].

The above insights can help a user develop an application in a particular language, and also configure resources based on the application's needs. If an application is latency-sensitive, a developer may choose to use a scripting language and configure more resources for the serverless function. One has to be careful with configuring more resources for the serverless function to remedy cold start, as it can increase the cost of running the serverless function. Based on the findings reported in [100] on commercial serverless platforms, AWS Lambda has the lowest cold-start delays. Approaches to circumvent the cold start can be divided into two categories:

1) For serverless platforms: Serverless platforms can improve the cold-start latency by having fast sandboxing techniques or by keeping the sandbox instances warm for a longer time. While the latter approach can be significantly expensive for the platform as it can potentially lead to resource under-utilization (discussed in more detail in Section 3.5), there has been a significant body of research focused on improving the cold-start latency through advanced container-management/sandboxing techniques [34, 43, 50, 79, 82, 92]. These approaches employ container reuse, loose isolation between function instances${}^{2}$, and memory snapshotting and restoring to achieve a cold-start latency that is as low as 10 ms or less [92].

2) For the developers: The aforementioned fast sandboxing approaches will only work if a developer has complete control over the serverless platform. In case a developer is using a commercial serverless platform, their approach to mitigate cold start will be different. In addition to carefully selecting the language and serverless platform to develop and deploy their application, they can also control cold start through carefully configuring resources for the application. There are several published articles [5, 15, 24, 31, 68] that suggest certain design changes in the application to avoid unnecessary cold starts, such as sending dummy requests to the serverless function that perform an early exit without doing any computation. While these approaches may keep the function warm, they can also introduce extra cost, as there is a fixed cost charged for each request and most serverless platforms round up the execution time to the nearest 100 ms, so even if the function performs an early exit, the user would be charged some cost. A recent feature from serverless platforms, such as AWS Lambda [28] and Azure Function [8], allows their users to specify a minimum number of function instances to be kept warm all the time to avoid unnecessary cold starts, but a user is charged for enabling this feature.

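A minimal sketch of this keep-warm pattern is shown below; the "warmup" marker field is an illustrative convention, not a platform feature.

```python
# A sketch of the keep-warm pattern described above: a scheduled dummy request
# carries a marker field, and the function exits early without doing real
# work, keeping its sandbox instance alive.

def handler(event, context):
    if isinstance(event, dict) and event.get("warmup"):
        return {"status": "warm"}   # early exit on a dummy request
    return do_real_work(event)      # normal request path

def do_real_work(event):
    return {"result": sum(event.get("values", []))}
```

Note that, per the pricing discussion above, each such dummy request is still billed for at least one request and one billing interval.
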
---

${}^{1}$ While this can vary based on the serverless platform’s policies, our experiments on AWS Lambda show that the billed duration can include cold-start time if the initialization takes more than 10 seconds (approximately).

${}^{2}$ Function instance refers to the sandbox environment executing the code of a serverless function.

---

| Measured ↓ Control → | Serverless Platform | Language | Memory/CPU | Location |
| --- | --- | --- | --- | --- |
| Cold Start | [34, 69, 74, 77, 79, 100] | [74, 77, 100] | [69, 77, 100] | x |
| Runtime/Cost | [34, 39, 67, 69, 74, 77, 79, 100] | [100] | [33, 70, 100, 102] | [33, 53] |
| Concurrency | [67, 69, 70, 100] | [74] | x | x |
| I/O throughput | [67, 100] | x | [33, 100] | x |
| Network throughput | [67, 100] | x | [100] | x |
| Instance Lifetime | [70, 100] | x | [100] | x |
| Underlying Infrastructure | [69, 100] | x | [70] | x |

Table 2: Measurement Studies - each cell identifies the studies establishing a relation between the respective column (decision) and row (performance/platform characteristics) - 'x' means no documented relation between decision and performance

Summary: Cold start can be impacted by the virtualization techniques and function eviction policies employed by the serverless platform. From a developer's perspective, the impact of cold start can be controlled through the configurable resources and a careful choice of the programming language.

### 3.2 Cost and Performance

The cost of cloud usage for serverless functions on a commercial serverless platform $p$ can be calculated as follows:
$$
\operatorname{cost} = T(m) \times C(p, m) + G(p) \tag{1}
$$

where $T(m)$ is the run time of the serverless function given resources $m$, and $C(p, m)$ is the cost per unit time of resources $m$ on platform $p$. $G(p)$ denotes any fixed cost, such as the API gateway for AWS Lambda; if there is no fixed cost, $G(p)$ can be considered zero. Equation (1) shows that the cost of cloud usage directly depends on the run time of the serverless function and the price per unit time for resources $m$ [3, 9, 17, 20]. Hence, all the factors that can impact the run time of a function will also impact the cost of cloud usage.

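As a concrete reading of Equation (1), the sketch below computes the bill for a batch of invocations under AWS-Lambda-style pricing, where the billed duration is rounded up to the billing interval and the unit price scales with configured memory; all rates are illustrative, not the provider's actual prices.

```python
# A minimal sketch of Equation (1): cost = T(m) * C(p, m) + G(p), summed over
# invocations, assuming the billed duration is rounded up to a fixed billing
# interval and the unit price scales linearly with configured memory.
import math

def faas_cost(exec_time_ms: float, memory_mb: int, invocations: int,
              price_per_gb_s: float = 0.0000166667,  # illustrative C(p, m) base
              fixed_per_req: float = 0.0000002,      # illustrative G(p)
              billing_interval_ms: int = 100) -> float:
    billed_ms = math.ceil(exec_time_ms / billing_interval_ms) * billing_interval_ms
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return invocations * (gb_seconds * price_per_gb_s + fixed_per_req)

# Example: one million invocations of a 230 ms function configured with 512 MB
# are each billed as 300 ms under a 100 ms billing interval.
print(f"${faas_cost(230, 512, 1_000_000):.2f}")
```
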

Figure 2: Performance and cost on AWS Lambda

To observe the effect of configurable resources (e.g., memory and CPU power) on the performance of a serverless function, we deployed various (I/O-intensive, memory-intensive, and CPU-intensive) functions on Amazon Lambda and invoked them with varying resource configurations. We show the observed trends in the performance and cost with respect to the resources in Figure 2. It can be seen that more resources lead to faster execution of the serverless function, but the performance gain is limited after a certain point. This observation also confirms previous findings made in [33, 51, 74], which report a similar effect of resources on performance.

Other factors that can affect the performance are summarized below:
Cold Starts: A serverless platform may decide to terminate the sandbox environment if it has been inactive for a certain amount of time, as explained in Section 3.5. Hence, serverless functions with less frequent invocations may incur the extra latency of cold start.
Concurrency: Previous studies [67, 69, 70, 100] looked at the effect of concurrency on the performance of serverless functions and found that the performance can be negatively impacted by a higher concurrency level. This is due to the particular resource provisioning policies of the serverless platforms, as reported in [100]. In particular, AWS Lambda and Azure Function try to co-locate the function instances, hence causing more contention for resources. Recent work [88] shows that concurrency configurations can also impact the performance of serverless functions running on the open-source serverless platform Knative [22].

Co-location: Previous studies [33, 100] show that co-location of serverless functions on the same underlying resource can also result in significant performance degradation. Our preliminary experiment on OpenWhisk also confirms these findings.

Underlying Infrastructure and Policies: As discussed in Section 3.6, the underlying infrastructure of commercial serverless platforms consists of diverse resources, and in addition, resource provisioning policies for the execution of a serverless function can also vary significantly from one platform to the other [100]. Hence, these aspects can also introduce significant uncertainty in performance.

Keeping in mind the tightly coupled nature of performance and cost of serverless functions, it is important to find the "best" configuration of parameters (online decisions), e.g. memory, CPU, concurrency, such that they not only meet performance expectations but also optimize the cost of cloud usage. Previous approaches [33, 51, 53, 88] use various machine learning and statistical learning techniques to configure parameters, e.g. memory, CPU, concurrency, and location, for serverless applications deployed on commercial and open-source serverless platforms. We discuss these approaches in more detail in Section 6.1.

Summary: The performance of a serverless function can be impacted by its configurable resources, choice of programming language, and the choice of serverless platform. The usage cost is calculated based on the configurable resources, the execution time, and the unit-time cost specified by the serverless platform.

### 3.3 Concurrency or Elasticity

Concurrency is the number of function instances serving requests for the serverless function at a given time. On-demand scaling by the serverless platforms - i.e. in case the demand for the serverless application increases, the serverless platform initializes more function instances to serve these requests concurrently - is one of the distinct features of the serverless computing model. Unlike IaaS, a user does not have to specify the scaling policies; rather, the serverless platforms provision more function instances of the serverless function to cater to increasing demand. Most serverless platforms can scale up to a certain limit and enqueue any subsequent requests until one of the previous requests finishes execution and resources are freed. A platform's ability to scale quickly, and the maximum concurrency level that it can achieve, can be very critical to applications with fluctuating demand. To observe the maximum concurrency level that a commercial platform can support, Wang et al. [100] performed a comprehensive measurement study on three major cloud providers: AWS Lambda, GCF, and Azure Function. They found that out of all three, AWS Lambda was the best, achieving a maximum concurrency level of 200${}^{3}$, while GCF and Azure Function were unable to achieve the advertised concurrency levels. FaaSdom [74], a recent benchmarking suite for serverless platforms, also found that AWS Lambda achieves the best latency in the face of an increased request rate for a serverless application - demonstrating its ability to quickly scale out. They also found that one-time decisions, such as language and underlying operating system, can also affect the scalability of a serverless application. Another study [67] found that AWS Lambda and GCF perform better for varying demand when compared to IBM Cloud Function and Azure Function. We believe a platform's inability to scale well can come from the fact that scale-out is decided based on measured CPU load, a queue length, or the age of a queued message, which can take time to be logged. On the other hand, AWS Lambda launches a new function instance for a new request if current function instances are busy processing requests, as reported in [10, 67]. Using this proactive approach, AWS Lambda can scale out quickly without relying on any other measured indicator. As elasticity is one of the most advertised features of serverless computing, commercial serverless platforms are striving to improve their service by offering higher concurrency limits. AWS Lambda's recent documentation indicates that concurrency limits have increased significantly (>3,000) and a user can request a further increase [25].

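A simplified version of the probing methodology used in such studies can be sketched as follows; the function URL is hypothetical, and the deployed function is assumed to sleep briefly and return a per-sandbox identifier.

```python
# A sketch of a concurrency probe in the spirit of these studies: fire N
# simultaneous invocations of a function that sleeps briefly and returns its
# sandbox identifier, then count how many distinct instances served them.
import concurrent.futures
import json
import urllib.request

FUNC_URL = "https://example.com/probe"  # hypothetical HTTP-triggered function
N = 200

def invoke(_) -> str:
    with urllib.request.urlopen(FUNC_URL, timeout=60) as resp:
        return json.load(resp)["instance_id"]  # assumed response field

with concurrent.futures.ThreadPoolExecutor(max_workers=N) as pool:
    ids = list(pool.map(invoke, range(N)))

print(f"{len(set(ids))} distinct instances served {N} concurrent requests")
```
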
Serverless platforms, such as Apache OpenWhisk and Knative from Kubernetes, allow a user to configure a container-level concurrency limit, i.e. the number of requests that a function instance can serve in parallel (where each request runs as a separate thread) [23, 27]. On the other hand, Azure Function allows a user to configure the maximum number of function instances that can be launched on a single VM to avoid the possibility of running out of underlying VM resources [11]. Schuler et al. [78] show that the container-level concurrency limit can affect the application's performance. They also suggest an AI-based (reinforcement learning) technique to configure the concurrency limit for Knative. The fact that a user can configure this particular concurrency limit on the fly also makes this limit an online decision. A user should be careful with configuring the container-level concurrency limit, as function instances running prior to the configuration change will keep running with the old configuration (until terminated by the platform based on its settings), and only the new instances will assume the new concurrency limit. A user should wait for the system to be stable with the new configuration (i.e., all function instances with the old configuration are terminated) before making any further changes.

Summary: Serverless applications can elastically scale without any additional configurations. The maximum number of function instances that can run in parallel is determined by the serverless platform and can vary based on the cloud provider. Studies have found that among commercial serverless platforms, AWS Lambda scales best in terms of throughput.

### 3.4 CPU, Network and I/O

Most of the commercial and open-source serverless platforms allow limited to no control over the execution environment of serverless functions. While a user can only configure certain parameters, e.g. memory, CPU-power, location, and concurrency, other resources such as the CPU, network, and I/O shares are decided by the serverless platform. In [100], the authors' empirical results show that when there is no contention, AWS Lambda puts an upper bound of $2m/3328$ on the CPU share of a function with memory $m$, while in the case of co-location, function instances share the CPU fairly and each instance's share becomes slightly less than, but still close to, the upper bound. Similarly, Google also allocates the CPU share according to the memory allocated to the function. CPU allocation in proportion to the memory assigned to a function is also specified in AWS Lambda and GCF's documentation [6]. Contrary to GCF and AWS Lambda, IBM Function does not allocate the CPU share in proportion to the memory allocated to the function, as reported in [74]; rather, it keeps it constant, as an increase in memory does not affect the performance of the function.

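Reading that bound as a quick calculation (a sketch based on the formula reported in [100], not official provider documentation):

```python
# Evaluating the CPU cap 2m/3328 reported in [100] for AWS Lambda: a function
# with memory m (in MB) gets at most this fraction of the host's CPU cores.
for m in (128, 512, 1024, 1664, 3008):
    print(f"{m:>5} MB -> CPU share <= {2 * m / 3328:.2f} cores")
# Under this formula, a full core is reached at 1664 MB of configured memory.
```
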
---

${}^{3}$ This study was conducted in 2018. We believe higher concurrency levels can be achieved now given system upgrades.

---

On the other hand, with Azure Function, the CPU share allocated to a function was found to be variable, with the serverless function getting the highest CPU share when placed on 4-vCPU VMs${}^{4}$. In the case of co-location, the CPU share of co-located instances can drop. Similar to the CPU share, I/O and network performance can also be affected by the resources configured for the serverless function and co-location, as reported in [33, 67, 100]. Our preliminary experiments also confirm this for the I/O performance, where the performance of I/O-intensive serverless functions improves when allocated more memory, as illustrated in Figure 2.

Summary: The CPU, network, and I/O bandwidth of a serverless function can be impacted by the co-location of multiple functions on the same underlying resource (VM) and the instance placement policies of the serverless platform. An application developer can run various benchmarks (or consult measurement studies) to find the most suitable provider for her application.

### 3.5 Instance Recycling Time and Lifetime

When a serverless function is first executed, the serverless platform creates the sandbox environment, loads the function's code in it, and executes the code to return the results. After the results are returned, the sandbox environment is kept in a warm state for a certain amount of time (called instance-recycling-time) to serve any subsequent request for the same function. If during that time, no subsequent request arrives, the sandbox environment is terminated so as to reuse the resources. A serverless platform may decide to terminate the sandbox environment after it has been in use for a certain period regardless of the usage. This time is called instance-lifetime.
Both instance-recycling-time and instance-lifetime are very critical values to configure, not only for the serverless platform but also for the users. A low value for these variables would mean that a serverless platform can free resources quickly and re-purpose them for other applications, increasing the utilization of the underlying resources; but for users, it can be devastating, as the serverless functions would experience unnecessary cold starts, degrading the performance of their serverless application. For a commercial serverless platform, it can lead to potential revenue loss by losing customers. From the user's perspective, longer values would be ideal, as their application would always find its serverless functions warm, hence reducing latencies, but this may end up reducing the utilization of the underlying resources for the serverless platform${}^{5}$.

For open-source serverless platforms [21, 90], a user can configure these values on their own, and there have been studies suggesting the use of popularity analysis to configure these values on a per-application basis [90]. But in commercial serverless platforms, these values are decided by the platform, and a user has no control over the instance-recycling-time and instance-lifetime. There have been several peer-reviewed studies that looked at this aspect of commercial serverless platforms. Most of these studies followed a similar technique to infer the values of instance-recycling-time and instance-lifetime. Commercial serverless platforms allow a serverless function to use a limited amount of persistent storage for the time a sandbox environment is in use. Previous studies [69, 100] use this storage to store an identifier for the serverless function when the function is invoked for the first time. Later, they invoke the same function again and check if the identifier is still present; if it is not, then the sandbox environment was destroyed and the latter execution was done in a new environment. They show that different serverless platforms have different instance-recycling times, with Google Cloud Function having the longest of all (more than 120 minutes). AWS Lambda's recycling time is reported to be around 26 minutes. The authors could not find a consistent value for Azure Function. Another recent study [14] claims this value to be 20-30 min for Azure Function, 5-7 min for AWS Lambda, and 15 min for Google Cloud Function. Hence, if a serverless function stays inactive for this instance-recycling-time, the subsequent request would incur an extra delay equal to a cold start.

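The probing technique can be sketched as a function like the following, which uses the sandbox's ephemeral storage (/tmp on AWS Lambda); the handler shape and marker path are illustrative.

```python
# A sketch of the instance-recycling probe from [69, 100]: write a marker to
# the sandbox's ephemeral storage (/tmp on AWS Lambda) on the first invocation;
# if a later invocation no longer sees it, the instance was recycled in between.
import os
import time
import uuid

MARKER = "/tmp/instance_marker"

def handler(event, context):
    if os.path.exists(MARKER):
        with open(MARKER) as f:
            instance_id, born = f.read().split(",")
        return {"warm": True, "instance": instance_id,
                "age_s": time.time() - float(born)}
    # First request in this sandbox: record its identity and birth time.
    with open(MARKER, "w") as f:
        f.write(f"{uuid.uuid4()},{time.time()}")
    return {"warm": False}
```

Invoking this probe with increasing idle gaps, and noting the gap at which "warm" flips back to false, brackets the instance-recycling-time.
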
In an independent study [29], the authors established a relation between instance-recycling-time and resources (i.e. memory) configured for the serverless function on AWS Lambda. They found that a large value of memory configured for the serverless function tends to give it a small instance-recycling-time ${}^{6}$ .
Regarding instance-lifetime, using a similar technique, the authors in [100] found that Azure Function has the longest instance-lifetime as compared to AWS Lambda and Google Cloud Function. They also found that in the case of Google Cloud Function, the lifetime of an instance can be affected by the resources configured for the function. It is reported that the instance-lifetime of an instance with 128 MB and 2,048 MB memory is 3-31 minutes and 19-580 minutes, respectively.

---

${}^{4}$ Placement of function instances on VMs can be random from a user’s perspective. Also, notice that Azure Function does not allow users to configure any resources for the serverless function.

${}^{5}$ Remember that a user does not pay for idle times in serverless computing; hence, this is a lose-lose situation for the serverless platform or cloud provider.

${}^{6}$ We could not find any peer-reviewed study to validate this claim.

---

Summary: For a serverless function, the instance-recycling-time is decided by the serverless platform. A serverless platform can employ more proactive approaches to configure the instance-recycling-time based on the application's popularity, as suggested in [90]. For an application developer, a low value of instance-recycling-time would affect performance by incurring extra cold-start delays. A developer can reduce the effect of cold starts by carefully choosing the language of the application and the configurable resources.

### 3.6 Underlying Infrastructure

In a serverless computing model, a user only focuses on writing the code, and it is the serverless platform's responsibility to execute this code on any infrastructure/hardware. A user has no control over the underlying resources (types of VM where the application code would be executed). A developer may be interested in knowing the underlying infrastructure where their serverless application would be running to optimize the performance of their applications or to make other assumptions about the running environment of their application.
There have been several studies that tried to demystify the underlying virtual infrastructure of commercial serverless platforms. Lloyd et al. [69] discovered that serverless functions have access to the "/proc" file system of the underlying VMs running the Linux operating system. By inspecting "/proc/cpuinfo", the authors discovered that the underlying VMs run Amazon Linux [4] and use CPUs that are similar to those of EC2 instances. Wang et al. [100] went one step further: using a similar approach, the authors conducted a wide study of all the big commercial serverless platforms, i.e. AWS Lambda, Google Cloud Function, and Azure Function. They found that Google Cloud Function successfully hides the underlying resources, and the only information they could obtain was that there are four unique types of underlying resources. By inspecting "/proc/cpuinfo" and "/proc/meminfo", they found that AWS Lambda uses five different types of VMs having different vCPU and memory configurations, mostly 2 vCPUs and 3.75 GB physical RAM, which is the same as c4.large instances from EC2. The authors also noticed that Azure Function has the most diverse underlying infrastructure. While inspecting the contents of "/proc/*", they came across VMs with 1, 2, or 4 vCPUs, with vCPUs of either Intel or AMD models.

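The core of this fingerprinting technique can be sketched as a serverless function like the one below (Linux-only, and assuming the platform exposes procfs as these studies found).

```python
# A sketch of the /proc-based fingerprinting used in [69, 100]: a serverless
# function reads the host's procfs and returns CPU/memory details that hint at
# the underlying VM type. Assumes a Linux sandbox that exposes /proc.
import os

def handler(event, context):
    with open("/proc/cpuinfo") as f:
        model = next(line.split(":", 1)[1].strip()
                     for line in f if line.startswith("model name"))
    with open("/proc/meminfo") as f:
        mem_total_kb = int(f.readline().split()[1])  # first line is MemTotal
    return {"cpu_model": model,
            "mem_total_kb": mem_total_kb,
            "vcpus": os.cpu_count()}
```
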
Knowing the underlying infrastructure can help developers identify various performance-related issues. For example, a serverless function running on Azure Function can have a larger CPU share when placed on a VM with 4 vCPUs than when placed on other types of VMs. Also, knowing the diversity of the underlying infrastructure can help researchers explain the variability in performance for a given serverless platform.

Summary: Serverless platforms have diverse underlying infrastructure, and this can introduce significant variability in the performance of a serverless function even when executed with the same configurable resources. Careful selection of the serverless platform by the application developer, and the usage of more proactive approaches such as COSE [33] to dynamically configure resources for serverless functions, can mitigate this variability in performance.

## 4 Serverless Economic Model

Commercial serverless platforms follow "pay as you go" pricing models. This means, a user only pays for the time the code is executing and not the idle time. On the other hand, other cloud services, like Amazon's EC2 and Google's VM, have pricing models that not only charge based on minutes and seconds of usage but also have a different price per unit time as compared to their serverless counterparts. In addition to the price factor, these VMs take extra labor to configure and maintain. On the contrary, a serverless function takes minimal effort to configure and maintain. Another key benefit of using the serverless programming model is that serverless platforms assume the responsibility of scaling the application, unlike VM-based infrastructures where users have to specify scaling policies.
Given the execution model of serverless platforms for a certain application, the pricing model, and the demand (request arrival rate), one can estimate the cost of deploying a serverless application on a commercial serverless platform. Similarly, a user can calculate the cost of deploying a cloud application by renting VMs from a commercial cloud provider. In [13], the authors present an economic model of deploying an application on commercial serverless platforms (FaaS), such as Amazon Lambda, and compare it with the economic model when only IaaS resources (VMs) are used to deploy the application. Specifically, the FaaS economic model can be described by:

$$
{COST}_{FaaS} = {\text{EconomicModel}}_{FaaS}(\text{rate, exec\_time, req\_res, fixed\_cost, unit\_price}) \tag{2}
$$

where ${COST}_{FaaS}$ is the total cost of running an application on a serverless platform. This cost depends on the rate of function invocations (rate), the execution time (exec_time) with resources (req_res) configured for each request (e.g. memory, CPU), and unit_price, the price per unit time of execution for the specified resources. fixed_cost indicates any additional fixed cost, such as that of an API Gateway.

Similarly, the cost for an IaaS-based deployment (${COST}_{IaaS}$) can be calculated as follows:

$$
{COST}_{IaaS} = {\text{EconomicModel}}_{IaaS}(\text{rate, exec\_time, req\_res, vm\_cost, vm\_config, max\_vm\_reqs}) \tag{3}
$$

where rate is the request arrival rate for the IaaS-based deployment, vm_cost is the cost of renting a particular VM with configuration vm_config, and max_vm_reqs is the maximum number of requests that one VM can handle at a given time without violating the SLA.

The key takeaways from the study in [13], following the economic models given by (2) and (3), are:
- Serverless platforms are cost-effective for deploying an application when the demand (request arrival rate) is below a certain threshold, referred to as the Break-Even Point (BEP). Beyond the BEP, IaaS resources are cheaper to use due to their relatively lower cost per unit time.

- The authors also consider the different execution times and resources allocated to each request for the application on both IaaS and FaaS, and show that the resources allocated for the execution of each request can also affect the value of the BEP; a numerical sketch follows this list. Previous studies such as [33, 35] address the issue of finding the optimal resources for an application in the FaaS and the IaaS model.

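To make the break-even intuition concrete, the following back-of-the-envelope sketch sweeps the request rate and reports which side of the BEP each rate falls on; all prices and capacities are illustrative, not taken from [13].

```python
# A back-of-the-envelope comparison of Equations (2) and (3): sweep the
# request rate and find where per-invocation FaaS billing stops being cheaper
# than renting VMs. All prices and capacities are illustrative.
import math

HOURS_PER_MONTH = 30 * 24

def monthly_faas(rate_per_s, exec_time_s, mem_gb,
                 price_per_gb_s=0.0000166667, per_req=0.0000002):
    reqs = rate_per_s * HOURS_PER_MONTH * 3600
    return reqs * (exec_time_s * mem_gb * price_per_gb_s + per_req)

def monthly_iaas(rate_per_s, exec_time_s, vm_hour=0.10, max_vm_reqs=50):
    concurrent = rate_per_s * exec_time_s          # offered load (Little's law)
    vms = max(1, math.ceil(concurrent / max_vm_reqs))
    return vms * vm_hour * HOURS_PER_MONTH

for rate in (1, 5, 10, 20, 50):
    f, i = monthly_faas(rate, 0.2, 0.5), monthly_iaas(rate, 0.2)
    print(f"{rate:>3} req/s: FaaS ${f:8.2f}  IaaS ${i:6.2f}  "
          f"-> {'FaaS' if f < i else 'IaaS'} cheaper")
```

Under these toy numbers the BEP sits around 15 requests per second; the real threshold depends on the actual prices, execution times, and VM capacities plugged into Equations (2) and (3).
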
The economic model study presented in [13] confirms that serverless platforms are better suited for applications with low demand and short-lived computations.
Summary: Serverless is more economical for applications with low-rate and bursty demand.

## 5 Serverless Usage

Even though serverless computing is a relatively new paradigm and still evolving, there have been several attempts from independent developers and researchers to deploy various applications using this computing model. We believe that the following distinct features of serverless computing are the main reasons for its adoption and increasing popularity.
- Pricing Model: As mentioned earlier, serverless platforms offer a unique "pay as you go" pricing model. A user does not pay for deploying their application or for idle times. Whereas in an IaaS model, if a user has rented a VM, she pays regardless of the usage.
- No Back-end Maintenance: The serverless computing model offloads a lot of back-end management from the application developer to the serverless platform, which is responsible for the set-up and maintenance of underlying resources as well as scalability.
- Quick Provisioning: Serverless platforms use advanced virtualization techniques, such as containers, to provision new instances of the application, which can be provisioned on the order of tens of milliseconds [34, 43, 50, 82, 92, 100]. This feature allows a serverless application to scale out, in case of increasing demand, without suffering from performance degradation.

- On-Demand Scalability: Unlike IaaS, where a developer has to configure scaling policies, serverless platforms assume the responsibility of scaling an application in case there is an increase in demand.
Considering the above cost, performance, and management advantages, serverless computing is becoming a popular choice to build cloud applications. We next look at various classes of applications that are best suited for the serverless computing model. We also discuss the challenges and open issues that must be addressed to leverage the full potential of serverless computing for these classes of applications.

### 5.1 Scientific Workflows

Scientific workflows are a popular means to compose and execute computations for a variety of scientific problem-solving purposes. Recently, there have been several proposals suggesting the use of a serverless computing model to implement scientific workflows.

Most serverless platforms offer an interface to build applications in a high-level scripting language such as Python or JavaScript. This feature can be particularly helpful for researchers without an infrastructure background, as serverless programming has less of a learning curve than an IaaS model, where, in addition to learning the development model, they have to manage the infrastructure as well. Beyond the ease of development, the particular pricing model and on-demand elasticity of serverless computing can benefit such applications in terms of both cost and performance. For example, consider the workflow shown in Figure 3. In an IaaS-based deployment, a static allocation of resources would lead to either resource under-utilization (higher cost) or performance degradation (lower cost), as the various stages need varying resources. In the FaaS model, benefiting from on-demand scalability, the workflow can spawn any number of processes at any stage while paying only for the actual execution time.

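As a sketch of this fan-out pattern (assuming AWS Lambda via boto3; the function name `montage_stage` is hypothetical), a workflow driver can invoke one function instance per input chunk and let the platform provision the capacity:

```python
import json
from concurrent.futures import ThreadPoolExecutor

import boto3  # assumes AWS credentials are configured in the environment

lambda_client = boto3.client("lambda")

def run_task(chunk):
    # Each chunk becomes one serverless invocation, billed only for its runtime.
    resp = lambda_client.invoke(
        FunctionName="montage_stage",  # hypothetical workflow-stage function
        Payload=json.dumps({"chunk": chunk}).encode(),
    )
    return json.loads(resp["Payload"].read())

def fan_out(chunks, max_parallel=64):
    # Spawn as many concurrent invocations as this stage of the workflow needs.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(run_task, chunks))
```
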
Malawski et al. [76] discuss the potential of using serverless computing for scientific workflows. They also implement an astronomical workflow called Montage [26] using Google Cloud Function in conjunction with HyperFlow [41]. Their programming model can be easily extended to other workflows and serverless platforms. The authors in [38, 91, 101] show that serverless computing can be employed to solve various mathematical and optimization problems. Moreover, [62, 65, 66] show that the on-demand computation and scalability provided by serverless computing can be leveraged by biomedical applications.


|
| 244 |
+
|
| 245 |
+
Figure 3: Example Workflow
However, the stateless nature of serverless functions can adversely affect the cost and performance of such applications. In scientific workflows, an intermediate computation stage may need access to the results of previous stages, so each stage may have to persist its results in an external database. This can introduce a significant overhead for storing and retrieving data, while also adding the cost of using the database service. Recent approaches, such as SAND [34], suggest reusing/sharing containers across executions of functions that belong to the same application. Container reuse/sharing can help reduce cold starts, as creating a new thread (function instance) takes significantly less time than starting a new container, and shared libraries need to be loaded into memory only once. Local caches (on the VMs), in turn, are helpful when serverless functions share data with other functions or access data from an external database [94, 95]. We believe both container reuse/sharing and local caching can benefit the serverless implementation of scientific workflows.

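The statelessness overhead discussed above typically looks like the following sketch (assuming S3 via boto3; the bucket and key names are hypothetical): every stage pays a round trip to external storage on entry and exit.

```python
import json

import boto3  # assumes AWS credentials are configured

s3 = boto3.client("s3")
BUCKET = "workflow-intermediate-results"  # hypothetical bucket

def transform(records):
    # Placeholder for the stage's actual computation.
    return {"n_records": len(records)}

def stage_handler(event, context):
    # Fetch the previous stage's output from external storage (read overhead)...
    body = s3.get_object(Bucket=BUCKET, Key=event["prev_key"])["Body"].read()
    result = transform(json.loads(body))
    # ...and persist this stage's output for the next stage (write overhead).
    s3.put_object(Bucket=BUCKET, Key=event["out_key"], Body=json.dumps(result))
    return {"out_key": event["out_key"]}
```
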
### 5.2 Machine Learning and Data Processing
Since the advent of serverless computing, there have been several efforts exploring the possibility of using this computing model to deploy machine learning applications for its performance and elasticity. Frameworks such as MArk [103], Spock [59], Cirrus [44], and others [63, 96] explore deploying various machine learning applications on serverless platforms. The authors in [55, 99] leverage the high level of parallelism offered by serverless platforms to train machine learning models. While "pay as you go" pricing, on-demand scaling, and minimal cold starts make serverless computing a good fit for deploying machine learning models, a developer should be careful when opting for serverless computing, as it provides no SLA on performance, and these models (particularly inference models [59, 103]) may have strict performance requirements. We address this issue in more detail in Section 6 and show how proactive approaches to configuring serverless functions can achieve the desired performance. We also believe that the introduction of more features, such as GPU-enabled hardware, would make serverless computing more attractive for deploying machine learning models.

Serverless computing, with its on-demand, cost-effective computation power and elasticity, has also been explored for deploying stream processing applications [30, 71]. Video processing is one such example: a user may want to extract useful information from an incoming video stream, spawning a serverless function for each new incoming frame. Recent works [37, 56, 104] describe the implementation of video processing frameworks using serverless functions. Moreover, several articles show that various data processing approaches, such as MapReduce, can also leverage serverless computing [58, 86].

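As a sketch of the MapReduce pattern on FaaS (the handler names and event shapes below are our assumptions), each mapper runs as its own short-lived function over one shard of the input, and a reducer merges the partial results:

```python
from collections import Counter

def map_handler(event, context):
    # One invocation per input shard (e.g. one text chunk, or the metadata
    # extracted from one video frame); returns a partial word count.
    return dict(Counter(event["chunk"].split()))

def reduce_handler(event, context):
    # Merges the partial results emitted by the mapper invocations.
    total = Counter()
    for partial in event["partials"]:
        total.update(partial)
    return dict(total)
```
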
We believe that the stateless nature and arbitrary placement of serverless functions without considering data locality can pose a significant performance challenge. Training of a machine learning model may need access to data from an external database or may need to repeatedly access the same data. For example, for a regression model or neural network, every time the model weights are updated, the test (validation) dataset needs to be retrieved to evaluate the accuracy of the model. In this case, placing the serverless function closer to the data source would benefit the application (i.e., shipping the computation to the data, as opposed to shipping data to the computation). While previous approaches, such as SAND [34] and Lambdata [95], address the data locality issue by introducing local data caches for subsequent use, we did not come across any approach that considered data locality for serverless function placement in the case of external data sources.
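
One common mitigation, sketched below under the assumption of a warm (reused) container, exploits the fact that module-level state survives across invocations of the same function instance: the validation set is fetched from the (hypothetical) external store only on a cold start and reused thereafter.

```python
import json

import boto3  # assumes AWS credentials are configured

s3 = boto3.client("s3")
_validation_set = None  # module-level cache; survives warm invocations

def model_accuracy(weights, dataset):
    # Placeholder for the actual evaluation of the updated model.
    return 0.0

def evaluate_handler(event, context):
    global _validation_set
    if _validation_set is None:  # pay the fetch cost only on a cold start
        obj = s3.get_object(Bucket="ml-datasets", Key="validation.json")  # hypothetical
        _validation_set = json.loads(obj["Body"].read())
    return {"accuracy": model_accuracy(event["weights"], _validation_set)}
```
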
### 5.3 Internet of Things (IoT)
IoT connects the devices around us in daily life, such as medical devices, sensors around the home and city, monitoring systems, and personal devices like Amazon Alexa, to improve our quality of life. These devices are usually low-powered, with minimal computation capability, and may need access to external computing power to make important decisions. Serverless is a natural fit for IoT devices/applications, as it provides on-demand, cost-effective computation power. Serverless platforms already allow a user to deploy serverless functions on the edge [2], making access to these functions much faster. Both on-demand computation and low-latency access make serverless computing an ideal candidate for running IoT applications. Recent approaches [47, 83, 84] explore the possibility of using serverless computing for IoT applications and services. Pinto et al. [85] look at the feasibility of using serverless functions for IoT devices and provide a framework to optimize performance. Amazon's Alexa offers a unique and interesting use case [7], where a user can build the desired functionality for Alexa devices using Amazon Lambda's computation power.

Serverless applications for IoT devices may require a performance guarantee (SLA) to meet certain QoS standards. For example, a voice command needs to be analyzed within a certain amount of time for a good user experience. We quantify the performance of such an application with two parameters: access latency (propagation delay) and execution time on the serverless platform. As mentioned earlier, serverless platforms are already making an active effort to reduce access latency by allowing a user to deploy serverless applications on edge infrastructure, but this deployment may follow a different pricing model and has different resource limits compared to the standard (core) infrastructure [3, 53]. To deal with the limited resources and different pricing models, a developer may decide to distribute her application across the edge and core infrastructure. This is a challenging problem given the trade-off between access latency and execution time: a serverless function can be reached faster on the edge infrastructure but may have a longer execution time because of limited resources, while a function on the core infrastructure may execute faster but suffer from higher access latency. We believe that approaches similar to Costless [53] and COSE [33] can help a developer with an efficient and cost-effective division of computation across the spectrum of edge and core infrastructure. These approaches not only consider the performance of serverless functions on the platform but also incorporate the total response time in their models.

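The trade-off can be made explicit with a simple response-time model (all numbers below are illustrative assumptions): the end-to-end response time is access latency plus execution time, and the nearer tier is not always the faster one overall.

```python
def total_response_ms(access_ms, exec_ms):
    # End-to-end latency seen by the IoT device.
    return access_ms + exec_ms

edge = total_response_ms(access_ms=5, exec_ms=220)   # nearby, but resource-limited
core = total_response_ms(access_ms=60, exec_ms=90)   # far away, but executes faster
print("run on", "edge" if edge < core else "core")   # -> "run on core" here
```
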
### 5.4 Virtual Communication Networks
To meet the increasing demand on communication networks, researchers have developed software-defined networking (SDN) and network function virtualization (NFV), which decouple the various networking functionalities from the hardware, giving services greater freedom to evolve and be robust. Both NFV and SDN can run over any cloud computing service. Aditya et al. [32] present a set of general requirements that a cloud computing service must satisfy to effectively host SDN- and NFV-based services. The authors believe that, for its elasticity, performance, event-driven nature, and ease of management, serverless computing is a good fit for hosting some SDN- and NFV-based services, e.g., SDN controllers, network anomaly detection, and media processing functions. Moreover, Chaudhry et al. [46] present an approach to improve QoS on the edge by deploying virtual network functions using serverless computing.

However, porting SDN- and NFV-based services to serverless computing poses a new set of challenges for the research community. A user has to be careful about the pricing model, as most serverless functions implementing an NFV service can be short-lived (on the order of a few milliseconds). Since most commercial serverless platforms round the execution time up to the nearest 100 ms when charging a user, this rounding overhead can quickly grow when the application performs many executions; one example is an anomaly detection system where thousands of network packets need to be analyzed. In addition to cost, function cold starts, statelessness, and arbitrary function placement can also reduce the QoS of delay-sensitive SDN- and NFV-based services. As described in Section 3, a user may have to rely on the serverless platform to implement various optimizations, e.g., advanced virtualization techniques, local caches, and container reuse, to circumvent these performance limitations.

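A quick back-of-the-envelope calculation shows how severe the rounding overhead can be for such short-lived functions (the 100 ms quantum mirrors the platforms discussed above; billing details vary by provider):

```python
import math

def billed_overhead(actual_ms, quantum_ms=100):
    # Factor by which billed time exceeds actual execution time.
    return math.ceil(actual_ms / quantum_ms) * quantum_ms / actual_ms

print(billed_overhead(5))    # 20.0 -- a 5 ms packet analysis pays for 100 ms
print(billed_overhead(95))   # ~1.05 -- longer functions amortize the quantum
```
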
### 5.5 Improving QoS of Cloud Applications
Serverless functions can be implemented and deployed quickly. Moreover, a user does not have to worry about scalability: a serverless platform, based on the number of invocations and the configuration, provisions more instances of the same serverless function to cater to dynamic demand. In addition to automatic scaling, provisioning these serverless function instances is much faster than provisioning traditional cloud resources, e.g. VMs, because serverless functions execute in lightweight sandbox environments. These features have intrigued researchers to study the feasibility of serverless functions as backup resources that absorb an application's transient demand while VM resources are being provisioned. Recently introduced frameworks, such as MArk [103], Spock [59], and FEAT [81], leverage serverless functions in conjunction with traditional cloud services to deploy delay-sensitive applications with strict SLAs. They show that using both IaaS- and FaaS-based resources can decrease SLA violations significantly. Moreover, there have been suggestions to deploy lightweight components of an application requiring high elasticity and computation throughput as serverless functions, while keeping the rest of the application on traditional resources [37, 75]. We discuss the advantages and challenges of building such frameworks and approaches in Section 6.

Summary: The main driving factors for serverless adoption are quick provisioning, on-demand scaling, and a true "pay as you go" pricing model. While serverless adoption is increasing, certain challenges still need to be addressed. An application developer would benefit from tools that help her translate an application into the serverless programming model, find a suitable serverless platform for a given application, and configure resources for serverless functions. On the other hand, a cloud provider can improve its serverless offering by providing predictable performance, lower cold-start latencies, efficient function placement, and state management/data caching across multiple instances of a serverless function.

## 6 Future Research
In the previous section, we discussed the suitability of the serverless computing model for various classes of cloud applications and the potential challenges a user may face in porting a particular application to this computing model. In this section, we take a closer look at some of those challenges and present our ideas for addressing them. We particularly focus on the issues that a developer can address through application re-design and the limited control (one-time and online decisions) she has over serverless platforms. We believe that application decomposition can help a developer better design her serverless application, while parameter tuning can help with fine-tuning resources and making online decisions for individual serverless functions to obtain the desired performance. Lastly, a multi-cloud scenario can help applications with fluctuating demand without compromising cost or performance. Next, we discuss these challenges and possible solutions in more detail.

### 6.1 Parameter Tuning
In a serverless computing model, a user has limited control over the function's run-time environment, i.e. hardware, operating system, CPU type, etc. On commercial serverless platforms, a user can specify only a few configurable parameters, such as memory, CPU, and location, for a serverless function. As the measurement studies in Section 3 show, these configurable parameters can affect both the cost of cloud usage and the performance of serverless functions. Since serverless platforms do not provide any guarantee (SLA) on the performance of serverless functions, configuring these parameters becomes even more crucial to obtaining the desired application performance.

We propose research on designing feedback control systems, as illustrated in Figure 1, which continually monitor the performance of serverless applications and reconfigure these parameters on the fly if needed. There are several challenges in designing such systems: 1) serverless platforms have varying underlying infrastructure, resource provisioning policies, and sandboxing techniques, and every time a serverless function is invoked, even with the same configurable parameters, performance can vary based on the co-location of functions and the underlying resources, which makes the performance of a serverless function hard to predict; 2) our experience with GCF and Kubernetes Knative shows that there can be a significant delay in the feedback loop, i.e. between when the configuration is changed and when it takes effect (up to minutes, as mentioned in Section 3.3), and this excessive feedback delay can lead to performance instability, as the state of the system may change in the meantime; 3) the impact of changes in allocated resources on the performance of a serverless function varies with the underlying serverless platform. In our experiments, we noticed that while an increase in allocated memory/CPU improves the performance of a serverless function on AWS Lambda and GCF, it did not significantly affect performance on Apache OpenWhisk and IBM Cloud Functions. Maissen et al. [74] make a similar observation about IBM Cloud Functions.

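As a minimal sketch of such a controller (the monitoring and reconfiguration calls are hypothetical placeholders for whatever API the platform exposes), one can periodically compare observed tail latency against the SLO, step the memory allocation accordingly, and then wait out the configuration-propagation delay noted above:

```python
import time

MEMORY_STEPS_MB = [128, 256, 512, 1024, 2048]

def control_loop(fn_name, slo_ms, platform, settle_s=300):
    idx = 2  # start from a middle configuration
    while True:
        p95 = platform.get_p95_latency_ms(fn_name)  # hypothetical monitoring API
        if p95 > slo_ms and idx < len(MEMORY_STEPS_MB) - 1:
            idx += 1   # SLO violated: allocate more memory/CPU
        elif p95 < 0.5 * slo_ms and idx > 0:
            idx -= 1   # comfortably within SLO: scale down to save cost
        platform.set_memory_mb(fn_name, MEMORY_STEPS_MB[idx])  # hypothetical config API
        time.sleep(settle_s)  # wait for the new configuration to take effect
```
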
Previously, a number of proposals have suggested various offline and online techniques to configure these parameters. Costless [53], given a workflow consisting of multiple functions, proposes a technique to efficiently distribute these functions across the edge and core cloud while reducing the cost of cloud usage and meeting the performance requirement. This approach relies on (one-time) profiling of the performance of each serverless function in the workflow under the possible memory configurations and suggests suitable parameters (memory) based on the profiling results; however, it fails to capture the dynamicity of the execution model. In [88], the authors show that the per-container concurrency limit in Knative can affect the throughput and latency of serverless functions, and they suggest a reinforcement learning-based approach to find the optimal concurrency limit for a given deployment of the application. Even though this approach is adaptive, it only targets the concurrency limit, whereas, as discussed earlier, other parameters such as memory, CPU, and location can also impact performance. Moreover, the authors do not address the feedback delay issue, which for Knative, in our experience, can be up to several minutes depending on the configuration. Sizeless [51] uses resource-consumption data from thousands of synthetic serverless functions to build a representative performance model; using this model and the performance logs of the target function, it suggests the best memory configuration. This approach may incur significant cost overhead for running thousands of synthetic functions to gather the required data, requires changes to the serverless application to collect the performance logs, and only targets memory configuration for functions written in Node.js and deployed on AWS Lambda.

COSE [33] is an online statistical learning technique to configure various configurable parameters for delay-bounded chains of serverless functions or single functions. COSE not only achieves the desired performance for a serverless application but also reduces the cost of cloud usage. It can capture the dynamic changes in the execution model stemming from co-location and variable underlying infrastructure. Currently, COSE only configures memory and location (edge or core) for serverless functions on AWS Lambda but can be easily extended to configure other parameters such as concurrency and CPU power. COSE can be easily adapted for other parameters and platforms because it works as a stand-alone system that requires no changes to the serverless application. It retrieves the execution logs of a serverless function from the serverless platform and configures it with optimal/near-optimal parameter configurations. Hence, COSE can be extended to any platform that provides an API to retrieve the execution logs and configure parameters for the serverless function.
### 6.2 Decomposing Serverless Applications
Over the past decade, major commercial cloud providers have introduced their own serverless platforms. These platforms offer diverse features, e.g. elasticity limits, supported languages, configurable parameters, and pricing models. Moreover, as we have seen in Section 3, these platforms have varying underlying infrastructure and resource provisioning policies [100]. As a result, the performance and cost of the same application can vary significantly across serverless platforms. In [39], the authors show that serverless functions with different bottlenecks, such as memory and computation, each have an ideal serverless platform on which they perform best; serverless platforms are not one-size-fits-all. For an application comprising multiple serverless functions with varying compute, memory, and I/O bottlenecks, one platform may not suit all of the individual functions. We suggest investigating this idea further: automated tools could help developers decompose their application into multiple serverless functions and then find the ideal serverless platform for each one. This may require sophisticated code analysis tools [42] and measurement tools [74, 102] that benchmark serverless platforms for different kinds of workloads/computations.

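A sketch of the selection step, assuming per-(function, platform) cost and latency estimates have already been obtained from benchmarking (the numbers below are hypothetical): assign each decomposed function independently to the cheapest platform that meets its latency bound.

```python
def assign_platforms(functions, estimates, latency_bound_ms):
    """estimates[(fn, platform)] -> (cost_per_1M_invocations, p95_latency_ms)."""
    assignment = {}
    for fn in functions:
        feasible = [(cost, platform)
                    for (f, platform), (cost, p95) in estimates.items()
                    if f == fn and p95 <= latency_bound_ms[fn]]
        assignment[fn] = min(feasible)[1]  # cheapest platform meeting the bound
    return assignment

# Hypothetical benchmark results for two functions on two platforms.
estimates = {("resize", "aws"): (0.80, 120), ("resize", "gcf"): (0.60, 180),
             ("encode", "aws"): (2.10, 400), ("encode", "gcf"): (2.50, 350)}
print(assign_platforms(["resize", "encode"], estimates,
                       {"resize": 150, "encode": 450}))
# -> {'resize': 'aws', 'encode': 'aws'}
```
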
Moreover, serverless platforms allow users to configure resources for each component of an application (if deployed as separate serverless functions), which is not possible for a monolithic application deployed on a VM. In [102], the authors show that decomposing a monolithic application into multiple micro-services, instead of deploying the whole application as one unit, can lead to significant performance and cost gains, and they demonstrate an example application where decomposition yields better performance at lower cost. We also believe that decomposing an application would allow developers to cost-effectively fine-tune resources for the various parts of the application.

To the best of our knowledge, no previous work suggests decomposing monolithic serverless applications to optimize cost or performance. Costless [53] is the closest approach: it deploys a serverless application split across two platforms (edge and core), but it assumes that the application is already decomposed into multiple serverless functions.

### 6.3 Multi-Cloud Usage
Serverless functions are executed in lightweight sandbox environments, which can be launched in as little as tens of milliseconds, so an application that experiences a sudden increase in demand can seamlessly scale out to meet it. Previous approaches, such as MArk [103], Spock [59], and FEAT [81], leverage this feature of serverless computing to mask SLA violations for cloud applications deployed on traditional cloud services such as VMs. These approaches redirect a portion of the demand to the serverless counterpart of the application while traditional cloud resources, which can take minutes to start, are scaled up. They may improve the performance of an application by reducing the number of SLA violations during scaling, at the expense of the substantial development cost of building the serverless counterpart of the application. To reduce this development cost, a developer can employ an automated approach to build the serverless version of the application, similar to the approach suggested in [93]. Another limitation of these approaches is that they suggest a one-time configuration of resources for the serverless version of the application, which can lead to variations in performance, as explained in Section 6.1. Since the goal of such approaches is to reduce SLA violations, this variation in performance can adversely affect the application.


|
| 302 |
+
|
| 303 |
+
Figure 4: A balanced approach
We believe that, in addition to performance, serverless computing also offers a unique pricing model, and as discussed in Section 4, it can be cost-effective for certain demand patterns. For applications with large variations in demand, deploying on VMs during periods of low demand leads to sub-optimal cost. We propose building a hybrid framework (the Load Balancer illustrated in Figure 4) that leverages both of the aforementioned features of serverless computing, i.e. its performance and its economic model, by using serverless computing as: (1) an alternative to traditional cloud resources for a certain portion of the demand (consistently, instead of only while scaling up VM resources), and (2) a fallback for the application when demand is below the break-even point (BEP). To address the performance uncertainty of the serverless platform, we suggest that, in addition to the multi-cloud framework, the developer also employ more proactive approaches, similar to COSE [33], to configure resources for the serverless counterpart of the application. COSE suggests a configuration for a serverless application that not only reduces the cost of cloud usage but also meets the specified SLA.

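A sketch of the routing rule such a load balancer could apply (the break-even point and VM capacity are inputs from the cost model; the figures below are assumed): below the BEP, everything goes to the serverless deployment; above it, VMs serve the baseline and serverless absorbs the overflow while additional VMs boot.

```python
def route(request_rate, bep_rate, vm_capacity):
    # Split an incoming request rate (req/s) between the two backends.
    if request_rate <= bep_rate:
        # Below break-even, serverless is the cheaper deployment: use it alone.
        return {"vm": 0, "serverless": request_rate}
    # Above break-even, VMs serve the baseline; serverless absorbs the transient
    # overflow while additional VM capacity is being provisioned.
    to_vm = min(request_rate, vm_capacity)
    return {"vm": to_vm, "serverless": request_rate - to_vm}

print(route(50, bep_rate=100, vm_capacity=400))   # all traffic to serverless
print(route(500, bep_rate=100, vm_capacity=400))  # 100 req/s overflow to serverless
```
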
We also believe that serverless functions can be used as an alternative to VMs for offloading the lightweight computations in a distributed application such as a scientific workflow [76]: small tasks requiring more concurrency and elasticity can be implemented as serverless functions, while tasks with longer computation times and larger resource needs stay on VMs. One can use the "utilization" of a computation, i.e. how long it runs and how often it executes, to decide whether it should be directed to (and executed on) a dedicated VM or a serverless platform. The problem is then how to optimally distribute computations to minimize the total cost. This is challenging given the inherent performance-cost trade-offs: VMs are cheaper for high-utilization (long-running and frequent) computations, while serverless platforms are cheaper for low-utilization (short-running and infrequent) computations and have the advantage of elasticity.

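The utilization-based placement rule can be sketched as a per-task cost comparison (again with illustrative, assumed prices): a dedicated VM costs the same whether or not the task runs, while the FaaS cost scales with invocations, duration, and memory.

```python
def placement(task, vm_hourly=0.05, faas_per_gb_s=0.0000166667, hours=730):
    vm_cost = vm_hourly * hours  # billed regardless of utilization
    faas_cost = (task["runs_per_month"] * task["duration_s"]
                 * task["memory_gb"] * faas_per_gb_s)
    return "vm" if vm_cost < faas_cost else "faas"

# High-utilization task: long-running and frequent -> keep it on a VM.
print(placement({"runs_per_month": 3_000_000, "duration_s": 1.0, "memory_gb": 1.0}))
# Low-utilization task: short and infrequent -> a serverless function is cheaper.
print(placement({"runs_per_month": 10_000, "duration_s": 0.2, "memory_gb": 0.5}))
```
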
Finally, developers have indeed started to leverage services from different cloud providers. A case study is presented in [16], where an invoicing application is built using various best-in-class services from different commercial cloud providers. The application is built using Google's AI and image recognition services along with two of Amazon's services (Lambda and API Gateway).
## 7 Conclusion
Serverless computing has gained significant popularity in recent years. It offers an easy development model and a managed back-end, along with key performance benefits and a "pay as you go" pricing model. A significant body of research addresses various aspects of serverless computing, such as benchmarking and improving the performance of commercial and open-source serverless platforms, new virtualization techniques for the execution environment, and studying the feasibility of serverless computing for a variety of cloud applications. In this paper, we look at these studies from an application developer's perspective and discuss how they can help her make informed decisions about her serverless application. We argue that serverless computing is becoming a popular choice for deploying various cloud applications because of its distinct cost and performance benefits. While serverless adoption is picking up pace, a number of challenges still need to be addressed. We identify potential challenges and open issues that must be resolved to make serverless computing a viable option for deploying cloud applications. We argue that proactive approaches to configuring resources for serverless functions can address the performance uncertainty issue, while frameworks that decompose serverless applications and leverage multiple cloud services at once can reduce operational cost as well as enhance the performance of cloud applications.

## References
[1] Amazon Lambda. https://docs.aws.amazon.com/lambda. Access Date: Feb 23, 2021.

[2] Amazon Lambda at Edge. https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html. Access Date: Feb 23, 2021.

[3] Amazon Lambda Pricing. https://aws.amazon.com/lambda/pricing/. Access Date: Feb 23, 2021.

[4] Amazon Linux. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/amazon-linux-ami-basics.html. Access Date: Feb 23, 2021.

[5] Avoid Cold Start. https://www.serverless.com/blog/keep-your-lambdas-warm. Access Date: Feb 23, 2021.

[6] AWS Lambda CPU-share. https://docs.aws.amazon.com/lambda/latest/dg/resource-model.html. Access Date: Feb 23, 2021.

[7] AWS Lambda with Alexa. https://docs.aws.amazon.com/lambda/latest/dg/services-alexa.html. Access Date: Feb 23, 2021.

[8] Azure Function Premium Plan. https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale. Access Date: Feb 23, 2021.

[9] Azure Function Pricing. https://azure.microsoft.com/en-us/pricing/details/functions/. Access Date: Feb 23, 2021.

[10] Azure Function Scaling. https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale. Access Date: Feb 23, 2021.

[11] Azure Function VM-concurrency. https://docs.microsoft.com/en-us/azure/azure-functions/durable/durable-functions-perf-and-scale. Access Date: Feb 23, 2021.

[12] Azure Functions. https://azure.microsoft.com/en-us/services/functions/. Access Date: Feb 23, 2021.

[13] Economics of Serverless. https://www.bbva.com/en/economics-of-serverless/. Access Date: Feb 23, 2021.

[14] Functions Lifetime. https://mikhail.io/serverless/coldstarts/. Access Date: Feb 27, 2021.

[15] GCF Cold Start. https://cloud.google.com/functions/docs/bestpractices/tips. Access Date: Feb 23, 2021.

[16] Google + AWS. https://serverless.com/blog/shamrock-transacts-billions/. Access Date: Feb 23, 2021.

[17] Google Cloud Function Pricing. https://cloud.google.com/functions/pricing. Access Date: Feb 23, 2021.

[18] Google Cloud Functions. https://cloud.google.com/functions. Access Date: Feb 23, 2021.

[19] IBM Cloud Functions. https://www.ibm.com/cloud/functions. Access Date: Feb 23, 2021.

[20] IBM Function Pricing. https://cloud.ibm.com/functions/learn/pricing. Access Date: Feb 23, 2021.

[21] Knative Scale to Zero. https://knative.dev/docs/serving/autoscaling/scale-to-zero/. Access Date: Feb 23, 2021.

[22] Kubernetes Knative. https://cloud.google.com/knative. Access Date: Feb 23, 2021.

[23] Kubernetes Knative Concurrency. https://knative.dev/docs/serving/autoscaling/concurrency/. Access Date: Feb 23, 2021.

[24] Lambda Cold Start. https://lumigo.io/aws-lambda-performance-optimization/how-to-improve-aws-lambda-cold-start-performance/. Access Date: Feb 23, 2021.

[25] Lambda Elasticity. https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html. Access Date: Feb 23, 2021.

[26] Montage Workflow. http://montage.ipac.caltech.edu/. Access Date: Feb 23, 2021.

[27] OpenWhisk Concurrency. https://github.com/apache/openwhisk/blob/master/docs/concurrency.md. Access Date: Feb 23, 2021.

[28] Provisioned Concurrency. https://aws.amazon.com/blogs/aws/new-provisioned-concurrency-for-lambda-functions/. Access Date: Feb 23, 2021.

[29] Resources and Lambda Lifetime. https://read.acloud.guru/how-long-does-aws-lambda-keep-your-idle-functions-around-before-a-cold-start-bf715d3b810. Access Date: Feb 23, 2021.

[30] Serverless for Stream. https://aws.amazon.com/lambda/data-processing/. Access Date: Feb 23, 2021.

[31] Serverless Warm up Plugin. https://github.com/FidelLimited/serverless-plugin-warmup. Access Date: Feb 23, 2021.

[32] P. Aditya, I. E. Akkus, A. Beck, R. Chen, V. Hilt, I. Rimac, K. Satzke, and M. Stein. Will serverless computing revolutionize NFV? Proceedings of the IEEE, 107(4):667-678, 2019.

[33] N. Akhtar, A. Raza, V. Ishakian, and I. Matta. COSE: Configuring serverless functions using statistical learning. In IEEE INFOCOM 2020 - IEEE Conference on Computer Communications, pages 129-138, 2020.

[34] Istemi Ekin Akkus, Ruichuan Chen, Ivica Rimac, Manuel Stein, Klaus Satzke, Andre Beck, Paarijaat Aditya, and Volker Hilt. SAND: Towards high-performance serverless computing. In 2018 USENIX Annual Technical Conference (USENIX ATC 18), pages 923-935, Boston, MA, July 2018. USENIX Association.

[35] Omid Alipourfard, Hongqiang Harry Liu, Jianshu Chen, Shivaram Venkataraman, Minlan Yu, and Ming Zhang. CherryPick: Adaptively unearthing the best cloud configurations for big data analytics. In 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), pages 469-482, Boston, MA, March 2017. USENIX Association.

[36] Leonardo Aniello, Silvia Bonomi, Federico Lombardi, Alessandro Zelli, and Roberto Baldoni. An architecture for automatic scaling of replicated services. In Networked Systems (NETYS), volume 8593. Springer, August 2014.

[37] Lixiang Ao, Liz Izhikevich, Geoffrey M. Voelker, and George Porter. Sprocket: A serverless video processing framework. In Proceedings of the ACM Symposium on Cloud Computing, SoCC '18, pages 263-274, New York, NY, USA, 2018. ACM.

[38] Arda Aytekin and Mikael Johansson. Harnessing the power of serverless runtimes for large-scale optimization. arXiv preprint arXiv:1901.03161, 2019.

[39] Timon Back. Hybrid serverless and virtual machine deployment model for cost minimization of cloud applications. Master's thesis, Faculty of Science and Engineering, University of Groningen, 2018.

[40] Ioana Baldini, Paul Castro, Kerry Chang, Perry Cheng, Stephen Fink, Vatche Ishakian, Nick Mitchell, Vinod Muthusamy, Rodric Rabbah, Aleksander Slominski, and Philippe Suter. Serverless Computing: Current Trends and Open Problems, pages 1-20. Springer Singapore, Singapore, 2017.

[41] Bartosz Balis. HyperFlow: A model of computation, programming approach and enactment engine for complex distributed workflows. Future Generation Computer Systems, 55, September 2015.

[42] Emery D. Berger. Scalene: Scripting-language aware profiling for Python. arXiv preprint arXiv:2006.03879, 2020.

[43] James Cadden, Thomas Unger, Yara Awad, Han Dong, Orran Krieger, and Jonathan Appavoo. SEUSS: Skip redundant paths to make serverless fast. In Proceedings of the Fifteenth European Conference on Computer Systems, EuroSys '20, New York, NY, USA, 2020. Association for Computing Machinery.

[44] J. Carreira, P. Fonseca, Alexey Tumanov, A. Zhang, and R. Katz. Cirrus: A serverless framework for end-to-end ML workflows. In Proceedings of the ACM Symposium on Cloud Computing, 2019.

[45] Paul Castro, Vatche Ishakian, Vinod Muthusamy, and Aleksander Slominski. The rise of serverless computing. Commun. ACM, 62(12):44-54, November 2019.

[46] S. R. Chaudhry, A. Palade, A. Kazmi, and S. Clarke. Improved QoS at the edge using serverless computing to deploy virtual network functions. IEEE Internet of Things Journal, 7(10):10673-10683, 2020.

[47] B. Cheng, J. Fuerst, G. Solmaz, and T. Sanada. Fog function: Serverless fog computing for data intensive IoT services. In 2019 IEEE International Conference on Services Computing (SCC), pages 28-35, 2019.

[48] Christina Delimitrou and Christos Kozyrakis. QoS-aware scheduling in heterogeneous datacenters with Paragon. ACM Trans. Comput. Syst., 31(4):12:1-12:34, December 2013.

[49] Christina Delimitrou and Christos Kozyrakis. Quasar: Resource-efficient and QoS-aware cluster management. In Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '14, pages 127-144, New York, NY, USA, 2014. ACM.

[50] Dong Du, Tianyi Yu, Yubin Xia, Binyu Zang, Guanglu Yan, Chenggang Qin, Qixuan Wu, and Haibo Chen. Catalyzer: Sub-millisecond startup for serverless computing with initialization-less booting. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS '20, pages 467-481, New York, NY, USA, 2020. Association for Computing Machinery.

[51] Simon Eismann, Long Bui, Johannes Grohmann, Cristina L. Abad, Nikolas Herbst, and Samuel Kounev. Sizeless: Predicting the optimal size of serverless functions. arXiv preprint arXiv:2010.15162, 2020.

[52] Simon Eismann, Joel Scheuner, Erwin van Eyk, Maximilian Schwinger, Johannes Grohmann, Nikolas Herbst, Cristina L. Abad, and Alexandru Iosup. A review of serverless use cases and their characteristics. arXiv preprint arXiv:2008.11110, 2021.

[53] T. Elgamal. Costless: Optimizing cost of serverless computing through function fusion and placement. In 2018 IEEE/ACM Symposium on Edge Computing (SEC), pages 300-312, 2018.

[54] W. Fang, Z. Lu, J. Wu, and Z. Cao. RPPS: A novel resource prediction and provisioning scheme in cloud data center. In 2012 IEEE SCC, Honolulu, HI.

[55] Lang Feng, Prabhakar Kudva, Dilma Silva, and Jiang Hu. Exploring serverless computing for neural network training. In 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), pages 334-341, July 2018.

[56] Sadjad Fouladi, Riad S. Wahby, Brennan Shacklett, Karthikeyan Vasuki Balasubramaniam, William Zeng, Rahul Bhalerao, Anirudh Sivaraman, George Porter, and Keith Winstein. Encoding, fast and slow: Low-latency video processing using thousands of tiny threads. In 14th USENIX Symposium on Networked Systems Design and Implementation (NSDI 17), pages 363-376, Boston, MA, March 2017. USENIX Association.

[57] Geoffrey C. Fox, Vatche Ishakian, Vinod Muthusamy, and Aleksander Slominski. Status of serverless computing and Function-as-a-Service (FaaS) in industry and research. arXiv preprint arXiv:1708.08028, 2017.

[58] V. Giménez-Alventosa, Germán Moltó, and Miguel Caballer. A framework and a performance assessment for serverless MapReduce on AWS Lambda. Future Generation Computer Systems, 97, March 2019.

[59] J. R. Gunasekaran, P. Thinakaran, M. T. Kandemir, B. Urgaonkar, G. Kesidis, and C. Das. Spock: Exploiting serverless functions for SLO and cost aware resource procurement in public cloud. In 2019 IEEE 12th International Conference on Cloud Computing (CLOUD), pages 199-208, July 2019.

[60] Joseph M. Hellerstein, Jose Faleiro, Joseph E. Gonzalez, Johann Schleier-Smith, Vikram Sreekanti, Alexey Tumanov, and Chenggang Wu. Serverless computing: One step forward, two steps back. arXiv preprint arXiv:1812.03651, 2018.

[61] Benjamin Hindman, Andy Konwinski, Matei Zaharia, Ali Ghodsi, Anthony D. Joseph, Randy Katz, Scott Shenker, and Ion Stoica. Mesos: A platform for fine-grained resource sharing in the data center. In Proceedings of the 8th USENIX Conference on Networked Systems Design and Implementation, NSDI '11, pages 295-308, Berkeley, CA, USA, 2011. USENIX Association.

[62] Ling-Hong Hung, Dimitar Kumanov, Xingzhi Niu, Wes Lloyd, and Ka Yee Yeung. Rapid RNA sequencing data analysis using serverless computing. bioRxiv, 2019.

[63] V. Ishakian, V. Muthusamy, and A. Slominski. Serving deep learning models in a serverless platform. In 2018 IEEE International Conference on Cloud Engineering (IC2E), pages 257-262, 2018.

[64] Eric Jonas, Johann Schleier-Smith, Vikram Sreekanti, Chia-Che Tsai, Anurag Khandelwal, Qifan Pu, Vaishaal Shankar, Joao Menezes Carreira, Karl Krauth, Neeraja Yadwadkar, Joseph Gonzalez, Raluca Ada Popa, Ion Stoica, and David A. Patterson. Cloud programming simplified: A Berkeley view on serverless computing. Technical Report UCB/EECS-2019-3, February 2019.

[65] Dimitar Kumanov, Ling-Hong Hung, Wes Lloyd, and Ka Yee Yeung. Serverless computing provides on-demand high performance computing for biomedical research. arXiv preprint arXiv:1807.11659, 2018.

[66] Benjamin D. Lee, Michael A. Timony, and Pablo Ruiz. DNAvisualization.org: A serverless web tool for DNA sequence visualization. Nucleic Acids Research, 47(W1):W20-W25, June 2019.

[67] H. Lee, K. Satyam, and G. Fox. Evaluation of production serverless computing environments. In 2018 IEEE 11th International Conference on Cloud Computing (CLOUD), pages 442-450, Los Alamitos, CA, USA, July 2018. IEEE Computer Society.

[68] Ping-Min Lin and Alex Glikson. Mitigating cold starts in serverless platforms: A pool-based approach. arXiv preprint arXiv:1903.12221, 2019.

[69] W. Lloyd, S. Ramesh, S. Chinthalapati, L. Ly, and S. Pallickara. Serverless computing: An investigation of factors influencing microservice performance. In 2018 IEEE International Conference on Cloud Engineering (IC2E), pages 159-169, April 2018.

[70] W. Lloyd, Minh Vu, Baojia Zhang, O. David, and G. Leavesley. Improving application migration to serverless computing platforms: Latency mitigation with keep-alive workloads. In 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion), pages 195-200, 2018.

[71] A. Luckow and S. Jha. Performance characterization and modeling of serverless and HPC streaming applications. In 2019 IEEE International Conference on Big Data (Big Data), pages 5688-5696, 2019.

[72] T. Lynn, P. Rosati, A. Lejeune, and V. Emeakaroha. A preliminary review of enterprise serverless cloud computing (function-as-a-service) platforms. In 2017 IEEE International Conference on Cloud Computing Technology and Science (CloudCom), pages 162-169, 2017.

[73] A. H. Mahmud, Y. He, and S. Ren. BATS: Budget-constrained autoscaling for cloud performance optimization. In 2015 IEEE MASCOTS, Atlanta, GA.

[74] Pascal Maissen, Pascal Felber, Peter Kropf, and Valerio Schiavoni. FaaSdom: A benchmark suite for serverless computing. In Proceedings of the 14th ACM International Conference on Distributed and Event-Based Systems, DEBS '20, pages 73-84, New York, NY, USA, 2020. Association for Computing Machinery.

[75] Maciej Malawski. Towards serverless execution of scientific workflows - HyperFlow case study. In WORKS@SC, 2016.

[76] Maciej Malawski, Adam Gajek, Adam Zima, Bartosz Balis, and Kamil Figiela. Serverless execution of scientific workflows: Experiments with HyperFlow, AWS Lambda and Google Cloud Functions. Future Generation Computer Systems, November 2017.

[77] J. Manner, M. Endreß, T. Heckel, and G. Wirtz. Cold start influencing factors in function as a service. In 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion), pages 181-188, December 2018.

[78] G. McGrath and P. R. Brenner. Serverless computing: Design, implementation, and performance. In 2017 IEEE 37th International Conference on Distributed Computing Systems Workshops (ICDCSW), pages 405-410, 2017.

[79] Anup Mohan, Harshad Sane, Kshitij Doshi, Saikrishna Edupuganti, Naren Nayak, and Vadim Sukhomlinov. Agile cold starts for scalable serverless. In 11th USENIX Workshop on Hot Topics in Cloud Computing (HotCloud 19), Renton, WA, July 2019. USENIX Association.

[80] A. Y. Nikravesh, S. A. Ajila, and C. Lung. Towards an autonomic auto-scaling prediction system for cloud resource provisioning. In 2015 IEEE/ACM SEAMS, Firenze, Italy.

[81] J. H. Novak, S. K. Kasera, and R. Stutsman. Cloud functions for fast and robust resource auto-scaling. In 2019 11th International Conference on Communication Systems & Networks (COMSNETS), pages 133-140, January 2019.

[82] Edward Oakes, Leon Yang, Dennis Zhou, Kevin Houck, Tyler Harter, Andrea Arpaci-Dusseau, and Remzi Arpaci-Dusseau. SOCK: Rapid task provisioning with serverless-optimized containers. In 2018 USENIX Annual Technical Conference (USENIX ATC 18), pages 57-70, Boston, MA, July 2018. USENIX Association.

[83] Pangkaj Paul, John Loane, Fergal Mccaffery, and Gilbert Regan. A Serverless Architecture for Wireless Body Area Network Applications, pages 239-254. October 2019.

[84] Per Persson and Ola Angelsmark. Kappa: Serverless IoT deployment. In Proceedings of the 2nd International Workshop on Serverless Computing, WoSC '17, pages 16-21, New York, NY, USA, 2017. Association for Computing Machinery.

[85] Duarte Pinto, Joao Pedro Dias, and Hugo Sereno Ferreira. Dynamic allocation of serverless functions in IoT environments. In 2018 IEEE 16th International Conference on Embedded and Ubiquitous Computing (EUC), October 2018.

[86] Qifan Pu, Shivaram Venkataraman, and Ion Stoica. Shuffling, fast and slow: Scalable analytics on serverless infrastructure. In 16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19), pages 193-206, Boston, MA, February 2019. USENIX Association.

[87] N. Roy, A. Dubey, and A. Gokhale. Efficient autoscaling in the cloud using predictive models for workload forecasting. In 2011 IEEE 4th International Conference on Cloud Computing, pages 500-507, 2011.

[88] Lucia Schuler, Somaya Jamil, and Niklas Kühl. AI-based resource allocation: Reinforcement learning for adaptive auto-scaling in serverless environments. arXiv preprint arXiv:2005.14410, 2020.

[89] Hossein Shafiei, Ahmad Khonsari, and Payam Mousavi. Serverless computing: A survey of opportunities, challenges and applications. arXiv preprint arXiv:1911.01296, 2019.

[90] Mohammad Shahrad, Rodrigo Fonseca, Inigo Goiri, Gohar Chaudhry, Paul Batum, Jason Cooke, Eduardo Laureano, Colby Tresness, Mark Russinovich, and Ricardo Bianchini. Serverless in the wild: Characterizing and optimizing the serverless workload at a large cloud provider. In 2020 USENIX Annual Technical Conference (USENIX ATC 20), pages 205-218. USENIX Association, July 2020.

[91] Vaishaal Shankar, Karl Krauth, Qifan Pu, Eric Jonas, Shivaram Venkataraman, Ion Stoica, Benjamin Recht, and Jonathan Ragan-Kelley. numpywren: Serverless linear algebra. arXiv preprint arXiv:1810.09679, 2018.

[92] Simon Shillaker and Peter Pietzuch. Faasm: Lightweight isolation for efficient stateful serverless computing. In 2020 USENIX Annual Technical Conference (USENIX ATC 20), pages 419-433. USENIX Association, July 2020.

[93] Josef Spillner. Transformation of Python applications into function-as-a-service deployments. arXiv preprint arXiv:1705.08169, 2017.

[94] Vikram Sreekanti, Chenggang Wu, Xiayue Charles Lin, Johann Schleier-Smith, Joseph E. Gonzalez, Joseph M. Hellerstein, and Alexey Tumanov. Cloudburst. Proceedings of the VLDB Endowment, 13(12):2438-2452, August 2020.

[95] Y. Tang and J. Yang. Lambdata: Optimizing serverless computing by making data intents explicit. In 2020 IEEE 13th International Conference on Cloud Computing (CLOUD), pages 294-303, 2020.

[96] Zhucheng Tu, Mengping Li, and Jimmy Lin. Pay-per-request deployment of neural network models using serverless architectures. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 6-10, New Orleans, Louisiana, June 2018. Association for Computational Linguistics.

[97] Erwin van Eyk, Alexandru Iosup, Simon Seif, and Markus Thömmes. The SPEC Cloud Group's research vision on FaaS and serverless architectures. In Proceedings of the 2nd International Workshop on Serverless Computing, WoSC '17, pages 1-4, New York, NY, USA, 2017. Association for Computing Machinery.

[98] Abhishek Verma, Luis Pedrosa, Madhukar Korupolu, David Oppenheimer, Eric Tune, and John Wilkes. Large-scale cluster management at Google with Borg. In Proceedings of the Tenth European Conference on Computer Systems, EuroSys '15, pages 18:1-18:17, New York, NY, USA, 2015. ACM.

[99] H. Wang, D. Niu, and B. Li. Distributed machine learning with a serverless architecture. In IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, pages 1288-1296, 2019.

[100] Liang Wang, Mengyuan Li, Yinqian Zhang, Thomas Ristenpart, and Michael Swift. Peeking behind the curtains of serverless platforms. In 2018 USENIX Annual Technical Conference (USENIX ATC 18), pages 133-146, Boston, MA, 2018. USENIX Association.

[101] S. Werner, J. Kuhlenkamp, M. Klems, J. Müller, and S. Tai. Serverless big data processing using matrix multiplication as example. In 2018 IEEE International Conference on Big Data (Big Data), pages 358-365, 2018.

[102] Tianyi Yu, Qingyuan Liu, Dong Du, Yubin Xia, Binyu Zang, Ziqian Lu, Pingchao Yang, Chenggang Qin, and Haibo Chen. Characterizing serverless platforms with ServerlessBench. In Proceedings of the 11th ACM Symposium on Cloud Computing, SoCC '20, pages 30-44, New York, NY, USA, 2020. Association for Computing Machinery.

[103] Chengliang Zhang, Minchen Yu, Wei Wang, and Feng Yan. MArk: Exploiting cloud services for cost-effective, SLO-aware machine learning inference serving. In 2019 USENIX Annual Technical Conference (USENIX ATC 19), pages 1049-1062, Renton, WA, July 2019. USENIX Association.

[104] Miao Zhang, Yifei Zhu, Cong Zhang, and Jiangchuan Liu. Video processing with serverless computing: A measurement study. In Proceedings of the 29th ACM Workshop on Network and Operating Systems Support for Digital Audio and Video, NOSSDAV '19, pages 61-66, New York, NY, USA, 2019. ACM.
papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/VdWaMgaTKtX/Initial_manuscript_tex/Initial_manuscript.tex

§ SERVERLESS COMPUTING: FROM AN APPLICATION DEVELOPER’S PERSPECTIVE
Anonymous authors
Paper under double-blind review
§ ABSTRACT
In the past few years, serverless computing has gained significant popularity and become a go-to choice for deploying cloud applications and micro-services. Serverless computing, with its unique 'pay as you go' pricing model and key performance benefits over other cloud services, offers an easy and intuitive programming model for building cloud applications. In this model, a developer focuses on writing the code of the application, while infrastructure management is left to the cloud provider, who is responsible for the underlying resources, security, isolation, and scaling of the application. Recently, a number of commercial and open-source serverless platforms have emerged, offering a wide range of features to application developers. In this paper, we first present measurement studies demystifying the features and performance of commercial and open-source serverless platforms, which can help developers deploy and configure their serverless applications. Second, we discuss the distinct performance and cost benefits of serverless computing and present a set of potential applications that can leverage the performance, the cost, or both aspects of serverless computing. Finally, we discuss future research directions for serverless computing and suggest building tools and technologies that would not only make serverless usage efficient but also accelerate serverless adoption.

§ 1 INTRODUCTION
Serverless computing has emerged as a new paradigm that makes the cloud-based application development model simple and hassle-free. In the serverless model, an application developer focuses on writing code and producing new features without worrying about infrastructure management, which is left to the cloud provider. Serverless computing was first introduced by Amazon in 2014 as Amazon Lambda [1], and since then, other commercial cloud providers have introduced their own serverless platforms, namely Google Cloud Function (GCF) [18] from Google, Azure Function [12] from Microsoft, and IBM Cloud Function [19] from IBM. There are also several open-source projects such as Apache OpenWhisk, Knative, OpenLambda, Fission, and others.

At the inception of the Internet, applications were built and deployed on dedicated hardware acting as servers, which needed a high degree of maintenance and often led to under-utilization of resources [48, 49]. Moreover, adding/removing physical resources to scale to varying demand, and debugging an application, was a cumbersome task. Under-utilization of resources and the high cost of maintenance led to the invention of new technologies such as virtualization and container-based approaches. These approaches not only increased resource utilization but also made it easy to develop, deploy, and manage applications. Tools such as [48, 49, 61, 98] were built to help users orchestrate resources and manage applications. Although virtualization and container-based approaches led to higher resource utilization and ease of building applications, developers still had to manage and scale the underlying infrastructure of an application, i.e. virtual machines (VMs) or containers, despite the availability of a number of approaches for reactive or predictive scaling [36, 54, 73, 80, 87, 103]. To abstract away the complexities of infrastructure management and application scaling, serverless computing emerged as a new paradigm for building, deploying, and managing cloud applications. The serverless computing model allows a developer to focus on writing code in a high-level language (as shown in Table 1) and producing new features of the application, while leaving logistical aspects such as server configuration, management, and maintenance to the serverless platform [100].

Even though serverless computing has been around for only a few years, this field has produced a significant volume of research. This research addresses various aspects of serverless computing from benchmarking/improving the performance of various serverless platforms/applications, porting new applications into a serverless model, to suggesting altogether new serverless platforms. As serverless computing is still an evolving field, there is a significant need for systematization of the knowledge (SoK) particularly from the perspective of an application developer. We believe that for an application developer, an ideal SoK paper should address three main aspects: 1) current state of serverless platforms, e. $g$ . performance and features,2) what makes serverless computing ideal for certain classes of applications, and 3) and future research directions for helping a developer leverage the full potential of serverless computing with her limited control over the serverless platform.
Previous SoK papers are written from the perspective of the service provider. Castro et al. [45] present an overview of serverless computing and discuss the serverless architecture, development, and deployment model. Hellerstein et al. and Jonas et al. [40, 60, 64] also provide an overview of serverless computing, and discuss potential challenges that a serverless provider should address for the popularization of serverless computing. Similarly, in [89], challenges and potential research directions for serverless computing are discussed. Eismann et al. [52] perform a systematic review of serverless applications and provide useful insights into the current usage of serverless platforms. Eyk et al. [97] give perspectives on how serverless computing can evolve and identify adoption challenges. Lynn et al. [72] give an overview of various features provided by popular serverless platforms. The aforementioned works take the perspective of a service provider and discuss the challenges and optimizations that the provider should address to improve and popularize the serverless platform.
Unlike previous work, in this paper, we take a closer look at the three aforementioned aspects of serverless computing from an application developer's perspective. We assess previous work related to measurements, performance improvement, and porting of applications into the serverless computing model, and augment this with our own experimental results and insights. We make the following contributions to the SoK:
* We categorize the decisions that an application developer can make during the life-cycle of an application into two categories, one-time decisions and online decisions, and discuss their performance and cost implications.
* We show that the quick provisioning time, on-demand scaling, and true "pay as you go" pricing model are key factors for serverless adoption for various classes of applications and discuss potential challenges.
* As future research directions, we propose building tools and strategies to tune serverless functions, decompose serverless applications, and use serverless computing in conjunction with other cloud services, which application developers can leverage to reduce cost.
The rest of the paper is organized as follows. We first describe the serverless computing model and its important features (Section 2). Next, we look at various measurement studies that investigate different aspects of commercial and open-source serverless platforms (Section 3). Then we present an economic model of serverless computing, compare it with traditional Infrastructure-as-a-Service (IaaS), and identify suitable classes of applications that can leverage serverless computing for its performance/cost (Sections 4 & 5). Lastly, we discuss future challenges and research directions to make serverless adoption efficient and easy (Section 6).
§ 2 BACKGROUND
Serverless computing was initially introduced to handle less frequent and background tasks, such as triggering an action when an infrequent update happens to a database. However, the ease of development, deployment, and management of an application and the evolution of commercial and open-source serverless platforms have intrigued the research community to study the feasibility of the serverless computing model for a variety of applications [57, 76, 103, 104]. Moreover, there are systems whose aim is to help developers port their applications to a serverless programming model [93].
In a serverless computing model, a developer implements the application logic in the form of stateless functions (henceforth referred to as serverless functions) in one of the high-level languages supported by the serverless platform (popular platforms are shown in Table 1). The code is then packaged together with its dependencies and submitted to the serverless platform. A developer can associate different triggers with each function, so that a trigger causes the execution of the function in a sandbox environment (mostly containers) with specified resources, i.e. memory, CPU-power, etc. The output of the serverless function is then returned as the response to the trigger. The serverless computing model differs from traditional dedicated servers or VMs in that these functions are launched only when the trigger is activated, while in the traditional model, the application is always running (hence the term "serverless").
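As a concrete illustration, the sketch below shows what such a function might look like in Python, following the AWS Lambda handler convention; the event payload and the `resize` helper are hypothetical placeholders, and other platforms use slightly different handler signatures.

```python
import json

def resize(url, width):
    # Hypothetical stand-in for the actual application logic,
    # e.g. fetching an image and scaling it to the given width.
    return f"{url}?w={width}"

def handler(event, context):
    # A trigger (HTTP request, queue message, storage event, ...)
    # delivers its payload in `event`; the platform injects `context`.
    body = json.loads(event.get("body", "{}"))
    result = resize(body.get("url", ""), body.get("width", 128))
    # The return value is handed back to whatever fired the trigger.
    return {"statusCode": 200, "body": json.dumps({"resized": result})}
```

The platform, not the developer, decides when and where this handler runs and how many instances of it exist at any moment.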
Serverless computing abstracts away the complexities of server management in two ways. First, a developer only writes the logic of an application in a high-level language, without worrying about the underlying resources or having to configure servers. Second, in case the demand for an application increases, a serverless platform scales up the instances of the application without any additional configuration or cost, and it has the ability to scale back to zero (discussed in Section 3.3). On the contrary, in IaaS, an application developer not only has to specify scaling policies but may also incur an additional cost for deploying such autoscaling services.
Another important feature of the serverless computing model is that serverless platforms follow the "pay as you go" pricing model. This means a user will only pay for the time a serverless function is running. This model charges a user for the execution time of the serverless function based on the resources configured for the function. A user will not be charged for deploying the function or for idle times. Even though all of the cloud providers follow a similar pricing model, the price for the unit time (100 ms or 1 ms) of execution can vary significantly from one cloud provider to another. In Table 1, we show some of the key features of popular serverless platforms.
In the serverless computing model, the abstraction of infrastructure management comes at the cost of little to no control over the execution environment (and underlying infrastructure) of the serverless functions. A user can control only a few configurable parameters, namely memory, CPU-power, and location. Since the introduction of serverless platforms, a large body of research has aimed to demystify the underlying infrastructure, resource provisioning, and eviction policies of commercial serverless platforms. In addition, these works have looked at different aspects of performance, namely cold starts, concurrency, elasticity, and network and I/O bandwidth shares. These research studies help the research and developer community find a suitable serverless platform for their application and also inspire future research. In this paper, we describe and classify various measurement studies in detail and also look at the implications (dependent parameters) of various choices (control parameters) a developer can make (shown in Table 2).
| | AWS Lambda | Google Cloud Function | IBM Cloud Function | Microsoft Azure Function |
|---|---|---|---|---|
| Memory (MB) | {128 ... 10,240} | 128 × i, i ∈ {1, 2, 4, 8, 16, 32} | {256 ... 2048} | up to 1536 |
| Runtime Languages | Node.js, Python, Java, C#, Go, PowerShell, Ruby | Node.js, Python, Go | Node.js, Python, Java, C#, Swift, PHP, Docker | C#, F#, Node.js, PHP, TypeScript, Batch, Bash, PowerShell, Java |
| Billing | Execution time based on memory | Execution time based on memory & CPU-power | Execution time based on memory | Execution time based on memory used |
| Billing Interval | 100 ms | 100 ms | 100 ms | 1 ms |
| Configurable Resources | memory | memory & CPU-power | memory | n/a |

Table 1: Serverless platforms
Even though the serverless computing model has provided much-needed agility to cloud-based application development, there are still challenges that need to be addressed to make serverless adoption easy and efficient. In this paper (Section 6), we identify such challenges and present our ideas to tackle these problems, backed by other measurement studies and our own preliminary results on commercial and open-source serverless platforms.
§ 3 MEASUREMENT STUDIES
Serverless platforms are largely black-boxes for application developers, who submit the code of their application (with a few configurations) and in turn, the code gets executed upon the specified triggers. A user has little to no control over the execution environment, underlying resource provisioning policies, hardware, and isolation, and can influence the performance of their serverless application only through a limited set of configurations. In what follows, we categorize the decisions a developer can make for their serverless applications to get the desired performance or optimize their cost.
One-Time Decisions: These are the decisions that a developer can make before developing and deploying an application, and include selecting the serverless platform, programming language, and location of deployment. These decisions can be dictated by the features that a serverless platform offers such as underlying infrastructure, pricing model, elasticity, or performance metrics - for example, certain languages may have lower cold-start latency or the location of deployment can affect the latency to access the application. We believe changing any of these aspects would incur significant development and deployment cost, hence a developer can make such a decision only once in the life-cycle of the application.
Online Decisions: A developer has more freedom to change other parameters without serious effort, including resources (memory, CPU) and the concurrency limit. As we show later in this section, these parameters can affect the performance and cost of a serverless application. A developer can employ a more proactive technique to configure her serverless function based on the desired performance metric. Configuring these parameters is also important as serverless platforms provide no Service Level Agreement (SLA), i.e. no guarantee on the performance of the serverless function, and a developer's only recourse to get the desired performance is the careful configuration of these parameters. Later in Section 6, we discuss the challenges of designing proactive approaches by employing feedback control systems. These systems would continually monitor the performance of a serverless application and make these online decisions for the application, as shown in Figure 1.
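The control loop in Figure 1 can be sketched in a few lines of Python. This is a minimal illustration only, assuming hypothetical `measure_p95_latency` and `set_function_memory` callbacks that wrap a platform's monitoring and configuration APIs; it is not a real SDK interface.

```python
import time

MEMORY_STEPS = [128, 256, 512, 1024, 2048]  # configurable memory sizes (MB)

def control_loop(fn_name, target_ms, measure_p95_latency, set_function_memory):
    """Naive feedback controller: raise memory while the latency target is
    violated, lower it again when there is ample headroom."""
    idx = 0
    while True:
        latency = measure_p95_latency(fn_name)        # online measurement
        if latency > target_ms and idx < len(MEMORY_STEPS) - 1:
            idx += 1                                  # scale resources up
        elif latency < 0.5 * target_ms and idx > 0:
            idx -= 1                                  # reclaim over-provisioned cost
        set_function_memory(fn_name, MEMORY_STEPS[idx])
        time.sleep(60)                                # re-evaluate every minute
```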
There have been several measurement studies conducted by academic researchers and independent developers that have attempted to demystify different aspects of commercial and open-source serverless platforms. These studies help a developer make one-time decisions by identifying the underlying resources, i.e. operating system, CPUs, virtualization technique, and by benchmarking various performance aspects of serverless platforms. Moreover, these studies also look at the effect of configurable parameters (online decisions) on the performance and cost of serverless functions establishing the need to configure these parameters carefully.
Figure 1: Feedback control systems to configure serverless functions
In Table 2, we present a classification of previous measurement studies. In this classification, we correlate the decisions (both one-time and online) that a developer or a researcher can make in terms of picking the serverless platform, scripting language, and configurations, with different performance aspects, such as cold-start delay, runtime, cost, etc. Every cell in the table indicates the peer-reviewed studies that have looked at the relationship between the controlled variable (decision) and the dependent parameters (performance). In what follows, we describe in greater detail the findings of these measurement studies and explain the effect of choices on different performance aspects.
§ 3.1 COLD STARTS
Cold start is the time from receiving a request until the sandbox environment is ready for the execution of a serverless function to begin. Cold start can comprise the time to start the sandbox environment, load code dependencies, and copy the application code.¹
This is perhaps the most studied aspect of serverless platforms. There have been several peer-reviewed studies attempting to quantify and remedy the effect of cold starts. The cold start comes from the fact that if a function has not been invoked for an amount of time set by the platform (called instance recycling time, and discussed later in this section), the platform destroys the sandbox environment (i.e. container) to free up the resources. On a subsequent new request, the platform will (re-)initialize the sandbox environment and execute the function, hence an extra delay is incurred. Studies have found that cold starts can be affected by various online and one-time decisions.
* Choice of language: Studies show that scripting languages (Python, Ruby, JavaScript) usually have significantly lower (up to 100×) cold-start delays compared to compiled runtimes (Java, .NET, etc.) [77, 100].
* Serverless provider: Studies have shown that different providers can have different cold-start delays depending on their underlying infrastructure or resource provisioning strategy [69, 77, 79, 100].
* Resources: Cold start is also impacted by the resources available to the function, i.e. memory/CPU [77, 100]. This may be because more resources lead to a faster setup of the execution environment [100].
The above insights can help a user develop an application in a particular language, and also configure resources based on the application's needs. If an application is latency-sensitive, a developer may choose a scripting language and configure more resources for the serverless function. One has to be careful with configuring more resources to remedy cold start, as it can increase the cost of running the serverless function. Based on the findings reported in [100] on commercial serverless platforms, AWS Lambda has the lowest cold-start delays. Approaches to circumvent cold start can be divided into two categories:
1) For serverless platforms: Serverless platforms can improve the cold-start latency by having fast sandboxing techniques or by keeping the sandbox instances warm for a longer time. While the latter approach can be significantly expensive for the platform as it can potentially lead to resource under-utilization (discussed in more detail in Section 3.5), there has been a significant body of research focused on improving the cold-start latency through advanced container-management/sandboxing techniques [34, 43, 50, 79, 82, 92]. These approaches employ container reuse, loose isolation between function instances², and memory snapshotting and restoring to achieve a cold-start latency as low as 10 ms or less [92].
2) For the developers: The aforementioned fast sandboxing approaches will only work if a developer has complete control over the serverless platform. In case a developer is using a commercial serverless platform, their approach to mitigate cold start will be different. In addition to carefully selecting the language and serverless platform to develop and deploy their application, they can also control cold start through carefully configuring resources for the application. There are several published articles [5, 15, 24, 31, 68] that suggest certain design changes in the application to avoid unnecessary cold starts, such as sending dummy requests to the serverless function that perform an early exit without any computation. While these approaches may keep the function warm, they can also introduce extra cost, as there is a fixed cost charged for each request and most serverless platforms round up the execution time to the nearest 100 ms, so even if the function performs an early exit, the user is charged some cost. A recent feature from serverless platforms, such as AWS Lambda [28] and Azure Function [8], allows users to specify a minimum number of function instances to be kept warm all the time to avoid unnecessary cold starts, but a user is charged for enabling this feature.
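As a rough sketch of the dummy-request trick (with the cost caveats above): the handler below recognizes a `warmup` flag and exits early, while a separately scheduled `ping` keeps it warm. The flag is a developer convention rather than a platform feature, the function name is hypothetical, and the invocation uses the boto3 Lambda client.

```python
import json
import boto3

def handler(event, context):
    if event.get("warmup"):
        # Early exit: keeps the sandbox warm, but note that this request
        # is still billed (rounded up to the billing interval).
        return {"warmed": True}
    # ... normal application logic ...
    return {"statusCode": 200}

def ping(function_name="my-function"):  # hypothetical function name
    # Invoked periodically (e.g. by a scheduled rule) to prevent cold starts.
    boto3.client("lambda").invoke(
        FunctionName=function_name,
        InvocationType="Event",               # asynchronous, fire-and-forget
        Payload=json.dumps({"warmup": True}),
    )
```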
¹ While this can vary based on the serverless platform's policies, our experiments on AWS Lambda show that the billed duration can include cold-start time if the initialization takes more than 10 seconds (approximately).
² Function instance refers to the sandbox environment executing the code of a serverless function.
| Measured ↓ \ Control → | Serverless Platform | Language | Memory/CPU | Location |
|---|---|---|---|---|
| Cold Start | [34, 69, 74, 77, 79, 100] | [74, 77, 100] | [69, 77, 100] | x |
| Runtime/Cost | [34, 39, 67, 69, 74, 77, 79, 100] | [100] | [33, 70, 100, 102] | [33, 53] |
| Concurrency | [67, 69, 70, 100] | [74] | x | x |
| I/O throughput | [67, 100] | x | [33, 100] | x |
| Network throughput | [67, 100] | x | [100] | x |
| Instance Lifetime | [70, 100] | x | [100] | x |
| Underlying Infrastructure | [69, 100] | x | [70] | x |

Table 2: Measurement Studies - each cell identifies the studies establishing a relation between the respective column (decision) and row (performance/platform characteristic) - 'x' means no documented relation between decision and performance
Summary: Cold start can be impacted by the virtualization techniques and function eviction policies employed by the serverless platform. From a developer's perspective, the impact of cold start can be controlled through the configurable resources and a careful choice of the programming language.
§ 3.2 COST AND PERFORMANCE
The cost of cloud usage for serverless functions on a commercial serverless platform $p$ can be calculated as follows:
$$
\operatorname{cost} = T(m) \times C(p, m) + G(p) \tag{1}
$$
where $T(m)$ is the run time of the serverless function given resources $m$, and $C(p, m)$ is the cost per unit time of resources $m$ on platform $p$. $G(p)$ denotes any fixed cost, such as that of the API gateway for AWS Lambda; if there is no fixed cost, $G(p)$ can be considered zero. Equation (1) shows that the cost of cloud usage directly depends on the run time of the serverless function and the price per unit time for resources $m$ [3, 9, 17, 20]. Hence, all the factors that impact the run time of a function also impact the cost of cloud usage. To observe the effect
Figure 2: Performance and cost on AWS Lambda
of configurable resources (e.g., memory and CPU-power) on the performance of a serverless function, we deployed various (I/O-intensive, memory-intensive, and CPU-intensive) functions on AWS Lambda and invoked them with varying resource configurations. We show the observed trends in the performance and cost with respect to the resources in Figure 2. It can be seen that more resources lead to faster execution of the serverless function, but the performance gain is limited after a certain point. This observation also confirms previous findings made in [33, 51, 74], which report a similar effect of resources on performance.
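Equation (1) translates directly into code. The sketch below rounds the run time up to the billing interval, as most platforms do; the unit price is an illustrative AWS-Lambda-style constant, not a current list price.

```python
import math

PRICE_PER_GB_SECOND = 0.0000166667  # illustrative unit price (USD), an assumption
BILLING_INTERVAL_MS = 100           # most platforms round execution time up to this

def invocation_cost(runtime_ms, memory_mb, fixed_cost=0.0):
    """Cost of one invocation following Equation (1):
    T(m) * C(p, m) + G(p), with T rounded up to the billing interval."""
    billed_ms = math.ceil(runtime_ms / BILLING_INTERVAL_MS) * BILLING_INTERVAL_MS
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND + fixed_cost

# A 30 ms run at 512 MB is billed as a full 100 ms interval:
print(invocation_cost(runtime_ms=30, memory_mb=512))
```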
Other factors that can affect the performance are summarized below:
Cold Starts: A serverless platform may decide to terminate the sandbox environment if it has been inactive for a certain amount of time, as explained in Section 3.5. Hence, serverless functions with less frequent invocations may incur the extra latency of cold start.
Concurrency: Previous studies [67, 69, 70, 100] looked at the effect of concurrency on the performance of serverless functions and found that the performance can be negatively impacted by a higher concurrency level. This is due to the particular resource provisioning policies of the serverless platforms, as reported in [100]. In particular, AWS Lambda and Azure Function try to co-locate the function instances, hence causing more contention for resources. Recent work [88] shows that concurrency configurations can also impact the performance of serverless functions running on the open-source serverless platform Knative [22].
Co-location: Previous studies [33, 100] show that co-location of serverless functions on the same underlying resource can also result in significant performance degradation. Our preliminary experiment on OpenWhisk also confirms these findings.
Underlying Infrastructure and Policies: As discussed in Section 3.6, the underlying infrastructure of commercial serverless platforms consists of diverse resources, and in addition, resource provisioning policies for the execution of a serverless function can vary significantly from one platform to another [100]. Hence, these aspects can also introduce significant uncertainty in performance.
Keeping in mind the tightly coupled nature of the performance and cost of serverless functions, it is important to find the "best" configuration of parameters (online decisions), e.g. memory, CPU, and concurrency, such that they not only meet performance expectations but also optimize the cost of cloud usage. Previous works [33, 51, 53, 88] use various machine learning and statistical learning techniques to configure parameters, e.g. memory, CPU, concurrency, and location, for serverless applications deployed on commercial and open-source serverless platforms. We discuss these approaches in more detail in Section 6.1.
Summary: The performance of a serverless function can be impacted by its configurable resources, choice of programming language, and the choice of serverless platform. The usage cost is calculated based on the configurable resources, the execution time, and the unit-time cost specified by the serverless platform.
§ 3.3 CONCURRENCY OR ELASTICITY
Concurrency is the number of function instances serving requests for the serverless function at a given time. On-demand scaling by the serverless platforms - i.e. in case the demand for the serverless application increases, the serverless platform initializes more function instances to serve these requests concurrently - is one of the distinct features of the serverless computing model. Unlike IaaS, a user does not have to specify scaling policies; rather, the serverless platform provisions more function instances of the serverless function to cater to increasing demand. Most serverless platforms can scale up to a certain limit and enqueue any subsequent requests until one of the previous requests finishes execution and resources are freed. A platform's ability to scale quickly, and the maximum concurrency level that it can achieve, can be very critical to applications with fluctuating demand. To observe the maximum concurrency level that a commercial platform can support, Wang et al. [100] performed a comprehensive measurement study on three major cloud providers: AWS Lambda, GCF, and Azure Function. They found that out of all three, AWS Lambda was the best, achieving a maximum concurrency level of 200³, while GCF and Azure Function were unable to achieve their advertised concurrency levels. FaaSdom [74], a recent benchmarking suite for serverless platforms, also found that AWS Lambda achieves the best latency in the face of an increased request rate for a serverless application, demonstrating its ability to quickly scale out. They also found that one-time decisions, such as the language and underlying operating system, can affect the scalability of a serverless application. Another study [67] found that AWS Lambda and GCF perform better for varying demand when compared to IBM Cloud Function and Azure Function. We believe a platform's inability to scale well can come from the fact that scale-out is decided based on measured CPU load, a queue length, or the age of a queued message, which can take time to be logged. On the other hand, AWS Lambda launches a new function instance for a new request if the current function instances are busy processing requests, as reported in [10, 67]. Using this proactive approach, AWS Lambda can scale out quickly without relying on any other measured indicator. As elasticity is one of the most advertised features of serverless computing, commercial serverless platforms are striving to improve their service by offering higher concurrency limits. AWS Lambda's recent documentation indicates that concurrency limits have increased significantly (> 3,000) and a user can request a further increase [25].
Serverless platforms, such as Apache OpenWhisk and Knative from Kubernetes, allow a user to configure a container-level concurrency limit, i.e. the number of requests that a function instance can serve in parallel (where each request runs as a separate thread) [23, 27]. On the other hand, Azure Function allows a user to configure a maximum number of function instances that can be launched on a single VM to avoid the possibility of running out of underlying VM resources [11]. Schuler et al. [78] show that the container-level concurrency limit can affect the application's performance. They also suggest an AI-based (reinforcement learning) technique to configure the concurrency limit for Knative. The fact that a user can configure this particular concurrency limit on the fly also makes it an online decision. A user should be careful with configuring the container-level concurrency limit, as function instances running prior to the configuration change will keep running with the old configuration (until terminated by the platform based on its settings), and only the new instances will assume the new concurrency limit. A user should wait for the system to be stable with the new configuration (i.e., all function instances with the old configuration are terminated) before making any further changes.
Summary: Serverless applications can elastically scale without any additional configurations. The maximum number of function instances that can run in parallel is determined by the serverless platform and can vary based on the cloud provider. Studies have found that among commercial serverless platforms, AWS Lambda scales best in terms of throughput.
§ 3.4 CPU, NETWORK AND I/O
Most of the commercial and open-source serverless platforms allow limited to no control over the execution environment of serverless functions. While a user can only configure certain parameters, e.g. memory, CPU-power, location, and concurrency, other resources such as the CPU-, network-, and I/O-share are decided by the serverless platform. Empirical results in [100] show that, when there is no contention, AWS Lambda puts an upper bound of $2m/3328$ on the CPU share of a function configured with memory $m$ (in MB), while in the case of co-location, function instances share the CPU fairly and each instance's share becomes slightly less than, but still close to, this upper bound. Similarly, Google also allocates the CPU share according to the memory allocated to the function. CPU allocation in proportion to the memory assigned to a function is also specified in AWS Lambda and GCF's documentation [6]. Contrary to GCF and AWS Lambda, IBM Cloud Function does not allocate the CPU share in proportion to the memory allocated to the function, as reported in [74]; rather, it keeps the share constant, as an increase in memory does not affect the performance of the function.
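As a worked example of this empirical bound (our reading of [100], not an official formula):

```python
def aws_cpu_share_upper_bound(memory_mb):
    # Empirical upper bound reported in [100]: a function configured with
    # m MB of memory receives at most 2m/3328 of the underlying two vCPUs.
    return 2 * memory_mb / 3328

print(aws_cpu_share_upper_bound(1664))   # 1.0 -> about one full vCPU
print(aws_cpu_share_upper_bound(3328))   # 2.0 -> both vCPUs of the VM
```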
³ This study was conducted in 2018. We believe higher concurrency levels can be achieved now given system upgrades.
On the other hand, with Azure Function, the CPU share allocated to a function was found to be variable, with the serverless function getting the highest CPU share when placed on 4-vCPU VMs⁴. In the case of co-location, the CPU share of co-located instances can drop. Similar to the CPU share, I/O and network performance can also be affected by the resources configured for the serverless function and by co-location, as reported in [33, 67, 100]. Our preliminary experiments also confirm this for the I/O performance, where the performance of I/O-intensive serverless functions improves when allocated more memory, as illustrated in Figure 2.
Summary: The CPU, network, and I/O bandwidth of a serverless function can be impacted by the co-location of multiple functions on the same underlying resource (VM) and the instance placement policies of the serverless platform. An application developer can run various benchmarks (or consult measurement studies) to find the most suitable provider for her application.
§ 3.5 INSTANCE RECYCLING TIME AND LIFETIME
When a serverless function is first executed, the serverless platform creates the sandbox environment, loads the function's code in it, and executes the code to return the results. After the results are returned, the sandbox environment is kept in a warm state for a certain amount of time (called instance-recycling-time) to serve any subsequent request for the same function. If during that time, no subsequent request arrives, the sandbox environment is terminated so as to reuse the resources. A serverless platform may decide to terminate the sandbox environment after it has been in use for a certain period regardless of the usage. This time is called instance-lifetime.
Both instance-recycling-time and instance-lifetime are critical values to configure, not only for the serverless platform but also for its users. A low value for these variables means that a serverless platform can free resources quickly and re-purpose them for other applications, increasing the utilization of the underlying resources; but for users, it can be detrimental, as their serverless functions would experience unnecessary cold starts, degrading the performance of their serverless application. For a commercial serverless platform, this can lead to potential revenue loss by losing customers. From the user's perspective, longer values would be ideal, as their application would always find its serverless functions warm, reducing latencies, but this may end up reducing the utilization of the underlying resources for the serverless platform⁵.
For open-source serverless platforms [21, 90], a user can configure these values on their own, and there have been studies suggesting the use of popularity analysis to configure these values on a per-application basis [90]. But in commercial serverless platforms, these values are decided by the platform, and a user has no control over the instance-recycling-time and instance-lifetime. Several peer-reviewed studies have looked at this aspect of commercial serverless platforms, mostly following a similar technique to infer the values of instance-recycling-time and instance-lifetime. Commercial serverless platforms allow a serverless function to use a limited amount of persistent storage for the time a sandbox environment is in use. Previous studies [69, 100] use this storage to store an identifier for the serverless function when the function is invoked for the first time. Later, they invoke the same function again and check if the identifier is still present; if it is not, then the sandbox environment was destroyed and the latter execution was done in a new environment. They show that different serverless platforms have different instance-recycling-times, with Google Cloud Function having the longest of all (more than 120 minutes). AWS Lambda's recycling time is reported to be around 26 minutes. The authors could not find a consistent value for Azure Function. Another recent study [14] claims this value to be 20-30 minutes for Azure Function, 5-7 minutes for AWS Lambda, and 15 minutes for Google Cloud Function. Hence, if a serverless function stays inactive for this instance-recycling-time, the subsequent request incurs an extra delay equal to a cold start.
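A minimal sketch of this probing technique, assuming AWS-Lambda-style per-sandbox scratch storage under /tmp (the marker path is arbitrary):

```python
import os
import time

MARKER = "/tmp/instance_marker"  # survives only as long as the sandbox does

def handler(event, context):
    if os.path.exists(MARKER):
        # The marker is still there: same sandbox as an earlier invocation.
        with open(MARKER) as f:
            born = float(f.read())
        return {"start": "warm", "sandbox_age_s": time.time() - born}
    # Fresh sandbox: record its creation time for later probes.
    with open(MARKER, "w") as f:
        f.write(str(time.time()))
    return {"start": "cold", "sandbox_age_s": 0.0}
```

Invoking this function at increasing intervals and watching for the first "cold" response gives an estimate of the instance-recycling-time.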
In an independent study [29], the authors established a relation between the instance-recycling-time and the resources (i.e. memory) configured for the serverless function on AWS Lambda. They found that a larger value of memory configured for the serverless function tends to result in a smaller instance-recycling-time⁶.
Regarding instance-lifetime, using a similar technique, the authors in [100] found that Azure Function has the longest instance-lifetime compared to AWS Lambda and Google Cloud Function. They also found that in the case of Google Cloud Function, the lifetime of an instance can be affected by the resources configured for the function. The instance-lifetime of an instance with 128 MB and 2,048 MB of memory is reported to be 3-31 minutes and 19-580 minutes, respectively.
⁴ Placement of function instances on VMs can be random from a user's perspective. Also, notice that Azure Function does not allow users to configure any resources for the serverless function.
⁵ Recall that a user does not pay for idle times in serverless computing; hence, keeping instances warm for longer is a losing proposition for the serverless platform or cloud provider.
⁶ We could not find any peer-reviewed study to validate this claim.
Summary: For a serverless function, the instance-recycling-time is decided by the serverless platform. A serverless platform can employ more proactive approaches to configure the instance-recycling-time based on the application's popularity, as suggested in [90]. For an application developer, a low value of instance-recycling-time affects performance by incurring extra cold-start delays. A developer can reduce the effect of cold starts by carefully choosing the language of the application and the configurable resources.
§ 3.6 UNDERLYING INFRASTRUCTURE
In a serverless computing model, a user only focuses on writing the code, and it is the serverless platform's responsibility to execute this code on any infrastructure/hardware. A user has no control over the underlying resources (types of VM where the application code would be executed). A developer may be interested in knowing the underlying infrastructure where their serverless application would be running to optimize the performance of their applications or to make other assumptions about the running environment of their application.
There have been several studies that tried to demystify the underlying virtual infrastructure of commercial serverless platforms. Lloyd et al. [69] discovered that serverless functions have access to the "/proc" file system of the underlying VMs running the Linux operating system. By inspecting "/proc/cpuinfo", the authors discovered that the underlying VMs run Amazon Linux [4] and use CPUs that are similar to those of EC2 instances. Wang et al. [100] went one step further: using a similar approach, they conducted a wide study on all the big commercial serverless platforms, i.e. AWS Lambda, Google Cloud Function, and Azure Function. They found that Google Cloud Function successfully hides the underlying resources, and the only information they could obtain was that there are four unique types of underlying resources. By inspecting "/proc/cpuinfo" and "/proc/meminfo", they found that AWS Lambda uses five different types of VMs with different vCPU and memory configurations, mostly 2 vCPUs and 3.75 GB of physical RAM, which is the same as c4.large instances from EC2. The authors also noticed that Azure Function has the most diverse underlying infrastructure. While inspecting the contents of "/proc/*", they came across VMs with 1, 2, or 4 vCPUs, where the vCPUs are either Intel or AMD models.
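A sketch of this kind of introspection: a function that simply returns whatever the sandbox exposes, which, as noted above, varies by provider and may be hidden entirely.

```python
def handler(event, context):
    info = {}
    for path in ("/proc/cpuinfo", "/proc/meminfo", "/proc/uptime"):
        try:
            with open(path) as f:
                info[path] = f.read()
        except OSError:
            info[path] = "not accessible"  # e.g. hidden by the platform
    return info
```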
Knowing the underlying infrastructure can help developers identify various performance-related issues. For example, a serverless function running on Azure Function can get a larger CPU share when placed on a VM with 4 vCPUs than when placed on other types of VMs. Also, knowing the diversity of the underlying infrastructure can help researchers explain the variability in performance for a given serverless platform.
Summary: Serverless platforms have diverse underlying infrastructure, and this can introduce significant variability in the performance of a serverless function even when executed with the same configurable resources. Careful selection of the serverless platform by the application developer, and the usage of more proactive approaches such as COSE [33] to dynamically configure resources for serverless functions, can mitigate this variability in performance.
§ 4 SERVERLESS ECONOMIC MODEL
Commercial serverless platforms follow "pay as you go" pricing models. This means a user only pays for the time the code is executing and not for the idle time. On the other hand, other cloud services, like Amazon's EC2 and Google's VMs, have pricing models that not only charge based on minutes and seconds of usage but also have a different price per unit time compared to their serverless counterparts. In addition to the price factor, these VMs take extra labor to configure and maintain. On the contrary, a serverless function takes minimal effort to configure and maintain. Another key benefit of using the serverless programming model is that serverless platforms assume the responsibility of scaling the application, unlike VM-based infrastructures where users have to specify scaling policies.
Given the execution model of serverless platforms for a certain application, the pricing model, and the demand (request arrival rate), one can estimate the cost of deploying a serverless application on a commercial serverless platform. Similarly, a user can calculate the cost of deploying a cloud application by renting VMs from a commercial cloud provider. In [13], the authors present an economic model of deploying an application on commercial serverless platforms (FaaS), such as Amazon Lambda, and compare it with the economic model when only IaaS resources (VMs) are used to deploy the application. Specifically, the FaaS economic model can be described by:
$$
{COST}_{FaaS} = {EconomicModel}_{FaaS}(\text{rate}, \text{exec\_time}, \text{req\_res}, \text{fixed\_cost}, \text{unit\_price}) \tag{2}
$$
where ${COST}_{FaaS}$ is the total cost of running an application on a serverless platform. This cost depends on the rate of function invocations (rate), the execution time (exec_time) with the resources (req_res) configured for each request (e.g. memory, CPU), and unit_price, which is the price per unit time of execution for the specified resources. fixed_cost indicates any additional fixed cost, such as that of an API Gateway.
Similarly, the cost for an IaaS-based deployment (${COST}_{IaaS}$) can be calculated as follows:
$$
{COST}_{IaaS} = {EconomicModel}_{IaaS}(\text{rate}, \text{exec\_time}, \text{req\_res}, \text{vm\_cost}, \text{vm\_config}, \text{max\_vm\_reqs}) \tag{3}
$$
where rate is the request arrival rate for the IaaS-based deployment, vm_cost is the cost of renting a particular VM with configuration vm_config, and max_vm_reqs is the maximum number of requests that one VM can handle at a given time without violating the SLA.
The key takeaways from the study in [13], following the economic models given by (2) and (3), are:
* Serverless platforms are cost-effective for deploying an application when the demand (request arrival rate) is below a certain threshold, referred to as the Break-Even Point (BEP). Beyond the BEP, IaaS resources are cheaper to use due to their relatively lower cost per unit time.
* The authors also consider different execution times and resources allocated to each request of the application on both IaaS and FaaS, and show that the resources allocated for the execution of each request can also affect the value of the BEP. Previous studies such as [33, 35] address the issue of finding the optimal resources for an application in the FaaS and the IaaS model.
The economic model study presented in [13] confirms that serverless platforms are better suited for applications with low demand and short-lived computations.
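As an illustration of how a developer might locate the BEP numerically, here is a toy sketch of the models in (2) and (3); all prices and the per-VM capacity are made-up assumptions, not measured values.

```python
import math

def cost_faas(rate_rps, exec_s, unit_price_s=0.00001):
    # Equation (2), simplified: per-second cost of serving `rate_rps` requests.
    return rate_rps * exec_s * unit_price_s

def cost_iaas(rate_rps, vm_cost_s=0.00003, max_vm_reqs=100):
    # Equation (3), simplified: rent enough VMs to absorb the demand.
    return math.ceil(rate_rps / max_vm_reqs) * vm_cost_s

def break_even_rate(exec_s, max_rate=10_000):
    for rate in range(1, max_rate):
        if cost_faas(rate, exec_s) > cost_iaas(rate):
            return rate  # beyond this demand, IaaS becomes cheaper
    return None          # FaaS stays cheaper over the whole range

print(break_even_rate(exec_s=0.2))
```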
Summary: Serverless is more economical for applications with low rate and bursty demand.
§ 5 SERVERLESS USAGE
Even though serverless computing is a relatively new paradigm and still evolving, there have been several attempts from independent developers and researchers to deploy various applications using this computing model. We believe that the following distinct features of serverless computing are the main reasons for its adoption and increasing popularity.
* Pricing Model: As mentioned earlier, serverless platforms offer a unique "pay as you go" pricing model. A user does not pay for deploying their application or for idle times. Whereas in an IaaS model, if a user has rented a VM, she pays regardless of the usage.
* No Back-end Maintenance: The serverless computing model offloads a lot of back-end management from the application developer to the serverless platform, which is responsible for the set-up and maintenance of underlying resources as well as scalability.
* Quick Provisioning: Serverless platforms use advanced virtualization techniques, such as containers, to provision new instances of the application, which can be provisioned on the order of tens of milliseconds [34, 43, 50, 82, 92, 100]. This feature allows a serverless application to scale out, in case of increasing demand, without suffering from performance degradation.
* On-Demand Scalability: Unlike IaaS, where a developer has to configure scaling policies, serverless platforms assume the responsibility of scaling an application in case there is an increase in demand.
Considering the above cost, performance, and management advantages, serverless computing is becoming a popular choice to build cloud applications. We next look at various classes of applications that are best suited for the serverless computing model. We also discuss the challenges and open issues that must be addressed to leverage the full potential of serverless computing for these classes of applications.
§ 5.1 SCIENTIFIC WORKFLOWS
Scientific workflows are a popular way to compose and execute computations for a variety of scientific problem-solving purposes. Recently, there have been several proposals suggesting the use of the serverless computing model to implement scientific workflows.
Most serverless platforms offer an interface to build applications in a high-level scripting language such as Python and JavaScript. This feature can be particularly helpful for researchers with no technical background as serverless programming has less of a learning curve than an IaaS model where, in addition to learning the development model, they have to manage the infrastructure as well. In addition to the ease of development, the particular pricing model and on-demand elasticity of serverless computing can benefit such applications both in terms of cost and performance. For example, consider the workflow shown in Figure 3. In an IaaS based deployment, a static allocation of resources would lead to either resource under-utilization (higher cost) or performance degradation (lower cost) as various stages need varying resources. In the FaaS model, benefiting from on-demand scalability, the workflow can spawn any number of processes at any stage while only paying for the actual execution time.
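For instance, a stage that processes many independent inputs can fan out by asynchronously invoking one function per input, as in the sketch below; the boto3 Lambda client is real, while the stage function name and payload format are hypothetical.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

def fan_out(stage_fn, inputs):
    """Spawn one (asynchronous) function instance per input of a workflow
    stage, relying on the platform for on-demand scaling."""
    for item in inputs:
        lambda_client.invoke(
            FunctionName=stage_fn,               # hypothetical stage function
            InvocationType="Event",              # do not block on the result
            Payload=json.dumps({"input": item}),
        )

fan_out("workflow-stage-1", [f"tile-{i}" for i in range(100)])
```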
Malawski et al. [76] discuss the potential of using serverless computing for scientific workflows. They also implement an astronomical workflow called Montage [26] using Google Cloud Function in conjunction with HyperFlow [41]. Their programming model can easily be extended to other workflows and serverless platforms. The authors in [38, 91, 101] show that serverless computing can be employed to solve various mathematical and optimization problems. Moreover, [62, 65, 66] show that the on-demand computation and scalability provided by serverless computing can be leveraged by biomedical applications.
Figure 3: Example Workflow
However, the stateless nature of serverless functions can adversely affect the cost and performance of such applications. In scientific workflows, an intermediate computation stage may need access to the results of previous stages, hence each stage may have to persist its computation results in an external database. This can introduce a significant overhead for storing and retrieving data, while also adding the cost of using the database service. Recent approaches, such as SAND [34], suggest the reuse/sharing of containers for the execution of functions that belong to the same application. The reuse/sharing of containers can help reduce cold starts, as creating a new thread (function instance) takes significantly less time than starting a new container, and shared libraries need to be loaded into memory only once. On the other hand, local caches (on the VMs) are helpful when serverless functions share data with other functions or access data from an external database [94, 95]. We believe both container reuse/sharing and local caching can benefit the serverless implementation of scientific workflows.
§ 5.2 MACHINE LEARNING AND DATA PROCESSING
Since the advent of serverless computing, there have been several efforts exploring the possibility of using this computing model to deploy machine learning applications for its performance and elasticity. Frameworks such as MArk [103], Spock [59], Cirrus [44], and others [63, 96] explore deploying various machine learning applications using serverless platforms. The authors in [55, 99] leverage the higher level of parallelism offered by serverless platforms to train machine learning models. While 'pay as you go' pricing, on-demand scaling, and minimal cold starts make serverless computing a good fit for deploying machine learning models, a developer should be careful opting for serverless computing, as it provides no SLA on the performance, and these models (particularly inference models [59, 103]) may have strict performance requirements. We address this issue in more detail in Section 6 and show how proactive approaches to configure serverless functions can achieve the desired performance. We also believe that the introduction of more features, such as GPU-enabled hardware, in serverless platforms would make serverless computing more lucrative for deploying machine learning models.
Serverless computing, with its on-demand, cost-effective computation power and elasticity, has also been explored for deploying stream processing applications [30, 71]. Video processing is one such example, where a user may want to extract useful information from an incoming video stream and a serverless function can be spawned for each new incoming frame. Recent works [37, 56, 104] describe the implementation of video processing frameworks using serverless functions. Moreover, several articles show that various data processing approaches, such as MapReduce, can also leverage serverless computing [58, 86].
We believe that the stateless nature and arbitrary placement of serverless functions without considering data locality can pose a significant performance challenge. Training of a machine learning model may need access to data from an external database or may need to repeatedly access the same data. For example, for a regression model or neural network, every time the model weights are updated, the test (validation) dataset needs to be retrieved to evaluate the accuracy of the model. In this case, placing the serverless function closer to the data source would benefit the application (i.e., shipping the computation to the data, as opposed to shipping data to the computation). While previous approaches, such as SAND [34] and Lambdata [95], address the data locality issue by introducing local data caches for subsequent use, we did not come across any approach that considered data locality for serverless function placement in the case of external data sources.
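One common mitigation, enabled by container reuse, is to cache data at module scope so that warm invocations skip the download; a minimal sketch, with a hypothetical S3 bucket and key:

```python
import boto3

s3 = boto3.client("s3")
_cached_dataset = None  # lives as long as the sandbox; warm starts reuse it

def load_validation_set(bucket="my-ml-bucket", key="validation.npz"):
    """Fetch the validation dataset once per sandbox instead of once per
    invocation (bucket and key names are hypothetical)."""
    global _cached_dataset
    if _cached_dataset is None:
        obj = s3.get_object(Bucket=bucket, Key=key)
        _cached_dataset = obj["Body"].read()
    return _cached_dataset

def handler(event, context):
    data = load_validation_set()
    return {"validation_bytes": len(data)}
```

Note that this only helps within one function instance; it does not address placing the function near the data source, which, as noted above, remains an open problem.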
§ 5.3 INTERNET OF THINGS (IOT)
IoT connects the devices around us in our daily life, such as medical devices, sensors around the home and city, monitoring systems, and personal devices like Amazon Alexa, and improves our quality of life. These devices are usually low-powered, with minimal computation power, and may need access to computing power to make important decisions. Serverless is a natural fit for IoT devices/applications, as it provides on-demand, cost-effective computation power. Serverless platforms already allow a user to deploy serverless functions on the edge [2], making access to these functions much faster. Both on-demand computation and access at much lower latency make serverless computing an ideal candidate to run IoT applications. Recent approaches [47, 83, 84] explore the possibility of using serverless computing for IoT applications and services. Pinto et al. [85] look at the feasibility of using serverless functions for IoT devices and provide a framework to optimize performance. Amazon's Alexa offers a unique and interesting use case [7], where a user can build the desired functionality for Alexa devices using Amazon Lambda's computation power.
Serverless applications for IoT devices may require a performance guarantee (SLA) to meet certain QoS standards. For example, a voice command needs to be analyzed within a certain amount of time for a better user experience. We quantify the performance of such an application with two parameters: the access latency (propagation delays) and the execution time of the application on the serverless platform. As mentioned earlier, serverless platforms are already making an active effort to reduce the access latency by allowing a user to deploy their serverless applications on the edge infrastructure, but this deployment may follow a different pricing model and have different resource limits compared to the standard (core) infrastructure [3, 53]. To deal with the limited resources and different pricing models, a developer may decide to distribute her application across the edge and core infrastructure. This is a challenging problem to solve, considering the trade-off between access latency and execution time: a serverless function can be accessed faster on the edge infrastructure but may have a longer execution time because of limited resources, while on the core infrastructure it may execute faster but suffer from a larger access latency. We believe that approaches similar to Costless [53] and COSE [33] can help a developer with an efficient and cost-effective division of computation across the spectrum of edge and core infrastructure. These approaches not only consider the performance of serverless functions on the platform but also incorporate the total response time in their models.
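The trade-off can be framed as picking, per function, the placement with the lowest total response time (access latency plus execution time) that fits a cost budget. The greedy sketch below uses made-up numbers and is only a toy model of what tools like Costless [53] solve more rigorously.

```python
def choose_placement(functions, budget):
    """Greedy toy model: for each function, pick the placement with the
    lowest total response time (access + execution) that fits the budget."""
    plan, spent = {}, 0.0
    # Each option: (access_latency_ms, exec_time_ms, cost_per_invocation)
    for fn, options in functions.items():
        feasible = [(a + e, c, where) for where, (a, e, c) in options.items()
                    if spent + c <= budget]
        total, cost, where = min(feasible)   # fastest affordable placement
        plan[fn], spent = where, spent + cost
    return plan

# Illustrative numbers: the edge is closer but executes more slowly.
fns = {"voice_decode": {"edge": (5, 120, 0.004), "core": (60, 40, 0.002)}}
print(choose_placement(fns, budget=0.01))   # -> {'voice_decode': 'core'}
```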
§ 5.4 VIRTUAL COMMUNICATION NETWORKS
To meet the increasing demand for communication networks, researchers have developed software-defined networking (SDN) and network function virtualization (NFV), which decouple the various networking functionalities from the hardware management and allow a greater degree of freedom for the service to evolve and be robust. Both NFV and SDN can run over any cloud computing service. Aditya et al. [32] present a set of general requirements that a cloud computing service must satisfy to effectively host SDN- and NFV-based services. The authors believe that along with other features, serverless computing, for its elasticity, performance, event-driven nature, and ease of management is a good fit to host some SDN- and NFV-based services, e.g. SDN controllers and network anomaly detection and media processing functions. Moreover, Chaudhry et al. [46] present an approach to improve QoS on the edge by employing virtual network functions using serverless computing.
However, porting SDN- and NFV-based services to serverless computing poses a new set of challenges for the research community. A user has to be careful about the pricing model, as most of the serverless functions (implementing an NFV service) can be short-lived (on the order of a few milliseconds). As most commercial serverless platforms round up the execution time to the nearest 100 ms to charge a user, this extra cost from rounding up can quickly grow when the application performs many executions. One such example is an anomaly detection system where thousands of network packets need to be analyzed. In addition to cost, function cold starts, statelessness, and arbitrary function placement can also reduce the QoS of delay-sensitive SDN- and NFV-based services. As described in Section 3, a user may have to rely on the serverless platform to implement various optimizations, e.g. advanced virtualization techniques, local caches, and container reuse, to circumvent these performance limitations.
§ 5.5 IMPROVING QOS OF CLOUD APPLICATIONS

Serverless functions can be implemented and deployed quickly. Moreover, a user does not have to worry about scalability: a serverless platform, based on the number of invocations and the configuration, provisions more instances of the same serverless function to cater to the dynamic demand. In addition to automatic scaling, the provisioning of these serverless function instances is much faster than for traditional cloud resources, e.g. VMs, because serverless functions execute in lightweight sandbox environments. These features of serverless computing have intrigued researchers to study the feasibility of serverless functions as backup resources that cater to the transient demand for an application while VM resources are being provisioned. Recently introduced frameworks, such as MArk [103], Spock [59] and FEAT [81], leverage serverless functions in conjunction with traditional cloud services to deploy delay-sensitive applications with strict SLAs. They show that using both IaaS- and FaaS-based resources can decrease SLA violations significantly. Moreover, there have been suggestions to deploy lightweight components of an application requiring high elasticity and computation throughput as serverless functions, while keeping the rest of the application on traditional resources [37, 75]. We discuss the advantages and challenges of building such frameworks and approaches in Section 6.

Summary: The main driving factors for serverless adoption are quick provisioning time, on-demand scaling, and a true "pay as you go" pricing model. While serverless adoption is increasing, there are certain challenges that need to be addressed. An application developer would benefit from tools that can help her translate an application into the serverless programming model, find a suitable serverless platform for a given application, and configure resources for serverless functions. On the other hand, a cloud provider can improve their serverless offering by providing predictable performance, lower cold-start latencies, efficient function placement, and state management/data caching across multiple instances of a serverless function.

§ 6 FUTURE RESEARCH

In the previous section, we discussed the suitability of the serverless computing model for various classes of cloud applications and the potential challenges a user may face to port a particular application into this computing model. In this section, we take a closer look at some of those challenges and present our ideas to address them. We particularly focus our discussion on the issues that a developer can address with the limited control (one-time and online decisions) they have over serverless platforms and application re-design. We believe that application decomposition can help a developer design their serverless application better, while parameter tuning can help with fine-tuning resources and making online decisions for the individual serverless functions to get the desired performance. Lastly, a multi-cloud scenario can help applications with fluctuating demand, without compromising on cost and performance. Next, we discuss these challenges and possible solutions in more detail.

§ 6.1 PARAMETER TUNING

In a serverless computing model, a user has limited control over the function's run-time environment, i.e. hardware, operating system, CPU type, etc. On commercial serverless platforms, a user can only specify limited configurable parameters, such as memory, CPU, and location, for a serverless function. In Section 3, various measurement studies show that these configurable parameters can affect the cost of cloud usage and the performance of serverless functions. As serverless platforms do not provide any guarantee (SLA) on the performance of serverless functions, configuring the parameters becomes even more crucial to get the desired performance of an application.

We propose research on designing feedback control systems, as illustrated in Figure 1, which continually monitor the performance of serverless applications and reconfigure these parameters on the fly when needed. There are a number of challenges in designing such systems: 1) serverless platforms have varying underlying infrastructure, resource provisioning policies, and sandboxing techniques, and every time a serverless function is invoked, even with the same configurable parameters, performance can vary based on the co-location of functions and the underlying resources; this makes it hard to predict the performance of the serverless function. 2) Our experiences with GCF and Kubernetes Knative show that there can be a significant delay in the feedback loop, i.e. between when the configuration is changed and when the new configuration takes effect (up to minutes, as mentioned in Section 3.3); this excessive feedback delay can lead to performance instability, as the state of the system might change during that time. 3) The impact of changes in allocated resources on the performance of a serverless function can vary depending on the underlying serverless platform. In our experiments, we noticed that while an increase in allocated memory/CPU improves the performance of a serverless function on AWS Lambda and GCF, it did not significantly affect the performance on Apache OpenWhisk and IBM Cloud Functions. Maissen et al. [74] make a similar observation about IBM Cloud Functions.
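The following is a minimal sketch of such a feedback loop. It assumes the platform exposes some way to fetch a recent latency percentile and to update a function's memory size; `fetch_p95_latency_ms` and `set_memory_mb` are hypothetical placeholders for whatever monitoring and configuration API the target platform provides, not a real API.

```python
import time

# Minimal feedback-control sketch for tuning a function's memory size.
# fetch_p95_latency_ms() and set_memory_mb() are hypothetical placeholders
# for the target platform's monitoring/configuration calls.

MEMORY_STEPS_MB = [128, 256, 512, 1024, 2048]

def control_loop(fetch_p95_latency_ms, set_memory_mb,
                 sla_ms=200.0, settle_s=300.0):
    idx = 0
    set_memory_mb(MEMORY_STEPS_MB[idx])
    while True:
        time.sleep(settle_s)  # wait out the actuation/feedback delay
        p95 = fetch_p95_latency_ms()
        if p95 > sla_ms and idx + 1 < len(MEMORY_STEPS_MB):
            idx += 1          # SLA violated: scale memory up one step
            set_memory_mb(MEMORY_STEPS_MB[idx])
        elif p95 < 0.5 * sla_ms and idx > 0:
            idx -= 1          # ample headroom: scale down to save cost
            set_memory_mb(MEMORY_STEPS_MB[idx])
```

The long settle period guards against the instability caused by feedback delay (challenge 2), and the hysteresis (scaling down only well below the SLA) avoids oscillating between configurations.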
Previously, there have been a number of proposals suggesting various offline and online techniques to configure these parameters. Costless [53], given a workflow consisting of multiple functions, proposes a technique to efficiently distribute these functions across the edge and core cloud while reducing the cost of cloud usage and meeting the performance requirement. This approach relies on (one-time) profiling of the performance of each serverless function in the workflow under the possible memory configurations. It suggests suitable configurable parameters (memory) based on the profiling results; however, it fails to capture the dynamicity of the execution model. In [88], the authors show that the per-container concurrency limit in Knative can affect the throughput and latency of serverless functions. They suggest a reinforcement learning-based approach to find the optimal concurrency limit for a given deployment of the application. Even though this approach is adaptive, it only targets the concurrency limit, but as discussed earlier, other parameters such as memory, CPU, and location can also impact performance. Moreover, the authors do not address the feedback delay issue, which for Knative, in our experience, can be up to several minutes depending on the configuration. Sizeless [51] uses resource-consumption data from thousands of synthetic serverless functions to build a representative performance model. Then, using the performance model and the performance logs of the target function, it suggests the best memory configuration. This approach may incur significant cost overhead for running thousands of synthetic functions to gather the data needed to build the performance model. It also requires changes to the serverless application to collect the performance logs, and it only targets configuring memory for functions written in Node.js and deployed over AWS Lambda.

COSE [33] is an online statistical learning technique that configures various parameters for delay-bounded chains of serverless functions or single functions. COSE not only achieves the desired performance for a serverless application but also reduces the cost of cloud usage. It can capture the dynamic changes in the execution model stemming from co-location and variable underlying infrastructure. Currently, COSE only configures memory and location (edge or core) for serverless functions on AWS Lambda, but it can be easily extended to configure other parameters such as concurrency and CPU power. COSE can be easily adapted to other parameters and platforms because it works as a stand-alone system that requires no changes to the serverless application. It retrieves the execution logs of a serverless function from the serverless platform and configures it with optimal or near-optimal parameter configurations. Hence, COSE can be extended to any platform that provides an API to retrieve the execution logs and configure the parameters of a serverless function.

§ 6.2 DECOMPOSING SERVERLESS APPLICATIONS

Over the past decade, major commercial cloud providers have introduced their own serverless platforms. These platforms offer diverse features, e.g. elasticity limits, supported languages, configurable parameters, and pricing models. Moreover, as we have seen in Section 3, these platforms have varying underlying infrastructure and resource provisioning policies [100]. As a result, the performance and cost of the same application can vary significantly across serverless platforms. In [39], the authors show that serverless functions with different bottlenecks, such as memory and computation, may each have an ideal serverless platform on which they perform best. This shows that serverless platforms are not one-size-fits-all. For an application comprising multiple serverless functions with varying compute, memory, and I/O bottlenecks, one platform may not suit all of the individual functions. We suggest investigating this idea further, where automated tools may help developers decompose their application into multiple serverless functions and then find the ideal serverless platform for each serverless function. This may require a sophisticated tool to perform code analysis [42] and measurement tools [74, 102] which can benchmark serverless platforms for different kinds of workloads/computations.

Moreover, serverless platforms allow users to configure resources for each component of an application (if deployed as separate serverless functions), which may not be possible for a monolithic application deployed over a VM. In [102], the authors show that decomposing a monolithic application into multiple micro-services, instead of deploying the whole application as one unit, can lead to significant performance and cost gains. The authors also show an example application where decomposition leads to better performance at lower cost. We also believe that decomposing an application would allow developers to cost-effectively fine-tune resources for the various parts of the application.

To the best of our knowledge, no previous work suggests decomposing monolithic serverless applications to optimize cost or performance. Costless [53] is the closest approach: it suggests deploying a serverless application split across two platforms (edge and core), but it assumes that the application is already decomposed into multiple serverless functions.

§ 6.3 MULTI-CLOUD USAGE

Serverless functions are executed in lightweight sandbox environments, which can be launched in as little as tens of milliseconds. So, in case an application experiences a sudden increase in demand, it can seamlessly scale out to cater to the increased demand. This feature of serverless computing has been leveraged by previous approaches, such as MArk [103], Spock [59], and FEAT [81], to mask SLA violations for cloud applications deployed using traditional cloud services such as VMs. These approaches redirect a portion of the demand to the serverless counterpart of the application while scaling up traditional cloud resources, which can take up to minutes to start. These approaches may improve the performance of an application by reducing the number of SLA violations during scaling, at the expense of introducing a substantial development cost for a developer to build the serverless counterpart of the application. To reduce the development cost, a developer can employ an automated approach to build the serverless version of the application, similar to the approach suggested in [93]. Another limitation of these approaches is that they suggest a one-time configuration of resources for the serverless version of the application, which can lead to variations in performance, as explained in Section 6.1. As the goal of such approaches is to reduce SLA violations, this variation in performance can adversely affect the application.

Figure 4: A balanced approach

We believe that in addition to performance, serverless computing also offers a unique pricing model, and as discussed in Section 4, serverless computing can be cost-effective for certain demand levels. For applications with large variations in demand, deploying them on VMs for the periods when demand is low can lead to sub-optimal cost. We propose to build a hybrid framework (Load Balancer, as illustrated in Figure 4) that leverages both aforementioned features of serverless computing, i.e. performance and the economic model, by using serverless computing as: (1) an alternative to traditional cloud resources for a certain portion of the demand (consistently, instead of only while scaling up VM resources), and (2) a fallback to the serverless counterpart of the application when demand is below the BEP. To address the performance uncertainty of the serverless platform, we suggest that in addition to the multi-cloud framework, the developer should also employ more pro-active approaches, similar to COSE [33], to configure resources for the serverless counterpart of the application. COSE suggests a configuration for a serverless application that not only reduces the cost of cloud usage but also meets the specified SLA.
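A minimal sketch of the routing policy such a load balancer might implement is shown below. It assumes a known break-even point (BEP) in requests per second and a fixed steady-state share of demand sent to serverless; both knobs and all numbers are illustrative, not taken from any evaluated system.

```python
# Illustrative routing policy for the hybrid Load Balancer of Figure 4.
# BEP_RPS and SERVERLESS_SHARE are hypothetical tuning knobs.

BEP_RPS = 120.0          # break-even demand: below this, FaaS is cheaper
SERVERLESS_SHARE = 0.15  # steady-state fraction of demand sent to FaaS

def route(current_rps: float, vm_capacity_rps: float) -> dict:
    """Split incoming demand (requests/s) between VMs and serverless."""
    if current_rps < BEP_RPS:
        # Below the break-even point, serve everything on serverless.
        return {"vm": 0.0, "serverless": current_rps}
    overflow = max(0.0, current_rps - vm_capacity_rps)
    steady = SERVERLESS_SHARE * min(current_rps, vm_capacity_rps)
    return {"vm": current_rps - overflow - steady,
            "serverless": overflow + steady}

print(route(current_rps=80.0, vm_capacity_rps=500.0))
print(route(current_rps=700.0, vm_capacity_rps=500.0))
```

Keeping a steady serverless share (rather than using serverless only during scale-up) keeps the serverless path warm, which mitigates cold starts exactly when a demand spike arrives.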
We also believe that serverless functions can be used as an alternative to VMs to offload the lightweight computations in a distributed application such as a scientific workflow [76], where small tasks requiring more concurrency and elasticity can be implemented as serverless functions, while tasks with longer computation times and larger resource requirements stay on VMs. One can leverage the "utilization" of a computation, i.e. how long the computation runs and how often it needs to be executed, to decide whether the computation should be directed to (and executed on) a dedicated VM or a serverless platform. The problem is how to optimally distribute computations to minimize the total cost. This is a challenging problem given the inherent performance-cost trade-offs: VMs are cheaper for high-utilization (long-running and frequent) computations, while serverless platforms are cheaper for low-utilization (short-running and infrequent) computations and have the advantage of elasticity.
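The utilization-based decision can be expressed as a simple cost comparison, sketched below with illustrative placeholder prices (not quoted rates from any provider).

```python
import math

# Illustrative VM-vs-serverless cost comparison driven by "utilization":
# how long a task runs and how often it executes. Prices are placeholders.

VM_PRICE_PER_HOUR = 0.10          # hypothetical dedicated-VM rate
FAAS_PRICE_PER_100MS = 0.000004   # hypothetical per-invocation compute rate

def cheaper_target(task_ms: float, runs_per_hour: float) -> str:
    faas_cost = runs_per_hour * math.ceil(task_ms / 100) * FAAS_PRICE_PER_100MS
    # A dedicated VM costs the same whether busy or idle.
    return "serverless" if faas_cost < VM_PRICE_PER_HOUR else "vm"

print(cheaper_target(task_ms=200, runs_per_hour=100))     # low utilization -> serverless
print(cheaper_target(task_ms=30_000, runs_per_hour=120))  # high utilization -> vm
```

With these placeholder rates, the crossover sits near full-time utilization of one VM; real break-even points depend on the actual prices and on the elasticity the workload needs.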
Finally, developers have indeed started to leverage services from different cloud providers. A case study is presented in [16], where an invoicing application is built using various best-in-class services from different commercial cloud providers. The application is built using Google's AI and image recognition services along with two of Amazon's services (Lambda and API Gateway).

§ 7 CONCLUSION

Serverless computing has gained significant popularity in recent years. It offers an easy development model and back-end management, along with key performance benefits and a "pay as you go" pricing model. There is a significant body of research addressing various aspects of serverless computing, such as benchmarking and improving the performance of commercial and open-source serverless platforms, new virtualization techniques for the execution environment, and studying the feasibility of serverless computing for a variety of cloud applications. In this paper, we look at these studies from an application developer's perspective and discuss how they can help her make informed decisions regarding her serverless application. We argue that serverless computing is becoming a popular choice to deploy various cloud applications for its distinct cost and performance benefits. While serverless adoption is picking up, there are still a number of challenges that need to be addressed. We identify potential challenges and open issues that must be addressed to make serverless computing a viable option to deploy cloud applications. We argue that pro-active approaches to configure resources for serverless functions can address the performance uncertainty issue, while frameworks that decompose serverless applications and leverage various cloud services at the same time can reduce the operational cost as well as enhance the performance of cloud applications.
papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/bXe1agiq9LN/Initial_manuscript_md/Initial_manuscript.md
ADDED
The diff for this file is too large to render. See raw diff.

papers/JSYS/JSYS 2021/JSYS 2021 Mar_Papers/bXe1agiq9LN/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ [SOLUTION] DONUT PAXOS: A RECONFIGURABLE CONSENSUS PROTOCOL

Anonymous authors

Paper under double-blind review

§ ABSTRACT

State machine replication protocols, like MultiPaxos and Raft, are at the heart of numerous distributed systems. To tolerate machine failures, these protocols must replace failed machines with new machines, a process known as reconfiguration. Reconfiguration has become increasingly important over time as the need for frequent reconfiguration has grown. Despite this, reconfiguration has largely been neglected in the literature. In this paper, we present Donut Paxos and Donut MultiPaxos, a reconfigurable consensus protocol and a reconfigurable state machine replication protocol, respectively. Our protocols can perform a reconfiguration with little to no impact on the latency or throughput of command processing; they can perform a reconfiguration in a few milliseconds; and they present a framework that can be generalized to other replication protocols in a way that previous reconfiguration techniques cannot. We provide proofs of correctness for the protocols and optimizations, and present empirical results from an open source implementation.

§ 1 INTRODUCTION

Many distributed systems [4, 6, 7, 11, 13] rely on a state machine replication protocol, like MultiPaxos [14] or Raft [29], to keep multiple replicas of their data in sync. Over time, machines fail, and if too many machines in a state machine replication protocol fail, the protocol grinds to a halt. Thus, state machine replication protocols have to replace failed machines with new machines as the protocol runs, a process known as reconfiguration.

Reconfiguration is an essential component of state machine replication. It is not an optimization or an afterthought. Without a reconfiguration protocol in place, a state machine replication protocol will inevitably stop working; it's just a matter of when. Despite this, reconfiguration has largely been neglected by the academic literature. Researchers have invented dozens of state machine replication protocols, yet many papers either discuss reconfiguration briefly with no evaluation [27, 31-33], propose theoretically safe but inefficient reconfiguration protocols [15, 22], or do not discuss reconfiguration at all [2, 3, 16, 24, 25].

Ignoring reconfiguration has never been ideal, but we have largely been able to get away with it. Historically, state machine replication protocols were deployed on a fixed set of machines, and reconfiguration was used only to replace failed machines with new machines, an infrequent occurrence. This made it easy to leave reconfiguration out of sight, out of mind. Recently, however, systems have become increasingly elastic, and the need for frequent reconfiguration has grown. These elastic systems don't just perform reconfigurations reactively when machines fail; they reconfigure proactively. For example, cloud databases can proactively request more resources to handle workload spikes, and orchestration tools like Kubernetes [12] are making it easier to build these types of elastic systems. Similarly, in environments with short-lived cloud instances (as with serverless computing and spot instances) and in mobile edge and Internet of Things settings, protocols must adapt to a changing set of machines much more frequently. This frequent need for reconfiguration makes it hard to ignore reconfiguration any longer.

In this paper, we present a reconfigurable consensus protocol and a reconfigurable state machine replication protocol: Donut Paxos and Donut MultiPaxos. Compared to existing reconfigurable protocols, our protocols have the following desirable properties.
Little to No Performance Degradation. Donut MultiPaxos can perform a reconfiguration without significantly degrading the throughput or latency of processing client commands. For example, we show that reconfiguration has less than a 4% effect on the median of throughput and latency measurements (Section 7).

Quick Reconfiguration. Donut MultiPaxos can perform a reconfiguration quickly. Reconfiguring to a new set of machines takes one round trip of communication in the normal case (Section 4). Empirically, this requires only a few milliseconds within a single data center (Section 7).

Theoretical Insights. Donut Paxos generalizes Vertical Paxos [19], it is the first protocol to achieve the theoretical lower bound on Fast Paxos [16] quorum sizes, and it corrects errors in DPaxos [28] (Section 6).

Proven Safe. We describe Donut Paxos and Donut MultiPaxos precisely and prove that both are safe (Sections 3, 4, 5, A, and B). Unfortunately, this is not often done for reconfiguration protocols [26, 31-33].

In a nutshell, our protocols work by leveraging two key design ideas. The first is to decouple reconfiguration from the standard processing path. Many replication protocols [20, 22, 27, 29] have machines that are responsible both for processing commands and for orchestrating reconfigurations. By contrast, Donut Paxos introduces a set of distinguished matchmaker machines that are solely responsible for managing reconfigurations. These matchmakers act as a source of truth; they always know the current configuration. This decoupling, along with a number of novel protocol optimizations, allows us to perform reconfiguration quickly in the background, without degrading performance.

The second design point is to reconfigure across rounds, a technique known as vertical reconfiguration [19]. With vertical reconfiguration, every round of consensus can execute using a different configuration. Replication protocols based on classical MultiPaxos instead assume a totally ordered log of chosen commands and reconfigure across log entries, known as horizontal reconfiguration. Many state machine replication protocols do not have logs and cannot perform horizontal reconfiguration [2, 8, 27, 30, 33]. Vertical reconfiguration, on the other hand, is more generally applicable and can be more easily adopted by other replication protocols.
§ 2 BACKGROUND

§ 2.1 SYSTEM MODEL

Throughout the paper, we assume an asynchronous network model in which messages can be arbitrarily dropped, delayed, and reordered. We assume machines can fail by crashing but do not act maliciously. We assume that machines operate at arbitrary speeds, and we do not assume clock synchronization. We assume a discovery service that nodes can use to find each other, but do not require that this service be strongly consistent; a node can safely communicate with outdated nodes, so a system like DNS would suffice. Every protocol discussed in this paper assumes (for liveness) that at most $f$ machines will fail, for some configurable $f$.

§ 2.2 PAXOS

A consensus protocol is a protocol that selects a single value from a set of proposed values. Paxos [14, 17] is one of the oldest and most popular consensus protocols. A Paxos deployment that tolerates $f$ faults consists of an arbitrary number of clients, $f + 1$ nodes called proposers, and $2f + 1$ nodes called acceptors, as illustrated in Figure 1. To reach consensus on a value, an execution of Paxos is divided into a number of rounds, each round having two phases: Phase 1 and Phase 2. Every round is orchestrated by a single pre-determined proposer. The set of rounds can be any unbounded, totally ordered set. It is common to let the set of rounds be the set of lexicographically ordered integer pairs $(r, \mathit{id})$ where $r$ is an integer and $\mathit{id}$ is a unique proposer id; a proposer is responsible for executing every round that contains its id.

When a proposer executes a round, say round $i$, it attempts to get some value $x$ chosen in that round. Paxos is a consensus protocol, so it must only choose a single value. Thus, Paxos must ensure that if a value $x$ is chosen in round $i$, then no other value besides $x$ can ever be chosen in any round less than $i$. This is the purpose of Paxos' two phases. In Phase 1 of round $i$, the proposer contacts the acceptors to (a) learn of any value that may have already been chosen in any round less than $i$ and (b) prevent any new values from being chosen in any round less than $i$. In Phase 2, the proposer proposes a value to the acceptors, and the acceptors vote on whether or not to choose it. The proposer will only propose a value $x$ in Phase 2 if it learned through Phase 1 that no other value has been or will be chosen in a previous round.
Figure 1: Paxos communication diagram ($f = 1$).

More concretely, Paxos executes as follows, as illustrated in Figure 1. When a client wants to propose a value $x$, it sends $x$ to a proposer $p$. Upon receiving $x$, $p$ begins executing one round of Paxos, say round $i$. First, it executes Phase 1: it sends PHASE1A$\langle i \rangle$ messages to the acceptors. An acceptor ignores a PHASE1A$\langle i \rangle$ message if it has already received a message in a larger round. Otherwise, it replies with a PHASE1B$\langle i, vr, vv \rangle$ message containing the largest round $vr$ in which the acceptor voted and the value it voted for, $vv$. If the acceptor hasn't voted yet, then $vr = -1$ and $vv = \text{null}$. When the proposer receives PHASE1B messages from a majority of the acceptors, Phase 1 ends and Phase 2 begins.

At the start of Phase 2, the proposer uses the PHASE1B messages that it received in Phase 1 to select a value $x$ such that no value other than $x$ has been or will be chosen in any round less than $i$. Specifically, $x$ is the vote value associated with the largest received vote round, or any value if no acceptor had voted (see [17] for details). Then, the proposer sends PHASE2A$\langle i, x \rangle$ messages to the acceptors. An acceptor ignores a PHASE2A$\langle i, x \rangle$ message if it has already received a message in a larger round. Otherwise, it votes for $x$ and sends back a PHASE2B$\langle i \rangle$ message to the proposer. If a majority of acceptors vote for the value, then the value is chosen, and the proposer informs the client.

§ 2.3 FLEXIBLE PAXOS

Paxos deploys a set of $2f + 1$ acceptors, and proposers communicate with at least a majority of the acceptors in Phase 1 and in Phase 2. Flexible Paxos [10] is a Paxos variant that generalizes the notion of a majority to that of a quorum. Specifically, Flexible Paxos introduces the notion of a configuration $C = (A; P_1; P_2)$. $A$ is a set of acceptors, and $P_1$ and $P_2$ are sets of quorums, where each quorum is a subset of $A$. A configuration satisfies the property that every quorum in $P_1$ (known as a Phase 1 quorum) intersects every quorum in $P_2$ (known as a Phase 2 quorum). For a configuration to tolerate $f$ failures, there must exist some Phase 1 quorum and some Phase 2 quorum of non-failed machines despite an arbitrary set of $f$ failures.

Flexible Paxos is identical to Paxos with the exception that proposers now communicate with an arbitrary Phase 1 quorum in Phase 1 and an arbitrary Phase 2 quorum in Phase 2. In the remainder of this paper, we assume that all protocols operate using quorums from an arbitrary configuration rather than majorities from a fixed set of $2f + 1$ acceptors.
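To make the quorum-intersection requirement concrete, the check below verifies it for a candidate configuration. This is a small illustrative sketch, not code from the paper's implementation.

```python
from itertools import combinations

def is_valid_configuration(acceptors, p1_quorums, p2_quorums):
    """Check the Flexible Paxos property: every Phase 1 quorum
    intersects every Phase 2 quorum, and quorums only name acceptors."""
    a = set(acceptors)
    quorums_ok = all(set(q) <= a for q in p1_quorums + p2_quorums)
    intersect_ok = all(set(q1) & set(q2)
                       for q1 in p1_quorums for q2 in p2_quorums)
    return quorums_ok and intersect_ok

# Majority quorums over 2f+1 = 5 acceptors are one valid instance.
acceptors = ["a1", "a2", "a3", "a4", "a5"]
majorities = [list(q) for q in combinations(acceptors, 3)]
print(is_valid_configuration(acceptors, majorities, majorities))  # True

# Grid quorums: rows as Phase 1 quorums, columns as Phase 2 quorums.
rows = [["a1", "a2"], ["a3", "a4"]]
cols = [["a1", "a3"], ["a2", "a4"]]
print(is_valid_configuration(["a1", "a2", "a3", "a4"], rows, cols))  # True
```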
§ 3 DONUT PAXOS

We now present Donut Paxos. To ease understanding, we first describe a simplified version of Donut Paxos that is easy to understand but also naively inefficient. We then upgrade the protocol to the complete, efficient version by way of a number of optimizations.

§ 3.1 OVERVIEW AND INTUITION

Donut Paxos is largely identical to Paxos. Like Paxos, a Donut Paxos deployment includes an arbitrary number of clients, a set of at least $f + 1$ proposers, and some set of acceptors, as illustrated in Figure 2. Paxos assumes that a single, fixed configuration of acceptors is used for every round. The big difference between Paxos and Donut Paxos is that Donut Paxos allows every round to have a different configuration of acceptors. Round 0 may use some configuration $C_0$, while round 1 may use some completely different configuration $C_1$. This idea was first introduced by Vertical Paxos [19].

Recall from Section 2 that a Paxos proposer in round $i$ executes Phase 1 in order to (1) learn of any value that may have been chosen in a round less than $i$ and (2) prevent any new values from being chosen in any round less than $i$. To do so, the proposer contacts the fixed set of acceptors. A Donut Paxos proposer must also execute Phase 1 to establish that these two properties hold. The difference is that there is no longer a single fixed configuration of acceptors to contact. Instead, a Donut Paxos proposer has to contact all of the configurations used in rounds less than $i$.

However, every round uses a different configuration of acceptors, so how does the proposer of round $i$ know which acceptors to contact in Phase 1? To resolve this question, a Donut Paxos deployment also includes a set of $2f + 1$ matchmakers. When a proposer begins executing round $i$, it selects a configuration $C_i$. It sends the configuration $C_i$ to the matchmakers, and the matchmakers reply with the configurations used in previous rounds. We call this the Matchmaking phase.

Figure 2: Donut Paxos ($f = 1$).

The proposer then executes Phase 1 of Paxos with these prior configurations, and then executes Phase 2 with configuration $C_i$, as illustrated in Figure 2. At first, the extra round trip of communication with the matchmakers and the large number of configurations in Phase 1 make Donut Paxos look slow. This is for ease of explanation; later, we will eliminate these costs (Section 3.4 - Section 3.6).

§ 3.2 DETAILS

Every matchmaker maintains a log $L$ of configurations indexed by round. That is, $L[i]$ stores the configuration of round $i$. When a proposer receives a request $x$ from a client and begins executing round $i$, it first selects a configuration $C_i$ to use in round $i$. It then sends a MATCHA$\langle i, C_i \rangle$ message to all of the matchmakers.

When a matchmaker receives a MATCHA$\langle i, C_i \rangle$ message, it checks whether it has previously received a MATCHA$\langle j, C_j \rangle$ message for some round $j \geq i$. If so, the matchmaker ignores the MATCHA$\langle i, C_i \rangle$ message. Otherwise, it inserts $C_i$ in log entry $i$ and computes the set $H_i$ of previous configurations in the log: $H_i = \{(j, C_j) \mid j < i, C_j \in L\}$. It then replies to the proposer with a MATCHB$\langle i, H_i \rangle$ message. Matchmaker pseudocode is given in Algorithm 1. An example execution of a matchmaker is illustrated in Figure 3.

When the proposer in round $i$ receives MATCHB$\langle i, H_i^1 \rangle, \ldots,$ MATCHB$\langle i, H_i^{f+1} \rangle$ from $f + 1$ matchmakers, it computes $H_i = \bigcup_{j=1}^{f+1} H_i^j$. For example, with $f = 1$ and $i = 2$, if the proposer in round 2 receives MATCHB$\langle 2, \{(0, C_0)\} \rangle$ and MATCHB$\langle 2, \{(1, C_1)\} \rangle$, it computes $H_2 = \{(0, C_0), (1, C_1)\}$. Note that every round is statically assigned to a single proposer and that a proposer selects a single configuration for a round, so if two matchmakers return configurations for the same round, they are guaranteed to be the same.

Figure 3: A matchmaker's log over time. (a) Initially, the matchmaker's log is empty. (b) Then, the matchmaker receives MATCHA$\langle 0, C_0 \rangle$. It inserts $C_0$ in log entry 0 and returns MATCHB$\langle 0, \emptyset \rangle$ since the log does not contain any configuration in any round less than 0. (c) The matchmaker then receives MATCHA$\langle 2, C_2 \rangle$. It inserts $C_2$ in log entry 2 and returns MATCHB$\langle 2, \{(0, C_0)\} \rangle$. (d) It then receives MATCHA$\langle 3, C_3 \rangle$, inserts $C_3$ in log entry 3, and returns MATCHB$\langle 3, \{(0, C_0), (2, C_2)\} \rangle$. At this point, if the matchmaker were to receive MATCHA$\langle 1, C_1 \rangle$, it would ignore it.
Algorithm 1 Matchmaker Pseudocode

State: a log $L$ indexed by round, initially empty

upon receiving MATCHA$\langle i, C_i \rangle$ from proposer $p$ do
  if $\exists$ a configuration $C_j$ in round $j \geq i$ in $L$ then
    ignore the MATCHA$\langle i, C_i \rangle$ message
  else
    $H_i \leftarrow \{(j, C_j) \mid C_j \in L\}$
    $L[i] \leftarrow C_i$
    send MATCHB$\langle i, H_i \rangle$ to $p$
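For readers who prefer running code, here is a direct Python transcription of Algorithm 1. It is a sketch that omits message transport and failures; configurations are treated as opaque values.

```python
# Python transcription of Algorithm 1 (matchmaker); a sketch that omits
# networking and failure handling. Configurations are opaque values here.

class Matchmaker:
    def __init__(self):
        self.log = {}  # round -> configuration (the log L)

    def on_match_a(self, i, c_i):
        """Handle MATCHA<i, C_i>; return the MATCHB reply or None if ignored."""
        if any(j >= i for j in self.log):
            return None  # already saw a configuration for a round >= i
        h_i = {(j, c_j) for j, c_j in self.log.items()}  # prior configs
        self.log[i] = c_i
        return ("MATCHB", i, h_i)

# Proposer side: union the H_i sets from f+1 matchmakers.
f = 1
matchmakers = [Matchmaker() for _ in range(2 * f + 1)]
replies = [m.on_match_a(2, "C2") for m in matchmakers[: f + 1]]
h_2 = set().union(*(h for (_, _, h) in replies))
print(h_2)  # empty: no earlier rounds were registered
```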
The proposer then ends the Matchmaking phase and begins Phase 1. It sends PHASE1A messages to every acceptor in every configuration in $H_i$ and waits to receive PHASE1B messages from a Phase 1 quorum of every configuration. Continuing the previous example, the proposer sends PHASE1A messages to every acceptor in $C_0$ and $C_1$ and waits for PHASE1B messages from a Phase 1 quorum of $C_0$ and a Phase 1 quorum of $C_1$. The proposer then runs Phase 2 with $C_i$.

Acceptor and proposer pseudocode are shown in Algorithm 2 and Algorithm 3 respectively. To keep things simple, we assume that round numbers are integers, but generalizing to an arbitrary totally ordered set is straightforward. A Donut Paxos acceptor is identical to a Paxos acceptor. A Donut Paxos proposer is nearly identical to a Flexible Paxos proposer, with the exception of the Matchmaking phase and the configurations used in Phase 1 and Phase 2. For clarity of exposition, we omit straightforward details surrounding re-sending dropped messages and nacking ignored messages.

Algorithm 2 Acceptor Pseudocode

State: the largest seen round $r$, initially -1
State: the largest round $vr$ voted in, initially -1
State: the value $vv$ voted for in round $vr$, initially null

upon receiving PHASE1A$\langle i \rangle$ from $p$ with $i > r$ do
  $r \leftarrow i$
  send PHASE1B$\langle i, vr, vv \rangle$ to $p$

upon receiving PHASE2A$\langle i, x \rangle$ from $p$ with $i \geq r$ do
  $r, vr, vv \leftarrow i, i, x$
  send PHASE2B$\langle i \rangle$ to $p$
Algorithm 3 Proposer Pseudocode. Modifications to a Paxos proposer are underlined and shown in blue.

State: a value $x$, initially null
State: a round $i$, initially -1
State: the configuration $C_i$ for round $i$, initially null
State: the prior configurations $H_i$ for round $i$, initially null

upon receiving value $y$ from a client do
  $i \leftarrow$ next largest round owned by this proposer
  $x \leftarrow y$
  $C_i \leftarrow$ an arbitrary configuration
  send MATCHA$\langle i, C_i \rangle$ to all of the matchmakers

upon receiving MATCHB$\langle i, H_i^1 \rangle, \ldots,$ MATCHB$\langle i, H_i^{f+1} \rangle$ from $f + 1$ matchmakers do
  $H_i \leftarrow \bigcup_{j=1}^{f+1} H_i^j$
  send PHASE1A$\langle i \rangle$ to every acceptor in $H_i$

upon receiving PHASE1B$\langle i, -, - \rangle$ from a Phase 1 quorum of every configuration in $H_i$ do
  $k \leftarrow$ the largest $vr$ in any PHASE1B$\langle i, vr, vv \rangle$
  if $k \neq -1$ then
    $x \leftarrow$ the corresponding $vv$ in round $k$
  send PHASE2A$\langle i, x \rangle$ to every acceptor in $C_i$

upon receiving PHASE2B$\langle i \rangle$ from a Phase 2 quorum do
  $x$ is chosen; inform the client
§ 3.3 PROOF OF SAFETY

We now prove that Donut Paxos is safe; i.e., every execution of Donut Paxos chooses at most one value.

Proof. Our proof is based on the Paxos safety proof in [16]. We prove, for every round $i$, the statement $P(i)$: "if a proposer proposes a value $v$ in round $i$ (i.e., sends a PHASE2A message for value $v$ in round $i$), then no value other than $v$ has been or will be chosen in any round less than $i$." At most one value is ever proposed in a given round, so at most one value is ever chosen in a given round. Thus, $P(i)$ suffices to prove that Donut Paxos is safe, for the following reason. Assume for contradiction that Donut Paxos chooses distinct values $x$ and $y$ in rounds $j$ and $i$ with $j < i$. Some proposer must have proposed $y$ in round $i$, so $P(i)$ ensures that no value other than $y$ could have been chosen in round $j$. But $x$ was chosen, a contradiction.

We prove $P(i)$ by strong induction on $i$. $P(0)$ is vacuously true because there are no rounds less than 0. For the general case $P(i)$, we assume $P(0), \ldots, P(i-1)$. We perform a case analysis on the proposer's pseudocode (Algorithm 3). Either $k$ is -1 or it is not (line 11). First, assume it is not. In this case, the proposer proposes $x$, the value proposed in round $k$ (line 12). We perform a case analysis on round $j$ to show that no value other than $x$ has been or will be chosen in any round $j < i$.

Case 1: $j > k$. We show that no value has been or will be chosen in round $j$. Recall that at the end of the Matchmaking phase, the proposer computed the set $H_i$ of prior configurations using responses from a set $M_i$ of $f + 1$ matchmakers. Either $H_i$ contains a configuration $C_j$ in round $j$ or it doesn't.

First, suppose it does. Then, the proposer sent PHASE1A$\langle i \rangle$ messages to all of the acceptors in $C_j$. A Phase 1 quorum of these acceptors, say $Q$, received PHASE1A$\langle i \rangle$ messages and replied with PHASE1B messages. Thus, every acceptor in $Q$ set its round $r$ to $i$, and in doing so, promised to never vote in any round less than $i$. Moreover, none of the acceptors in $Q$ had voted in any round greater than $k$. So, every acceptor in $Q$ has not voted and never will vote in round $j$. For a value $v'$ to be chosen in round $j$, it must receive votes from some Phase 2 quorum $Q'$ of round $j$ acceptors. But $Q$ and $Q'$ necessarily intersect, so this is impossible. Thus, no value has been or will be chosen in round $j$.

Now suppose that $H_i$ does not contain a configuration for round $j$. $H_i$ is the union of $f + 1$ MATCHB messages from the $f + 1$ matchmakers in $M_i$. Thus, if $H_i$ does not contain a configuration for round $j$, then none of the MATCHB messages did either. This means that for every matchmaker $m \in M_i$, when $m$ received MATCHA$\langle i, C_i \rangle$, it did not contain a configuration for round $j$ in its log. Moreover, by processing the MATCHA$\langle i, C_i \rangle$ request, the matchmaker is guaranteed to never process a MATCHA$\langle j, C_j \rangle$ request in the future. Thus, every matchmaker in $M_i$ has not processed a MATCHA request in round $j$ and never will. For a value to be chosen in round $j$, the proposer executing round $j$ must first receive replies from $f + 1$ matchmakers, say $M_j$, in round $j$. But $M_i$ and $M_j$ necessarily intersect, so this is impossible. Thus, no value has been or will be chosen in round $j$.

Case 2: $j = k$. In a given round, at most one value is proposed, let alone chosen. $x$ is the value proposed in round $k$, so no other value could be chosen in round $k$.

Case 3: $j < k$. By induction, $P(k)$ states that no value other than $x$ has been or will be chosen in any round less than $k$. This includes round $j$.

Finally, if $k$ is -1, then we are in the same situation as in Case 1: no value has been or will be chosen in any round $j < i$.
§ 3.4 GARBAGE COLLECTION (HOW)

We've discussed how a proposer can change its round and introduce a new configuration. Now, we explain how to shut down old configurations. At the beginning of round $i$, a proposer $p$ executes the Matchmaking phase and computes a set $H_i$ of configurations in rounds less than $i$. The proposer then executes Phase 1 with the acceptors in these configurations. Assume $H_i$ contains a configuration $C_j$ for a round $j < i$. If we prematurely shut down the acceptors in $C_j$, then proposer $p$ will get stuck in Phase 1, waiting for PHASE1B messages from a quorum of nodes that have been shut down. Therefore, we cannot shut down the acceptors in a configuration $C_j$ until we are sure that the matchmakers will never again return $C_j$ during the Matchmaking phase.

Thus, we extend Donut Paxos to allow matchmakers to garbage collect configurations from their logs, ensuring that the garbage collected configurations will not be returned during any future Matchmaking phase. More concretely, a proposer $p$ can now send a GARBAGEA$\langle i \rangle$ command to the matchmakers, informing them to garbage collect all configurations in rounds less than $i$. When a matchmaker receives a GARBAGEA$\langle i \rangle$ message, it deletes log entry $L[j]$ for every round $j < i$. It then updates a garbage collection watermark $w$ to the maximum of $w$ and $i$ and sends back a GARBAGEB$\langle i \rangle$ message to the proposer. See Algorithm 4.
Algorithm 4 Matchmaker Pseudocode (with GC). Changes to Algorithm 1 are underlined and shown in blue.

State: a log $L$ indexed by round, initially empty
State: a garbage collection watermark $w$, initially 0

upon receiving GARBAGEA$\langle i \rangle$ from proposer $p$ do
  delete $L[j]$ for all $j < i$
  $w \leftarrow \max(w, i)$
  send GARBAGEB$\langle i \rangle$ to $p$

upon receiving MATCHA$\langle i, C_i \rangle$ from proposer $p$ do
  if $i < w$ or $\exists$ $C_j$ in round $j \geq i$ in $L$ then
    ignore the MATCHA$\langle i, C_i \rangle$ message
  else
    $H_i \leftarrow \{(j, C_j) \mid C_j \in L\}$
    $L[i] \leftarrow C_i$
    send MATCHB$\langle i, w, H_i \rangle$ to $p$
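Extending the earlier Python matchmaker sketch with Algorithm 4's watermark logic is straightforward; again, this is an illustrative sketch, not the paper's implementation.

```python
# Matchmaker from the earlier sketch, extended with Algorithm 4's GC logic.

class GCMatchmaker:
    def __init__(self):
        self.log = {}  # round -> configuration
        self.w = 0     # garbage collection watermark

    def on_garbage_a(self, i):
        self.log = {j: c for j, c in self.log.items() if j >= i}
        self.w = max(self.w, i)
        return ("GARBAGEB", i)

    def on_match_a(self, i, c_i):
        if i < self.w or any(j >= i for j in self.log):
            return None  # ignore: round garbage collected or superseded
        h_i = {(j, c_j) for j, c_j in self.log.items()}
        self.log[i] = c_i
        return ("MATCHB", i, self.w, h_i)

m = GCMatchmaker()
m.on_match_a(1, "C1")
m.on_garbage_a(2)
print(m.on_match_a(1, "C1"))   # None: round 1 is below the watermark
print(m.on_match_a(3, "C3"))   # ('MATCHB', 3, 2, set())
```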
We also update the Matchmaking phase in three ways. First, a matchmaker ignores a MATCHA$\langle i, C_i \rangle$ message if round $i$ has been garbage collected (i.e., if $i < w$). Second, a matchmaker returns its garbage collection watermark $w$ in every MATCHB message that it sends. Third, when a proposer receives MATCHB$\langle i, w_1, H_i^1 \rangle, \ldots,$ MATCHB$\langle i, w_{f+1}, H_i^{f+1} \rangle$ from $f + 1$ matchmakers, it again computes $H_i = \bigcup_{j=1}^{f+1} H_i^j$. It then computes $w = \max_{j=1}^{f+1} w_j$ and prunes from $H_i$ every configuration in a round less than $w$. In other words, if any of the $f + 1$ matchmakers has garbage collected round $j$, then the proposer also garbage collects round $j$.

Once a proposer receives GARBAGEB$\langle i \rangle$ messages from a set $M$ of at least $f + 1$ matchmakers, it is guaranteed that no future Matchmaking phase will include any configuration in any round less than $i$. Why? Consider a future Matchmaking phase run with $f + 1$ matchmakers $M'$. $M$ and $M'$ intersect, so some matchmaker in the intersection has a garbage collection watermark at least as large as $i$. Thus, once a configuration has been garbage collected by $f + 1$ matchmakers, we can shut down the acceptors in the configuration.
§ 3.5 GARBAGE COLLECTION (WHEN)

Once a configuration has been garbage collected, it is safe to shut it down, but when is it safe to garbage collect a configuration? It is not always safe. For example, if we prematurely garbage collect configuration $C_j$ in round $j$, a future proposer in round $i > j$ may not learn about a value $v$ chosen in round $j$ and may then erroneously get a value other than $v$ chosen in round $i$. There are three situations in which it is safe for a proposer $p_i$ in round $i$ to issue a GARBAGEA$\langle i \rangle$ command. We explain the three situations and provide intuition on why they are safe. Later, we'll see that all three scenarios are important for Donut MultiPaxos. See Section A for a safety proof.

Scenario 1. If the proposer $p_i$ gets a value $x$ chosen in round $i$, then it can safely issue a GARBAGEA$\langle i \rangle$ command. Why? When a proposer $p_j$ in round $j > i$ executes Phase 1, it will learn about the value $x$ and propose $x$ in Phase 2. But first, it must establish that no value other than $x$ has been or will be chosen in any round less than $j$. The proposer $p_i$ already established this fact for all rounds less than $i$, so any communication with the configurations in these rounds is redundant. Thus, we can garbage collect them.

Scenario 2. If the proposer $p_i$ executes Phase 1 in round $i$ and finds $k = -1$ (see Algorithm 3), then it can safely issue a GARBAGEA$\langle i \rangle$ command. Recall that if $k = -1$, then no value has been or will be chosen in any round less than $i$. This situation is similar to Scenario 1. Any future proposer $p_j$ in round $j > i$ does not have to redundantly communicate with the configurations in rounds less than $i$, since $p_i$ already established that no value has been chosen in these rounds.

Scenario 3. If the proposer $p_i$ learns that a value $x$ has already been chosen and has been stored on $f + 1$ non-acceptor machines (e.g., $f + 1$ proposers), then the proposer can safely issue a GARBAGEA$\langle i \rangle$ command after it informs a Phase 2 quorum of acceptors in $C_i$ of this fact. Any future proposer $p_j$ in round $j > i$ will contact a Phase 1 quorum of $C_i$ and encounter at least one acceptor that knows the value $x$ has already been chosen. When this acceptor informs $p_j$ that a value $x$ has already been chosen, $p_j$ stops executing the protocol entirely and simply fetches the value $x$ from one of the $f + 1$ machines that store it. Note that storing the value on $f + 1$ machines ensures that some machine will store the value despite $f$ failures. The decision of exactly which $f + 1$ machines is not important.

Later, we'll extend this garbage collection protocol to Donut MultiPaxos (Section 4) and see empirically that matchmakers usually return just a single configuration (Section 7).
§ 3.6 OPTIMIZATIONS

We now present a couple of protocol optimizations. First, note that a proposer can proactively run the Matchmaking phase in round $i$ before it hears from a client. This is similar to proactively executing Phase 1, a standard optimization [9]. We call this optimization proactive matchmaking.

Second, assume that the proposer in round $i$ has executed the Matchmaking phase and Phase 1. Through Phase 1, it finds that $k = -1$ and thus learns that no value has been chosen in any round less than $i$ (see the safety proof above). Assume that before executing Phase 2, the proposer transitions from round $i$ to round $i + 1$ as part of a reconfiguration. After executing the Matchmaking phase in round $i + 1$, the proposer can skip Phase 1 and proceed directly to Phase 2. Why? The proposer established in round $i$ that no value has been or will be chosen in any round less than $i$. Moreover, because it did not run Phase 2 in round $i$, it also knows that no value has been or will be chosen in round $i$. Together, these imply that no value has been or will be chosen in any round less than $i + 1$. Normally, the proposer would run Phase 1 in round $i + 1$ to establish this fact, but since it has already established it, it can instead proceed directly to Phase 2. We call this optimization Phase 1 bypassing.

Phase 1 bypassing depends on a proposer being the leader of round $i$ and of the next round $i + 1$. We can construct a set of rounds such that this is always the case. Let the set of rounds be the set of lexicographically ordered tuples $(r, \mathit{id}, s)$ where $r$ and $s$ are both integers and $\mathit{id}$ is a proposer id. A proposer is responsible for all the rounds that contain its id. With this set of rounds, the proposer $p$ in round $(r, p, s)$ always owns the next round $(r, p, s + 1)$. For example, given two proposers $a$ and $b$, we have the following ordering on rounds:
|
| 252 |
+
|
| 253 |
+
$$
(0,a,0) < (0,a,1) < (0,a,2) < (0,a,3) < \cdots
$$

$$
(0,b,0) < (0,b,1) < (0,b,2) < (0,b,3) < \cdots
$$

$$
(1,a,0) < (1,a,1) < (1,a,2) < (1,a,3) < \cdots
$$

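
To make the round scheme concrete, here is a minimal sketch in C++ (our own naming; the protocol's actual implementation is in Scala) of rounds as lexicographically ordered $(r, id, s)$ tuples:

```cpp
#include <string>
#include <tuple>

// A round (r, id, s): r and s are integers and id is a proposer id.
// Rounds compare lexicographically, so the proposer that owns (r, id, s)
// also owns the next round (r, id, s + 1).
struct Round {
  int r;
  std::string id;
  int s;
  bool operator<(const Round& other) const {
    return std::tie(r, id, s) < std::tie(other.r, other.id, other.s);
  }
};

// A reconfiguring leader bumps s to advance rounds without losing ownership,
// which is exactly what Phase 1 bypassing requires.
Round next(const Round& x) { return {x.r, x.id, x.s + 1}; }
```
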
In the next section, we'll see that this optimization is essential for implementing Donut MultiPaxos with good performance. Also note that this optimization is not particular to Donut Paxos. Paxos and MultiPaxos can both take advantage of this optimization.
§ 4 DONUT MULTIPAXOS
§ 4.1 MULTIPAXOS
First, we summarize MultiPaxos. Whereas Paxos is a consensus protocol that agrees on a single value, MultiPaxos [14,35] is a state machine replication protocol that agrees on a sequence, or "log," of values. MultiPaxos manages multiple replicas of a state machine. Clients send state machine commands to MultiPaxos, MultiPaxos places the commands in a totally ordered log, and state machine replicas execute the commands in log order. By beginning in the same initial state and executing the same commands in the same order, all deterministic state machine replicas are kept in sync.

Figure 4: An example execution of MultiPaxos $\left( {f = 1}\right)$ . The leader is adorned with a crown.
To agree on a log of commands, MultiPaxos implements one instance of Paxos for every log entry. The $i$ th instance of Paxos chooses the command in log entry $i$ . More concretely, a MultiPaxos deployment that tolerates $f$ faults consists of an arbitrary number of clients, at least $f + 1$ proposers, a configuration $C$ of acceptors, and at least $f + 1$ replicas, as illustrated in Figure 4.
One of the proposers is elected leader in some round, say round $i$. We assume the leader knows that log entries up to and including log entry ${k}_{c}$ have already been chosen (e.g., by communicating with the replicas). We call this log entry the commit index. The leader then runs Phase 1 of Paxos in round $i$ for every log entry larger than ${k}_{c}$. Note that even though there are an infinite number of log entries larger than ${k}_{c}$, the leader can execute Phase 1 using a finite amount of information. In particular, the leader sends a single PHASE1A $\langle i\rangle$ message that acts as the PHASE1A message for every log entry larger than ${k}_{c}$. Also, an acceptor replies with a PHASE1B $\langle i,{vr},{vv}\rangle$ message only for log entries in which the acceptor has voted. The infinitely many log entries in which the acceptor has not yet voted do not yield an explicit PHASE1B message.

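
The following sketch (our own C++ types, not the paper's wire format) shows how an acceptor can answer a single PHASE1A $\langle i\rangle$ covering infinitely many log entries with a finite reply:

```cpp
#include <map>
#include <optional>
#include <string>

struct Vote { int vr; std::string vv; };                   // last vote round and value
struct Phase1B { int round; std::map<int, Vote> votes; };  // slot -> vote

// On PHASE1A<i> covering every slot above the commit index kc, reply only
// for the finitely many slots this acceptor has actually voted in.
std::optional<Phase1B> onPhase1A(int i, int kc, int& acceptorRound,
                                 const std::map<int, Vote>& votes) {
  if (i < acceptorRound) return std::nullopt;  // stale round: ignore (or NACK)
  acceptorRound = i;
  Phase1B reply{i, {}};
  for (const auto& [slot, vote] : votes)
    if (slot > kc) reply.votes.emplace(slot, vote);
  return reply;
}
```
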
The leader's knowledge about the log after Phase 1 can be characterized by the commit index ${k}_{c}$ and a pending index ${k}_{p}$ with ${k}_{c} \leq {k}_{p}$ , as shown in Figure 5. The commit index and pending index divide the log into three regions: a prefix of chosen log entries (Region 1), a suffix of unchosen log entries (Region 3), and a middle region of pending log entries (Region 2). More specifically:
Figure 5: A leader's knowledge of the log after Phase 1.
* Region 1 $\left\lbrack {0,{k}_{c}}\right\rbrack$: The leader knows that a command has been chosen in every log entry less than or equal to ${k}_{c}$.

* Region 3 $\left\lbrack {{k}_{p} + 1,\infty }\right)$: The leader knows that no command has been chosen (in any round less than $i$) in any log entry larger than ${k}_{p}$.

* Region 2 $\left\lbrack {{k}_{c} + 1,{k}_{p}}\right\rbrack$: If there is a command that may have already been chosen, then it appears between ${k}_{c}$ and ${k}_{p}$. Region 2 may also contain some log entries in which the leader knows a value has already been chosen, and it may contain some log entries in which the leader knows that no value has been chosen (we call these "holes"). A short sketch of this classification appears after this list.

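
As a small sketch (our own naming), the classification a leader applies to each log entry given ${k}_{c}$ and ${k}_{p}$ is:

```cpp
// kc: commit index; kp: pending index; kc <= kp after Phase 1.
enum class Region { Chosen, Pending, Unchosen };

Region regionOf(int slot, int kc, int kp) {
  if (slot <= kc) return Region::Chosen;   // Region 1: [0, kc]
  if (slot <= kp) return Region::Pending;  // Region 2: [kc + 1, kp]
  return Region::Unchosen;                 // Region 3: [kp + 1, infinity)
}
```
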
After Phase 1, the leader sends a PHASE2A message for every unchosen log entry in Region 2, proposing a "no-op" command for the holes. Simultaneously, the leader begins accepting client requests. When a client wants to propose a state machine command, it sends the command to the leader. The leader assigns log entries to commands in increasing order, beginning at ${k}_{p} + 1$ . It then runs Phase 2 of Paxos to get the command chosen in that entry in round $i$ . Once the leader learns that a command has been chosen in a given log entry, it informs the replicas. Replicas insert chosen commands into their logs and execute the logs in prefix order, sending the results of execution back to the clients. This execution is illustrated in Figure 4.
It is critical to note that a leader performs Phase 1 of Paxos only once per round, not once per command. In other words, Phase 1 is not performed during normal operation. It is performed only when the leader fails and a new leader is elected in a larger round, an uncommon occurrence.
§ 4.2 DONUT MULTIPAXOS
We first extend Donut Paxos to Donut MultiPaxos with proactive matchmaking but without Phase 1 bypassing or garbage collection. We'll see how to incorporate these two momentarily. The extension from Donut Paxos to Donut MultiPaxos is analogous to the extension of Paxos to MultiPaxos. Donut MultiPaxos reaches consensus on a totally ordered log of state machine commands, one log entry at a time, using one instance of Donut Paxos for every log entry.
More concretely, a Donut MultiPaxos deployment consists of an arbitrary number of clients, at least $f + 1$ proposers, a set of ${2f} + 1$ matchmakers, a dynamic set of acceptors (one configuration per round), and a set of at least $f + 1$ state machine replicas. We assume, as is standard, that a leader election algorithm is used to select one of the proposers as a stable leader in some round, say round $i$ . The leader selects a configuration ${C}_{i}$ of acceptors that it will use for every log entry. The mechanism by which the configuration is chosen is an orthogonal concern. A system administrator, for example, could send the configuration to the leader, or the configuration could be read from an external service.
The leader then executes the Matchmaking phase in the same way as in Donut Paxos (i.e., it sends MATCHA $\left\langle {i,{C}_{i}}\right\rangle$ messages to the matchmakers and awaits MATCHB $\left\langle {i,{H}_{i}}\right\rangle$ responses). After the Matchmaking phase completes, the leader executes Phase 1 for every log entry. This is identical to MultiPaxos, except that the leader uses the configurations returned by the matchmakers rather than assuming a fixed configuration. Note that proactive matchmaking allows the leader to execute the Matchmaking phase and Phase 1 before receiving any client requests.

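
For intuition, a matchmaker's handling of a MATCHA message might look like the following sketch (our own C++ types; the round-staleness checks defined in Section 3 are elided):

```cpp
#include <map>
#include <set>
#include <string>
#include <vector>

using Config = std::set<std::string>;  // a configuration: a set of acceptor names

struct Matchmaker {
  std::map<int, Config> log;  // round -> configuration used in that round
  int watermark = 0;          // rounds below this have been garbage collected

  // MATCHA<i, Ci>: record Ci for round i and reply MATCHB<i, Hi>, where Hi
  // holds the configurations of smaller rounds that are not garbage collected.
  std::vector<Config> onMatchA(int i, const Config& Ci) {
    std::vector<Config> Hi;
    for (const auto& [round, cfg] : log)
      if (round >= watermark && round < i) Hi.push_back(cfg);
    log.emplace(i, Ci);
    return Hi;
  }
};
```
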
The leader then enters Phase 2 and operates exactly as it would in MultiPaxos. It executes Phase 2 with ${C}_{i}$ for the log entries in Region 2. Moreover, when it receives a state machine command from a client, it assigns the command a log entry in Region 3, runs Phase 2 with the acceptors in ${C}_{i}$ , and informs the replicas when the command is chosen. Replicas execute commands in log order and send the results of executing commands back to the clients.
§ 4.3 DISCUSSION
To reconfigure from some old configuration ${C}_{\text{ old }}$ in round $i$ to some new configuration ${C}_{\text{ new }}$ , the Donut MultiPaxos leader of round $i$ simply advances to round $i + 1$ and selects the new configuration ${C}_{\text{ new }}$ . The new configuration is active immediately after the Matchmaking phase, a one round trip delay. Note that the acceptors in the new configuration ${C}_{\text{ new }}$ do not have to undergo any sort of warm up or bootstrapping and do not have to contact any other acceptors in any other configuration.
The new configuration is active immediately, but it is not safe to deactivate the acceptors in the old configuration immediately, as we saw in Section 3.5. We extend Donut Paxos's garbage collection to Donut MultiPaxos momentarily.
Also note that Donut MultiPaxos does not perform the Matchmaking phase or Phase 1 on the critical path of normal execution. Similar to how MultiPaxos executes Phase 1 only once per leader change (and not once per command), Donut MultiPaxos runs the Matchmaking phase and Phase 1 only when a new leader is elected or when a leader changes its round (e.g., when a leader transitions from round $i$ to round $i + 1$ as part of a reconfiguration). In the normal case (i.e., during Phase 2), Donut MultiPaxos and MultiPaxos are identical, and Donut MultiPaxos does not introduce any overheads.

Further note that configurations do not have to be unique across rounds. The leader in round $i$ is free to re-use a configuration ${C}_{j}$ that was used in some round $j < i$ .
§ 4.4 OPTIMIZATION
Ideally, Donut MultiPaxos' performance would be unaffected by a reconfiguration. The latency of every client request and the protocol's overall throughput would remain constant throughout a reconfiguration. Donut MultiPaxos as we've described it so far, however, does not meet this ideal. During a reconfiguration, a leader must temporarily stop processing client commands and wait for the reconfiguration to finish before resuming normal operation.
This is illustrated in Figure 6. Figure 6 shows a leader ${p}_{1}$ reconfiguring from a configuration of acceptors ${C}_{\text{ old }}$ consisting of acceptors ${a}_{1},{a}_{2}$ , and ${a}_{3}$ in round $i$ to a new configuration of acceptors ${C}_{\text{ new }}$ consisting of acceptors ${b}_{1},{b}_{2}$ , and ${b}_{3}$ in round $i + 1$ . While the leader performs the reconfiguration, clients continue to send state machine commands to the leader. We consider such a command and perform a case analysis on when the command arrives at the leader to see whether or not the command has to be stalled.
Case 1: Matchmaking (Figure 6a). If the leader receives a command during the Matchmaking phase, then the leader can process the command as normal in round $i$ using the acceptors in ${C}_{\text{ old }}$ . Even though the leader is executing the Matchmaking phase in round $i + 1$ and is communicating with the matchmakers, the acceptors in ${C}_{\text{ old }}$ are oblivious to this and can process commands in Phase 2 in round $i$ .
Case 2: Phase 1 (Figure 6b). If the leader receives a command during Phase 1, then the leader cannot process the command. It must delay the processing of the command until Phase 1 finishes. Here's why. Once an acceptor in ${C}_{\text{ old }}$ receives a PHASE1A $\langle i + 1\rangle$ message, it will reject any future commands in rounds less than $i + 1$, so the leader is unable to send the command to ${C}_{\text{ old }}$. The leader also cannot send the command to ${C}_{\text{ new }}$ in round $i + 1$ because it has not yet finished executing Phase 1.

Case 3: Phase 2 (Figure 6c). If the leader receives a command during Phase 2, then the leader can send the command to the new acceptors in ${C}_{\text{ new }}$ in round $i + 1$ . This is the normal case of execution.
In summary, any commands received during Phase 1 of a reconfiguration are delayed. Fortunately, we can eliminate this problem by using Phase 1 bypassing. Consider a leader performing a reconfiguration from ${C}_{\text{ old }}$ in round $i$ to ${C}_{\text{ new }}$ in round $i + 1$. At the end of the Matchmaking phase and at the beginning of Phase 1 (in round $i + 1$), let $k$ be the largest log entry that the leader has assigned to a command. That is, all log entries after entry $k$ are empty. These log entries satisfy the preconditions of Phase 1 bypassing, so it is safe for the leader to bypass Phase 1 in round $i + 1$ for them in the following way: when the leader receives a command after the Matchmaking phase, it assigns the command a log entry larger than $k$, skips Phase 1, and immediately executes Phase 2 in round $i + 1$ with ${C}_{\text{ new }}$.

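
The case analysis above, together with Phase 1 bypassing, can be summarized by a small dispatch sketch (our own naming and types, not the paper's code):

```cpp
// Where a freshly received command goes during a reconfiguration from
// Cold (round i) to Cnew (round i + 1).
enum class Step { Matchmaking, Phase1, Phase2 };
struct Assignment { bool useNewConfig; int round; bool stalled; };

Assignment route(Step step, int i, bool phase1Bypassing) {
  switch (step) {
    case Step::Matchmaking:                    // Case 1: Cold still accepts
      return {false, i, false};                // commands in round i
    case Step::Phase1:                         // Case 2: stall, unless Phase 1
      return {true, i + 1, !phase1Bypassing};  // is bypassed for empty entries
    case Step::Phase2:                         // Case 3: normal operation
    default:
      return {true, i + 1, false};             // Cnew in round i + 1
  }
}
```
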
Figure 6: An example Donut MultiPaxos reconfiguration without Phase 1 bypassing. The leader ${p}_{1}$ reconfigures from the acceptors ${a}_{1},{a}_{2},{a}_{3}$ to the acceptors ${b}_{1},{b}_{2},{b}_{3}$ . Client commands are drawn as gray dashed lines. Note that every subfigure shows one phase of a reconfiguration using solid colored lines, but the dashed lines show the complete execution of a client request that runs concurrently with the reconfiguration. For simplicity, we assume that every proposer also serves as a replica.
With this optimization and the round scheme described in Section 3.6, no state machine commands are delayed. Commands received during the Matchmaking phase or earlier are chosen in round $i$ by ${C}_{\text{ old }}$ in log entries up to and including $k$. Commands received during Phase 1, Phase 2, or later are chosen in round $i + 1$ by ${C}_{\text{ new }}$ in log entries $k + 1, k + 2, k + 3$, and so on. As a result, Donut MultiPaxos can be reconfigured with minimal performance degradation.

§ 4.5 GARBAGE COLLECTION
Recall that the Donut MultiPaxos leader ${p}_{i}$ in round $i$ uses a single configuration ${C}_{i}$ for every log entry. The leader ${p}_{i}$ can safely issue a GARBAGEA $\langle i\rangle$ command to the matchmakers once it ensures that every log entry satisfies one of the three scenarios described in Section 3.5. Recall from Figure 5 that at the end of Phase 1 and at the beginning of Phase 2, the log can be divided into three regions. Each of the three garbage collection scenarios applies to one of the regions.
Scenario 2 applies to Region 3. These are the log entries for which $k = - 1$ . Scenario 1 applies to Region 2, once the leader has successfully chosen commands in all of the log entries in Region 2. Scenario 3 applies to Region 1 if we make the following adjustments. First, we deploy ${2f} + 1$ replicas instead of $f + 1$ . Second, the leader ensures that the prefix of previously chosen log entries is stored on at least $f + 1$ of the ${2f} + 1$ replicas. Third, the leader informs a Phase 2 quorum of ${C}_{i}$ acceptors that these commands have been stored on the replicas.
In summary, the leader ${p}_{i}$ of round $i$ executes as follows. It executes the Matchmaking phase to get the prior configurations ${H}_{i}$ . It executes Phase 1 with the configurations in ${H}_{i}$ . It enters Phase 2 and chooses commands in Region 2. It informs a Phase 2 quorum of ${C}_{i}$ acceptors once the commands in Region 1 have been stored on $f + 1$ replicas. It issues a GARBAGEA $\langle i\rangle$ command to the matchmakers and awaits $f + 1$ GARBAGEB $\langle i\rangle$ responses. At this point, all previous configurations can be shut down.
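
This sequence can be summarized as a control-flow sketch (placeholder function names with stubbed bodies; this is not the paper's API):

```cpp
#include <vector>

struct Config {};
std::vector<Config> matchmake(int i) { return {}; }   // MATCHA<i, Ci> / MATCHB<i, Hi>
void phase1(int i, const std::vector<Config>& Hi) {}  // Phase 1 with every config in Hi
void choosePendingEntries(int i) {}                   // Phase 2 over Region 2 (Scenario 1)
void replicateChosenPrefix() {}                       // store Region 1 on f + 1 replicas
void informPhase2Quorum(int i) {}                     // tell a Phase 2 quorum of Ci (Scenario 3)
void garbageCollect(int i) {}                         // GARBAGEA<i>, await f + 1 GARBAGEB<i>

void leaderRound(int i) {
  auto Hi = matchmake(i);
  phase1(i, Hi);
  choosePendingEntries(i);
  replicateChosenPrefix();
  informPhase2Quorum(i);
  garbageCollect(i);  // afterwards, all previous configurations can be shut down
}
```
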
Note that the leader can begin processing state machine commands from clients as soon as it enters Phase 2. It does not have to stall commands during garbage collection. Note also that during normal operation, old configurations are garbage collected very quickly. In Section 7, we show that ${H}_{i}$ almost always contains a single configuration (i.e. ${C}_{i - 1}$ ).
§ 5 RECONFIGURING MATCHMAKERS
We've discussed how Donut MultiPaxos allows us to reconfigure the set of acceptors. In this section, we discuss how to reconfigure proposers, replicas, and matchmakers.
Reconfiguring proposers and replicas is straightforward. In fact, Donut MultiPaxos reconfigures proposers and replicas in exactly the same way as MultiPaxos [35], so we do not discuss it at length. In short, a proposer can be safely added or removed at any time. Replicas can also be safely added or removed at any time so long as we ensure that commands replicated on $f + 1$ replicas remain replicated on $f + 1$ replicas. For performance, a newly introduced proposer should contact an existing proposer or replica to learn about the prefix of already chosen commands, and a newly introduced replica should copy the log from an existing replica.
Reconfiguring matchmakers is a bit more involved, but still relatively straightforward. Recall that proposers perform the Matchmaking phase only during a change in round. Thus, for the vast majority of the time (specifically, whenever there is a single, stable leader), the matchmakers are completely idle. This means that the way we reconfigure the matchmakers has to be safe, but it does not have to be efficient. The matchmakers can be reconfigured at any time between round changes without any impact on performance.

Thus, we use the simplest approach to reconfiguration: we shut down the old matchmakers and replace them with new ones, making sure that the new matchmakers' initial state is the same as the old matchmakers' final state. More concretely, we reconfigure from a set ${M}_{\text{ old }}$ of matchmakers to a new set ${M}_{\text{ new }}$ as follows. First, a proposer (or any other node) sends a STOPA $\langle\rangle$ message to the matchmakers in ${M}_{\text{ old }}$. When a matchmaker ${m}_{i}$ receives a STOPA $\langle\rangle$ message, it stops processing messages (except for other STOPA $\langle\rangle$ messages) and replies with STOPB $\left\langle {{L}_{i},{w}_{i}}\right\rangle$, where ${L}_{i}$ is ${m}_{i}$'s log and ${w}_{i}$ is its garbage collection watermark. When the proposer receives STOPB messages from $f + 1$ matchmakers, it knows that the matchmakers have effectively been shut down. It computes $w$ as the maximum of every returned ${w}_{i}$. It computes $L$ as the union of the returned logs, removing every entry of $L$ that appears in a round less than $w$. An example of this log merging is illustrated in Figure 7.

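
The merge itself is simple; a minimal sketch with our own types (not the paper's code) follows:

```cpp
#include <algorithm>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

using Config = std::set<std::string>;

struct StopB {
  std::map<int, Config> log;  // matchmaker log: round -> configuration
  int watermark;              // garbage collection watermark wi
};

// Merge f + 1 STOPB replies: w is the maximum watermark, and L is the union
// of the logs with every entry in a round less than w removed.
std::pair<std::map<int, Config>, int> merge(const std::vector<StopB>& replies) {
  int w = 0;
  for (const auto& r : replies) w = std::max(w, r.watermark);
  std::map<int, Config> L;
  for (const auto& r : replies)
    for (const auto& [round, cfg] : r.log)
      if (round >= w) L.emplace(round, cfg);  // matchmakers agree per round
  return {L, w};
}
```
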
Figure 7: An example of merging three matchmaker logs (${L}_{0}$, ${L}_{1}$, and ${L}_{2}$) during a matchmaker reconfiguration. Garbage collected log entries are shown in red.

The proposer then sends $L$ and $w$ to all of the matchmakers in ${M}_{\text{ new }}$. Each matchmaker adopts these values as its initial state. The matchmakers in ${M}_{\text{ new }}$ cannot simply begin processing commands at this point: naively, two different nodes could simultaneously attempt to reconfigure to two disjoint sets of matchmakers, say ${M}_{\text{ new }}$ and ${M}_{\text{ new }}^{\prime }$. To avoid this, we use an instance of Paxos (with the matchmakers in ${M}_{\text{ old }}$ acting as the acceptors) to choose the new matchmakers ${M}_{\text{ new }}$. See Section B for a safety proof.

§ 6 INSIGHTS AND GENERALITY
MultiPaxos. To reconfigure from a set of nodes $N$ to a new set of nodes ${N}^{\prime }$, a MultiPaxos leader gets the value ${N}^{\prime }$ chosen in the log at some index $i$. All commands in the log starting at position $i + \alpha$ are chosen using the nodes in ${N}^{\prime }$ instead of the nodes in $N$, where $\alpha$ is some configurable parameter. This protocol is called Horizontal MultiPaxos.

Donut MultiPaxos has the following advantages over Horizontal MultiPaxos. First, the core idea behind Horizontal MultiPaxos seems simple, but the protocol has a number of hidden subtleties [23]. For example, a newly elected Horizontal MultiPaxos leader with a stale log may not know the latest configuration of nodes. It may not even know which configuration of nodes to contact to learn the latest configuration. This makes it unclear when it is safe to shut down old configurations, because a newly elected Horizontal MultiPaxos leader can be arbitrarily out of date. These subtleties, and the many others described in [23], make Horizontal MultiPaxos significantly more complicated than it initially seems. Donut Paxos addresses these subtleties directly: the matchmakers can always be used to learn the latest configuration, and our garbage collection protocol details exactly when and how to shut down old configurations safely.

Figure 8: A MultiPaxos log during reconfiguration $\left( {\alpha = 4}\right)$ .
Second, horizontal reconfiguration is not generally applicable. It is fundamentally incompatible with replication protocols that do not have a log. Moreover, researchers are finding that avoiding a log can often be advantageous [2, 15, 27, 33, 34, 36]. For example, protocols like Generalized Paxos [15], EPaxos [27], Atlas [8], and Caesar [2] arrange commands in a partially ordered graph instead of a totally ordered log to exploit commutativity between commands. CASPaxos [33] maintains a single value, instead of a log or graph, for simplicity. Databases like TAPIR [36] avoid ordering transactions in a log for improved performance, and databases like Meerkat [34] do the same to improve scalability. Even some protocols with logs cannot use the ideas behind Horizontal MultiPaxos. For example, Raft cannot safely perform horizontal reconfigurations [29].

Because these protocols do not have logs, they cannot use MultiPaxos' horizontal reconfiguration protocol. However, while none of these protocols have logs, all of them have rounds. This means that the protocols can either use Donut Paxos directly or at least borrow ideas from Donut Paxos for reconfiguration. For example, we are developing a protocol called BPaxos, an EPaxos [27] variant that partially orders commands into a graph. BPaxos is a modular protocol that uses Paxos as a black-box subroutine. Due to this modularity, we can directly replace Paxos with Donut Paxos to support reconfiguration. The same idea can also be applied to EPaxos. CASPaxos [33] is similar to Paxos and can be extended to Donut CASPaxos in the same way we extended Paxos to Donut Paxos. These are two simple examples, and we don't claim that extending Donut Paxos to some of the other, more complicated protocols is always easy. But the universality of rounds makes Donut Paxos an attractive foundation on top of which other non-log-based protocols can build their own reconfiguration protocols.
One could argue that these other protocols are not used as much in industry, so it's not that important for them to have reconfiguration protocols, but we think the causation is in the reverse direction! Without reconfiguration, these protocols cannot be used in industry.
Third, optimizing Horizontal MultiPaxos is not easy. A MultiPaxos leader can process at most $\alpha$ unchosen commands at a time, which makes $\alpha$ an important parameter to tune. If we set $\alpha$ too low, then we limit the protocol's pipeline parallelism and the throughput suffers. Note that a small $\alpha$ reduces the normal case throughput of Horizontal MultiPaxos, not just the throughput during reconfiguration. If we set $\alpha$ too high, then we have to wait a long time for a reconfiguration to complete. If we are reconfiguring because of a failed node, then we might have to endure a long reconfiguration with reduced throughput. Donut MultiPaxos has no $\alpha$ parameter to tune. Note that Horizontal MultiPaxos can be implemented with an optimization in which we select a very large $\alpha$ and then get a sequence of $\alpha$ no-ops chosen in the log to force a quick reconfiguration. This optimization helps avoid the difficulty of finding a good value of $\alpha$, but it introduces a new set of subtleties into the protocol.

Horizontal MultiPaxos also requires a Phase 1 and a Phase 2 quorum of acceptors from an old configuration in order to perform a reconfiguration after a leader failure, whereas Donut MultiPaxos requires only a Phase 1 quorum. Some read-optimized MultiPaxos variants perform reads against Phase 1 quorums [5]. These protocols benefit from having very small Phase 1 quorums and very large Phase 2 quorums, requiring Horizontal MultiPaxos to contact far more nodes than Donut MultiPaxos during a reconfiguration.

Finally, we clarify that if Horizontal MultiPaxos is implemented with all of its subtleties ironed out, is deployed with a good choice of $\alpha$ , and is run with small Phase 2 quorums, then it can perform a reconfiguration without performance degradation. In this case, Horizontal MultiPaxos and Donut MultiPaxos both reconfigure, in some sense, "optimally".
Vertical Paxos. Donut MultiPaxos significantly improves the practicality of Vertical Paxos [19] in a number of ways. First, Vertical Paxos is a consensus protocol, not a state machine replication protocol, and it is not easy to extend Vertical Paxos' garbage collection protocol to a state machine replication protocol. Vertical Paxos garbage collects old configurations in situations similar to Scenario 1 and Scenario 2 from Section 3.5, but it has no analog of Scenario 3. Without Scenario 3, old configurations cannot always be garbage collected, which means that it is never safe to shut them down.

Second, Vertical Paxos requires an external master but does not describe how to implement the master in an efficient way. We could implement the master using another state machine replication protocol like MultiPaxos, but this would be both slow and overly complex. Plus, we would have to implement a reconfiguration protocol for the master as well. Our matchmakers are analogous to the external master but show that such a master does not require a nested invocation of state machine replication.
Third, Vertical Paxos requires that a proposer execute Phase 1 in order to perform a reconfiguration. Thus, Vertical Paxos cannot be extended to MultiPaxos without causing performance degradation during reconfiguration. This is not the case for matchmakers thanks to Phase 1 bypassing.
Fourth, Vertical Paxos does not describe how proposers learn the configurations used in previous rounds and instead assumes that configurations are fixed in advance by an oracle. Donut Paxos shows that this assumption is not necessary, as the matchmakers store every configuration.
Fast Paxos. Fast Paxos [16] is a Paxos variant that shaves one network delay off of Paxos in the best case but can have higher delays when concurrently proposed commands conflict. While Paxos quorums consist of $f + 1$ out of ${2f} + 1$ acceptors, Fast Paxos requires larger quorums. Many protocols have reduced Fast Paxos quorum sizes a bit, but to date, Fast Paxos quorum sizes have remained larger than classic Paxos quorum sizes [8, 27]. Using matchmakers, we can implement Fast Paxos with a fixed set of $f + 1$ acceptors (and hence with quorums of size $f + 1$). Specifically, we deploy Fast Paxos with $f + 1$ acceptors, with a single unanimous Phase 2 quorum, and with singleton Phase 1 quorums. A full description of the protocol and a proof of correctness are given in Section C.

DPaxos. DPaxos is a Paxos variant that allows every round to use a different subset of acceptors from some fixed set of acceptors. Donut Paxos obviates the need for a fixed set of nodes. DPaxos' scope is limited to a single instance of consensus, whereas Donut MultiPaxos shows how to efficiently reconfigure across multiple instances of consensus simultaneously. We also discovered that DPaxos' garbage collection algorithm is unsafe; Donut MultiPaxos fixes the bug. See Section D for details.
Cheap Paxos. Cheap Paxos [21] is a MultiPaxos variant that consists of a fixed set of $f + 1$ main acceptors and $f$ auxiliary acceptors. During failure-free execution (the normal case), only the main acceptors are contacted. The auxiliary acceptors perform MultiPaxos' horizontal reconfiguration protocol to replace failed main acceptors. As with Fast Paxos, we can deploy Donut MultiPaxos with only $f + 1$ acceptors, $f$ fewer than Cheap Paxos. Donut Paxos does require ${2f} + 1$ matchmakers, but matchmakers do not act as acceptors and have to process only a single message (i.e., a MATCHA message) to perform a reconfiguration.

§ 7 EVALUATION
We now evaluate Donut MultiPaxos. Donut MultiPaxos is implemented in Scala using the Netty networking library. We deployed Donut MultiPaxos on m5.xlarge AWS EC2 instances within a single availability zone. We deploy Donut MultiPaxos with $f = 1$: $f + 1$ proposers, ${2f} + 1$ acceptors, ${2f} + 1$ matchmakers, and ${2f} + 1$ replicas. For simplicity, every node is deployed on its own machine, but in practice, nodes can be physically co-located. In particular, any two logical roles can be placed on the same machine, so long as the two roles are not the same. For example, we can co-locate a leader, an acceptor, a replica, and a matchmaker, but we cannot co-locate two acceptors (without reducing the fault tolerance of the system). All of our results hold in a co-located deployment as well. For simplicity, we deploy Donut MultiPaxos with a trivial no-op state machine in which every state machine command is a one-byte no-op. All of our results generalize to more complex state machines as well (the choice of state machine is orthogonal to reconfiguration).

§ 7.1 RECONFIGURATION
Experiment Description. We run benchmarks with 1, 4, and 8 clients. Every client repeatedly proposes a state machine command, waits to receive a response, and then immediately proposes another command. Every benchmark runs for 35 seconds. During the first 10 seconds, we perform no reconfigurations. From 10 seconds to 20 seconds, the leader reconfigures the set of acceptors once every second. In practice, we would reconfigure much less often; this is a worst-case stress test for Donut MultiPaxos. For each of the ten reconfigurations, the leader selects a random set of ${2f} + 1$ acceptors from a pool of $2 \times \left( {{2f} + 1}\right)$ acceptors. At 25 seconds, we fail one of the acceptors. Five seconds later, the leader performs a reconfiguration to replace the failed acceptor. The 5 second delay is completely arbitrary; the leader can reconfigure sooner if desired.

We also perform this experiment with an implementation of MultiPaxos with horizontal reconfiguration. As with Donut MultiPaxos, we deploy MultiPaxos with $f + 1$ proposers, ${2f} + 1$ acceptors, and ${2f} + 1$ replicas. We set $\alpha$ to 8. Because $\alpha$ is equal to the number of clients, MultiPaxos never stalls because of an insufficiently large $\alpha$.

Results. The latency and throughput of Donut MultiPaxos are shown in Figure 9. Throughput and latency are both computed using sliding one second windows. Median latency is shown using solid lines, while the 95% latency is shown as a shaded region above the median latency. The black vertical lines denote reconfigurations, and the red dashed vertical line denotes the acceptor failure.
The medians, interquartile ranges (IQR), and standard deviations of the latency and throughput (a) during the first 10 seconds and (b) between 10 and 20 seconds are shown in Table 1.

Figure 9: Donut MultiPaxos’ latency and throughput $\left( {f = 1}\right)$ . Median latency is shown using solid lines, while the 95% latency is shown as a shaded region above the median latency. The vertical black lines show reconfigurations. The vertical dashed red line shows an acceptor failure.
Figure 11 includes violin plots of the same data. The white circles show the median values, while the thick black rectangles show the 25th and 75th percentiles. For latency, reconfiguration has little to no impact (roughly $2\%$ changes) on the medians, IQRs, or standard deviations. The one exception is that the 8-client standard deviation is significantly larger; this is due to a small number of outliers. Reconfiguration has little impact on median throughput, with all differences being statistically insignificant. The IQRs and standard deviations sometimes increase and sometimes decrease. The IQR is always less than $1\%$ of the median throughput, and the standard deviation is always less than 4%.

For every reconfiguration, the new acceptors become active within a millisecond, and the old acceptors are garbage collected within five milliseconds. This means that only one configuration is ever returned by the matchmakers. We implement Donut MultiPaxos with an optimization called thriftiness [27], in which PHASE2A messages are sent to only a randomly selected Phase 2 quorum, so the throughput and latency expectedly degrade after we fail an acceptor. After we replace the failed acceptor, throughput and latency return to normal within two seconds.

The latency and throughput of MultiPaxos are shown in Figure 10. As with Donut MultiPaxos, MultiPaxos can perform a horizontal reconfiguration without any performance degradation. We include the comparison to MultiPaxos for the sake of having some baseline against which to compare Donut MultiPaxos, but the comparison is shallow, so we do not elaborate on the results much.

While Donut MultiPaxos does provide performance benefits over MultiPaxos' and Raft's reconfiguration protocols, our goal is not to replace these protocols. Rather, there are dozens of other state machine replication protocols (e.g., EPaxos, CASPaxos, Caesar, Atlas) and distributed databases (e.g., TAPIR, Janus, Ocean Vista) that do not have any reconfiguration protocol and cannot use the existing reconfiguration protocols from MultiPaxos or Raft. Our hope is that the ideas in Donut MultiPaxos can be used to implement reconfiguration protocols for these other systems. For this reason, it is difficult to compare Donut MultiPaxos against existing baselines; for most of these systems, such baselines simply do not exist.

Figure 10: The latency and throughput of MultiPaxos with horizontal reconfiguration $\left( {f = 1}\right)$ .
Table 1: Figure 9 median, interquartile range, and standard deviation of latency and throughput.

Latency (ms)

|        | 1 Client 0s-10s | 1 Client 10s-20s | 4 Clients 0s-10s | 4 Clients 10s-20s | 8 Clients 0s-10s | 8 Clients 10s-20s |
|--------|------|------|------|------|------|------|
| median | 0.292 | 0.287 | 0.317 | 0.321 | 0.398 | 0.410 |
| IQR    | 0.040 | 0.026 | 0.029 | 0.036 | 0.036 | 0.039 |
| stdev  | 0.114 | 0.085 | 0.076 | 0.081 | 0.089 | 0.305 |

Throughput (commands/second)

|        | 1 Client 0s-10s | 1 Client 10s-20s | 4 Clients 0s-10s | 4 Clients 10s-20s | 8 Clients 0s-10s | 8 Clients 10s-20s |
|--------|------|------|------|------|------|------|
| median | 2,995 | 3,177 | 11,874 | 11,478 | 19,146 | 18,446 |
| IQR    | 152 | 53 | 175 | 145 | 140 | 373 |
| stdev  | 157 | 111 | 298 | 307 | 358 | 520 |

Summary. This experiment confirms that Donut MultiPaxos's throughput and latency remain steady even during abnormally frequent reconfigurations. Moreover, it confirms that Donut MultiPaxos can reconfigure to a new set of acceptors and retire the old set of acceptors on the order of milliseconds.

Figure 11: Violin plots of Figure 9 latency and throughput during the first 10 seconds and between 10 and 20 seconds.
§ 7.2 LEADER FAILURE
Experiment Description. We deploy Donut MultiPaxos exactly as before. Now, each benchmark runs for 20 seconds. During the first 7 seconds, there are no reconfigurations and no failures. At 7 seconds, we fail the leader. Five seconds later, a new leader is elected and resumes normal operation. The 5 second delay is arbitrary; a new leader could be elected more quickly if desired.

Results. The latency and throughput of the benchmarks are shown in Figure 13. During the first 7 seconds, throughput and latency are both stable. When the leader fails, the throughput expectedly drops to zero. The throughput and latency return to normal within two seconds after a new leader is elected.

Summary. This experiment confirms that the extra latency of the Matchmaking phase during a leader change is negligible.

§ 7.3 MATCHMAKER RECONFIGURATION
Experiment Description. We deploy Donut MultiPaxos as above. We again run three benchmarks, with 1, 4, and 8 clients. Each benchmark runs for 40 seconds. During the first 10 seconds, there are no reconfigurations and no failures. Between 10 and 20 seconds, the leader reconfigures the set of matchmakers once every second. Every reconfiguration randomly selects ${2f} + 1$ matchmakers from a set of $2 \times \left( {{2f} + 1}\right)$ matchmakers. At 25 seconds, we fail a matchmaker. At 30 seconds, we perform a matchmaker reconfiguration to replace the failed matchmaker. At 35 seconds, we reconfigure the acceptors.

Figure 12: Violin plots of Figure 10 latency and throughput during the first 10 seconds and between 10 and 20 seconds.
Table 2: Figure 15 median, interquartile range, and standard deviation of latency and throughput.

Latency (ms)

|        | 1 Client 0s-10s | 1 Client 10s-20s | 4 Clients 0s-10s | 4 Clients 10s-20s | 8 Clients 0s-10s | 8 Clients 10s-20s |
|--------|------|------|------|------|------|------|
| median | 0.297 | 0.292 | 0.314 | 0.313 | 0.404 | 0.398 |
| IQR    | 0.032 | 0.024 | 0.031 | 0.030 | 0.035 | 0.028 |
| stdev  | 0.077 | 0.061 | 0.093 | 0.098 | 0.383 | 0.067 |

Throughput (commands/second)

|        | 1 Client 0s-10s | 1 Client 10s-20s | 4 Clients 0s-10s | 4 Clients 10s-20s | 8 Clients 0s-10s | 8 Clients 10s-20s |
|--------|------|------|------|------|------|------|
| median | 3,019 | 3,147 | 11,631 | 11,726 | 18,569 | 19,248 |
| IQR    | 41 | 51 | 140 | 145 | 391 | 71 |
| stdev  | 66 | 72 | 250 | 231 | 478 | 159 |

Results. The latency and throughput of Donut MultiPaxos are shown in Figure 15. The latency and throughput of the protocol remain steady through the first ten matchmaker reconfigurations, through the matchmaker failure and recovery, and through the acceptor reconfiguration. This is confirmed by the medians, IQRs, and standard deviations of the latency and throughput during the first 10 seconds and between 10 and 20 seconds, which are shown in Table 2.
Summary. This benchmark confirms that matchmakers are off the critical path. The latency and throughput of Donut MultiPaxos remain steady during a matchmaker reconfiguration and a matchmaker failure. Moreover, a matchmaker reconfiguration does not affect the performance of subsequent acceptor reconfigurations.

Figure 13: Donut MultiPaxos’ latency and throughput $\left( {f = 1}\right)$ . The dashed red line denotes a leader failure.
§ 8 RELATED WORK
SMART. SMART [23] is a reconfiguration protocol that resolves many ambiguities in MultiPaxos' horizontal approach (e.g., when it is safe to retire old configurations). Like MultiPaxos' horizontal reconfiguration protocol, SMART can reconfigure a protocol with minimal performance degradation. SMART differs from Donut Paxos in a number of ways. First, like MultiPaxos' horizontal reconfiguration protocol, SMART is fundamentally log based and is therefore incompatible with many sophisticated state machine replication protocols. Second, SMART assumes that acceptors and replicas are always co-located. This prevents us from reconfiguring the acceptors without also reconfiguring the replicas, which is not ideal: an acceptor can be reconfigured without copying any state, but a new replica requires transferring logs from an old replica. SMART's garbage collection also has higher latency than Donut Paxos' garbage collection. For Scenario 3, Donut Paxos proposers wait until a prefix of the log is stored on $f + 1$ replicas; SMART waits for the prefix of the log to be executed and snapshotted by $f + 1$ replicas.

Raft. Raft [30] uses a reconfiguration protocol called joint consensus. Like MultiPaxos' horizontal reconfiguration, joint consensus is log-based and therefore incompatible with many existing replication protocols. A simpler reconfiguration protocol for Raft was proposed in [29] but requires more rounds of communication.
Viewstamped Replication (VR). VR [22] uses a stop-the-world approach to reconfiguration. During a reconfiguration, the entire protocol stops processing commands. Thus, while the reconfiguration is quite simple, it is inefficient. Stoppable Paxos [18] is similar to MultiPaxos' horizontal reconfiguration, but also uses a stop-the-world approach. VR's stop-the-world approach to reconfiguration is also adopted by databases built on VR, including TAPIR [36] and Meerkat [34]. We use a similar approach to reconfigure matchmakers, but because matchmakers are off the critical path, the performance overheads are invisible.
Figure 14: The latency and throughput of Horizontal MultiPaxos with $f = 1$.

Fast Paxos Coordinated Recovery. Fast Paxos has an optimization called coordinated recovery that is similar to Phase 1 bypassing. The main difference is that in coordinated recovery, a leader uses Phase 2 information in round $i$ to skip Phase 1 in round $i + 1$, whereas with Phase 1 bypassing, the leader instead uses Phase 1 information. Note that coordinated recovery is not useful for Donut MultiPaxos; it is subsumed by Phase 1 bypassing. Coordinated recovery is needed only for Fast Paxos because the leader may not know which values were proposed in a round it owns. Phase 1 bypassing cannot be applied to Fast Paxos for essentially the same reason.

DynaStore. Vertical Paxos assumes its external master is implemented using state machine replication. MultiPaxos' horizontal reconfiguration also depends on consensus. Donut Paxos does not require consensus to implement matchmakers, but we are not the first to notice this. DynaStore [1] showed that reconfiguring atomic storage does not require consensus.
ZooKeeper. ZooKeeper, a distributed coordination service, uses ZooKeeper Atomic Broadcast [11], a protocol similar to MultiPaxos that can also reconfigure quickly after leader failures.

Figure 15: The latency and throughput of Donut MultiPaxos $\left( {f = 1}\right)$ . The dotted blue, dashed red, and vertical black lines show matchmaker reconfigurations, a matchmaker failure, and an acceptor reconfiguration respectively.
§ 9 CONCLUSION
We presented Donut Paxos and Donut MultiPaxos to address the lack of research on the increasingly important topic of reconfiguration. Our protocols achieve a number of desirable properties, both theoretical and practical: they can reconfigure without performance degradation, they provide insights into existing protocols, and they generalize better than existing techniques.

papers/JSYS/JSYS 2022/JSYS 2022 Aug_Papers/RgeMS1Tf1zs/Initial_manuscript_md/Initial_manuscript.md

# SOLUTION: INSPECTING TRAFFIC IN RESIDENTIAL NETWORKS WITH OPPORTUNISTICALLY OUTSOURCED MIDDLEBOXES
Anonymous authors
Paper under double-blind review
## Abstract
Since they lack the powerful tools and personnel available to enterprise-grade security solutions, home networks face particularly difficult network security challenges. While prior efforts outsource network traffic to cloud or cloudlet services, such measures redirect network traffic out of the home network, granting a third party the ability to see and profile that traffic and thereby affecting its privacy. Further, if those tools need to apply Transport Layer Security (TLS) decryption to enhance their monitoring insight, the privacy risks to home users grow substantially. Alternatively, residents may introduce new physical hardware in their home networks, but doing so incurs greater capital costs that would impede deployment.

Our work explores a system that leverages existing available devices, such as smartphones, tablets, and laptops, inside a home network to create a platform for traffic inspection. By using devices owned and operated by the same end-users, the system can peek into TLS traffic and perform detailed inspection without introducing risks from third parties. By leveraging existing devices in a home network, we can implement our platform with no additional hardware costs. Our performance evaluation shows that such middleboxes can substantially increase the throughput of communication, from around 10 Mbps to around 90 Mbps, while increasing CPU usage at the router by around 15%, increasing CPU usage on a smartphone by around 20% (with single-core processing), and adding a latency increase of about 120 milliseconds to network packets.

## 1 Introduction
The increasing use of household broadband Internet service and smart home technologies is likewise increasing the risks to home networks. In 2021, 442 million smart devices and 82% of home networks were connected to the Internet. New smart devices, like smart cameras, can collect data from users to provide intelligent services, but it can be difficult for defenders to determine what network traffic is associated with these devices, and difficult for users to determine whether their devices are safe from network attacks. A study with fifteen smart home users indicates that eleven participants were worried about physical security and five participants were concerned about the privacy associated with these devices [1]. Typically, consumer-grade routers do not provide effective network protections [2]. As a result, attackers have various opportunities to compromise networked devices, allowing lateral propagation and a range of attacks.

The limited capabilities of consumer-grade network hardware force difficult trade-offs in modern home networks. While prior work has proposed lightweight functionality on residential routers [3], the computational constraints of those devices limit the types of tasks that can be hosted on such routers. Efforts to profile and examine encrypted traffic using machine learning [4] would exceed the resources of many such routers, which are unable to engage in the more sophisticated analysis common in enterprise security gateways.

Other techniques push the computational tasks associated with network screening to remote servers. Feamster [5] proposes that home networks can outsource their security mechanisms to cloud servers with software-defined networking (SDN) technologies. TLSDeputy [6] uses remote servers to validate TLS certificates and protocol settings to ensure the authenticity of communicating endpoints. However, both techniques allow the operators of cloud infrastructure to gain insight into the activities of a home network, introducing new privacy risks and an expanded trusted computing base.

In contrast to prior efforts, we consider mechanisms to deploy home network traffic inspection in an opportunistic fashion. We explore mechanisms to leverage existing devices in a home network when they are available to screen communication. In doing so, we ask the following research questions:
- To what extent can we utilize current resources within a home network to build real-time packet inspection?
- To what extent would such a packet inspection system influence the performance of the home network, in terms of traffic latency, resource consumption, and throughput?
Our approach enables devices such as smartphones, tablets, laptops and desktops to perform traffic analysis. These devices can operate as security proxies when they are available, enabling detailed analysis. In pursuing this direction, our work makes the following contributions:
- Creation of Prototype On-Router and Outsourced Middleboxes: We use open source firmware on a consumer-grade router to profile traffic locally and via a smartphone. We compare baseline communication with an on-router program that profiles destination addresses. We further implement a technique to transparently direct traffic through a smartphone middlebox using network address translation (NAT) rules on the router.
- Performance Evaluation of Deployment Options: We compare baseline forwarding of the router with on-router inspection and with opportunistic outsourcing to a smartphone. Our evaluation shows that on-router inspection has a throughput of around 10 Mbps, whereas outsourcing the inspection to a smartphone achieves roughly 90 Mbps of throughput. The smartphone middleboxing approach adds around 15% CPU usage to the router, adds 20% CPU usage to a smartphone (with single-core processing), and introduces 120 milliseconds of round-trip time (RTT) delay to network traffic.

## 2 Background and Related Work

In this section, we introduce the background and prior work on residential network computation and security.

### 2.1 Computation in Residential Networks
A 2015 survey found that 77% of US households subscribed to broadband Internet service, and 78% owned a desktop or laptop computer [7]. However, modern home networks face many security challenges, since the value of the assets managed by residential networks is increasing. Attackers can gain sensitive information or directly control devices and launch attacks on other devices, such as eavesdropping, replay attacks, network scanning, and data theft [8, 9].

There are effective ways to detect these attacks, but they require significant computational resources. Hafeez et al. [8] find that machine learning methods can detect a series of attacks with accuracy as high as 99%. Jan et al. [10] propose a method to detect a compromised device that joins a botnet, using very limited data, through a deep learning algorithm. A powerful inspection platform is therefore helpful in increasing home network security.

### 2.2 Perimeter Defense for Home Networks
Perimeter defenses can be useful for residential networks. While the basic NAT functionality on residential routers typically prevents unsolicited inbound communication, it is ineffective at detecting or stopping existing compromises within a network or attacks that are launched via a connection initiated from within the network.
Li et al. propose applying deep learning anomaly detection techniques to secure home networks; however, their method runs on equipment with computational resources that many home networks lack [11]. ParaDrop [12] allows third-party application providers to install lightweight containers that provide a gateway for simple tasks. However, ParaDrop does not have sufficient resources to run resource-intensive tasks like intrusion detection. Another work [13] adds plug-and-play devices to a consumer-grade router, which enables the router to work as an intelligent IoT gateway that can inspect traffic; however, it incurs capital costs and requires hardware modifications inside consumer routers that are likely beyond the technical abilities of some potential deployers.

Shirali-Shahreza et al. [14] summarize commercial home network firewall products. Each requires the installation of additional devices in the network, with an initial cost of at least \$200 and ongoing monthly service costs. These devices may replace typical home routers or act in conjunction with existing routers. Some use virtual private network (VPN) techniques to tunnel traffic to a remote VPN server that inspects and analyzes home network traffic before forwarding it. These methods introduce additional costs and equipment for users.

To simplify management while keeping security enforcement, Feamster proposes outsourcing security needs to a remote cloud server through an SDN architecture [5]. Experts and professional security software help to dynamically manage the network. Since cloud servers may contain richer resources, modules running in the controller can provide better analysis, along with a broader view of the network. Many other works propose building remote firewalls on the controller based on this architecture [8, 14-18]. Most of them utilize the gateway as an OpenFlow host in the home network; others use a locally available device, such as a Raspberry Pi, instead. The agent usually samples the network traffic and uploads it to the cloud. Controllers running in the cloud may also run a firewall module to inspect the sampled flow information. The agent then executes the returned action decision, which is usually a security policy received from the controller. However, these methods rely on users trusting that their private data is being used properly by a third-party provider, and some users have concerns about such providers.

### 2.3 Edge Computing for Local Networks
The edge computing paradigm builds decentralized computing pools for processing jobs from clients, bringing the computation closer to the source of the data [19]. Cloudlet [20] is a popular edge computing prototype that offloads tasks to nodes that can scale; these nodes can be hosted by ISPs or other providers. Drop computing [21] builds a collaborative computing cloud using mobile devices in which one device can offload tasks to others. When no device is available, the system seeks help from the cloud. This method is designed for ad hoc networks, which lack reliability since devices may enter and leave the network frequently and network coverage is usually limited. Similarly, Verbelen et al. [22] split tasks and offload them to a virtualized environment, either on mobile devices or in the cloud. Gedeon et al. propose letting a more reliable device, a home network gateway, run a broker; the gateway seeks available cloudlet nodes to help with its tasks [23]. Their methods still outsource the computation to a third-party platform, which raises privacy concerns. That work demonstrates that running a broker on a residential router does not introduce significant overhead, a result we leverage in our own approach.
Aazam et al. [24] propose using either a smart gateway or other localized fog nodes to pre-process data before uploading it to the cloud. The pre-processing reduces the data size, retains only the necessary data, and can mitigate some, but not all, privacy risks.
We explore a mechanism to perform network traffic inspection within a home network only. Unlike existing edge computing work, we focus on inspecting streaming data rather than discrete computational jobs.
## 3 Approach
Our research compares two approaches: on-router inspection via NFQUEUE and on-phone inspection via NAT redirection. We start by introducing the threat model. Then, we illustrate the process of on-router inspection. Finally, we describe the functionalities of each component of the phone-based inspection platform and how they work together.
### 3.1 Threat Model and Scope
We consider a basic threat model where malicious communication occurs between a host within the network and a system outside of the home network. The defender's goal is to inspect all traffic leaving the home network. In this model, we trust the router and the smartphone that acts as a proxy. We do not trust the endpoints.
Our goal is to create mechanisms that enable arbitrary traffic inspection on a reasonably resourced device, such as a phone. We do not aim to create new anomaly detectors or traffic inspection engines; that is out of the scope of this work. Accordingly, we demonstrate baseline functionality using a block list of destination addresses.
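As a concrete illustration, the following minimal sketch shows the kind of destination-address check this baseline implies. The list entries and helper names here are our own illustrative choices, not the lists used in our experiments.

```cpp
// Minimal block-list lookup sketch (illustrative only; the lists and the
// integration points differ in our router and phone implementations).
#include <arpa/inet.h>   // inet_pton
#include <cstdint>
#include <iostream>
#include <string>
#include <unordered_set>

// Store blocked IPv4 addresses in network byte order so packet headers
// can be checked without a byte-order conversion on every lookup.
static std::unordered_set<uint32_t> load_block_list() {
    std::unordered_set<uint32_t> blocked;
    for (const std::string& s : {"203.0.113.7", "198.51.100.23"}) {  // example entries
        in_addr a{};
        if (inet_pton(AF_INET, s.c_str(), &a) == 1)
            blocked.insert(a.s_addr);
    }
    return blocked;
}

int main() {
    const auto blocked = load_block_list();
    in_addr probe{};
    inet_pton(AF_INET, "203.0.113.7", &probe);
    std::cout << (blocked.count(probe.s_addr) ? "drop" : "accept") << "\n";
    return 0;
}
```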
### 3.2 On-Router Inspection via NFQUEUE
We implement a basic C++ program, compiled to run natively on the router, that inspects IP addresses. The program uses the iptables packet filtering tool and the netfilter_queue library to inspect traffic. Essentially, the iptables tool operates on each packet processed by the Linux stack on the router; this occurs as packets cross from the LAN interfaces to the WAN interface. An iptables rule applies the NFQUEUE target to all packets, causing them to enter a kernel queue. The C++ program extracts the packets from that queue, inspects the destination address, and returns the packets to the kernel with a verdict for transmission. This program represents the minimum required for a general-purpose user-space inspection program on the router.
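A minimal sketch of this loop, using the libnetfilter_queue C API, appears below. The queue number, the iptables rule shown in the comment, and the stubbed block-list check are illustrative assumptions rather than a verbatim copy of our program.

```cpp
// Sketch of the on-router NFQUEUE inspection loop. Assumes packets are
// queued by a rule such as: iptables -I FORWARD -j NFQUEUE --queue-num 0
#include <arpa/inet.h>
#include <cstdint>
#include <sys/socket.h>
#include <netinet/ip.h>
#include <linux/netfilter.h>                        // NF_ACCEPT, NF_DROP
#include <libnetfilter_queue/libnetfilter_queue.h>

static bool is_blocked(uint32_t /*daddr*/) { return false; }  // stub check

// Called once per queued packet: inspect, then return a verdict.
static int on_packet(struct nfq_q_handle* qh, struct nfgenmsg*,
                     struct nfq_data* nfa, void*) {
    struct nfqnl_msg_packet_hdr* ph = nfq_get_msg_packet_hdr(nfa);
    uint32_t id = ph ? ntohl(ph->packet_id) : 0;
    unsigned char* data = nullptr;
    int len = nfq_get_payload(nfa, &data);
    uint32_t verdict = NF_ACCEPT;
    if (len >= (int)sizeof(struct iphdr)) {
        const struct iphdr* ip = (const struct iphdr*)data;
        if (is_blocked(ip->daddr)) verdict = NF_DROP;
    }
    return nfq_set_verdict(qh, id, verdict, 0, nullptr);
}

int main() {
    struct nfq_handle* h = nfq_open();
    nfq_unbind_pf(h, AF_INET);                      // reset any stale binding
    nfq_bind_pf(h, AF_INET);
    struct nfq_q_handle* qh = nfq_create_queue(h, 0, &on_packet, nullptr);
    nfq_set_mode(qh, NFQNL_COPY_PACKET, 0xffff);    // copy full packets up
    char buf[65536];
    int fd = nfq_fd(h);
    for (int n; (n = recv(fd, buf, sizeof(buf), 0)) > 0; )
        nfq_handle_packet(h, buf, n);               // dispatches to on_packet
    nfq_destroy_queue(qh);
    nfq_close(h);
    return 0;
}
```

Every forwarded packet makes a round trip through user space in this design, which is precisely the overhead that the NAT-based approach in the next subsection avoids on the router.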
### 3.3 On-Phone Inspection via NAT Redirection
There are two components in our on-phone inspection approach. The first is a set of NAT rules on the router. We use the iptables program, which manages IP packet rules in the Linux kernel. We create several rules in iptables' NAT table to rewrite the original destination IP address of packets bound for the server to the IP address of the smartphone, so that traffic sent from the client is redirected to the smartphone. In the example shown in Figure 1, we first apply a DNAT rule, iptables -t nat -A PREROUTING -p tcp -s 192.168.1.2 -d 172.16.1.2 --dport 6666 -j DNAT --to-destination 192.168.1.3:6666, and an SNAT rule, iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.2 -d 192.168.1.3 --dport 6666 -j SNAT --to-source 192.168.1.1, to forward traffic to the smartphone. The smartphone then works as a proxy that receives packets and sends them back to the router after inspection. When these packets return to the router, the router rewrites their destination IP address to the original server address based on another DNAT rule, such as iptables -t nat -A PREROUTING -p tcp -s 192.168.1.3 -d 192.168.1.1 --sport 7777 -j DNAT --to-destination 172.16.1.2:6666. Since all of the NAT rules work bidirectionally, the packets sent from the server will also travel the reverse direction, again traversing the smartphone. Rather than processing traffic in an arbitrary user-space program on the router's Linux stack, our method forwards packets using kernel data structures, avoiding potentially costly transitions to user space.
The second component in our approach is the proxy device and service. We first explore the smartphone as a proxy and implement a Java program that uses TCP to accept traffic for inspection on a pre-defined port. Figure 1 shows how the phone accepts communication from the router on a specific port. It then starts a new TCP connection to a specifically configured port on the router, which the router pre-configures to forward to the remote server. Since the smartphone is on-path in our method, we can retrieve the raw payload of every packet. While we only apply IP list filtering, more advanced inspection, such as TLS inspection (TLSI), can be deployed in our method.
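A minimal sketch of the relay logic, using the addresses and ports from Figure 1, appears below. Our actual proxy is a Java Android application; this sketch only illustrates the accept-inspect-forward pattern, handles a single connection, and relays one direction (a full proxy also relays the reverse direction, for example with a second thread).

```cpp
// Sketch of the phone-side proxy relay for the Figure 1 example.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main() {
    // Accept the flow that the router's first DNAT rule redirects to us.
    int ls = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in local{};
    local.sin_family = AF_INET;
    local.sin_port = htons(6666);                 // port targeted by the DNAT rule
    local.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(ls, (sockaddr*)&local, sizeof(local));
    listen(ls, 1);
    int client = accept(ls, nullptr, nullptr);

    // Open the return connection to the router from source port 7777,
    // which the router's second DNAT rule rewrites toward the server.
    int router = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in src{};
    src.sin_family = AF_INET;
    src.sin_port = htons(7777);
    src.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(router, (sockaddr*)&src, sizeof(src));
    sockaddr_in ret{};
    ret.sin_family = AF_INET;
    ret.sin_port = htons(6001);
    inet_pton(AF_INET, "192.168.1.1", &ret.sin_addr);
    connect(router, (sockaddr*)&ret, sizeof(ret));

    char buf[4096];
    for (ssize_t n; (n = recv(client, buf, sizeof(buf), 0)) > 0; ) {
        // Inspection hook: the raw payload is visible to the proxy here.
        send(router, buf, n, 0);
    }
    close(client);
    close(router);
    close(ls);
    return 0;
}
```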

<table><tr><td>Packet #</td><td>Source IP</td><td>Source Port</td><td>Destination IP</td><td>Destination Port</td></tr><tr><td>1</td><td>192.168.1.2</td><td>47289</td><td>172.16.1.2</td><td>6666</td></tr><tr><td>2</td><td>192.168.1.1</td><td>6001</td><td>192.168.1.3</td><td>6666</td></tr><tr><td>3</td><td>192.168.1.3</td><td>7777</td><td>192.168.1.1</td><td>6001</td></tr><tr><td>4</td><td>172.16.1.1</td><td>6002</td><td>172.16.1.2</td><td>6666</td></tr><tr><td>5</td><td>172.16.1.2</td><td>6666</td><td>172.16.1.1</td><td>6002</td></tr><tr><td>6</td><td>192.168.1.1</td><td>6001</td><td>192.168.1.3</td><td>7777</td></tr><tr><td>7</td><td>192.168.1.3</td><td>6666</td><td>192.168.1.1</td><td>6001</td></tr><tr><td>8</td><td>172.16.1.2</td><td>6666</td><td>192.168.1.2</td><td>47289</td></tr></table>
Figure 1: An example of packet forwarding via NAT rules. As the client sends the original packet to the server, the router modifies the packet and forwards it to the smartphone. After the smartphone performs packet inspection, it sends the packet back to the router. Then the router forwards it to the server. Since all of the NAT rules work bidirectionally, the packets sent from the server will follow the reverse path.
## 4 Implementation
We implement our method in a lab environment on physical devices. We run the OpenWrt 21.02.2 operating system (OS) on a TP-LINK AC1750 Wireless Dual Band Gigabit Router. We simulate a home network client on a laptop with four cores and 16 GBytes of memory, running the Windows OS. We simulate a server outside of the home network on a laptop with four cores and 16 GBytes of memory, running the Ubuntu 20.04 OS. We use a smartphone (a Moto G Power) with eight 2.0 GHz cores and 4 GBytes of memory, running the Android 11 OS, as the proxy device.
For the network configuration, as shown in Figure 2, we create two VLANs: one on interface eth0 and the other on interface eth1. We assign the LAN ports and the wireless radio to one VLAN, and the WAN port to the other. The client connects to a LAN port via a category 6 Ethernet cable, and the server connects to the WAN port using a category 6 cable. For the radio, we build an access point on 5.785 GHz using a Qualcomm Atheros QCA 9880 802.11ac adapter. We connect the smartphone to this access point at a distance of 3 feet with an unobstructed line of sight.
After configuring the home network as defined in the threat model, we add three NAT rules to iptables on the router, as described in Section 3. These SNAT and DNAT rules redirect traffic between the client and the server via the smartphone. On the smartphone side, we use Android Studio to build a Java application that hosts a proxy service and performs packet inspection based on a malicious IP block list.

Figure 2: Our implementation's network configuration
## 5 Performance Evaluation and Results
The most straightforward mechanism for implementing an inspection and analysis middlebox is to use a device that is already physically on the network path. In home networks, a residential wireless router typically fills that role. To justify the added complexity of opportunistic middleboxes, we explore the performance implications of using such commodity devices. We use a typical network setting, without the use of inspection functionality, to establish a baseline. We then explore on-device inspection. Finally, we examine an inspection method in which NAT rules are used to reroute traffic to a middlebox, using both a smartphone emulator and a physical commodity smartphone for analysis.
In examining these scenarios, we evaluate the performance of each using four metrics: flow throughput, end-to-end round trip time (RTT), the CPU usage at the router, and the CPU usage of the smartphone when it is in use.
### 5.1 The Baseline: LAN to WAN traffic
Our baseline scenario connects a client to a server through a residential router. Often, the WAN port on the router is used to connect to upstream networks, such as the Internet, and to the servers available through those networks. Therefore, we connect an Ubuntu server to the WAN port of the router using a category 6 Ethernet cable, which supports full-duplex gigabit connectivity. The server uses a gigabit Ethernet card. We statically configure the IP addresses of the server and the router's WAN port within a subnet that is used only by those two devices.
The connectivity options for clients may vary in different homes. Some devices may be connected via Ethernet connections to the LAN ports of the router. In other cases, devices may connect using WiFi radio links. Accordingly, we explore both of these connection scenarios.
We begin by exploring the case in which the client is connected to a LAN port on the router via a category 6 Ethernet cable. We use the router's built-in DHCP server, which assigns an address to the client in a subnet that the router and client share, yet is disjoint from the subnet used by the server. We use the router's built-in default NAT capabilities to translate across the subnets, which is a common deployment model in homes. Using the iperf3 benchmarking tool [25], we test a TCP connection between the client and the server. We configure iperf3 to attempt to maximize throughput in the channel and observe it for 1,100 seconds. We conducted 3 trials and measured the throughput for 1000 seconds after an initial delay of 100 seconds to accommodate TCP's slow-start behavior. As we see in the right-most two lines in Figure 3, the median download throughput is 440.00 Mbps and the median upload throughput is 254.00 Mbps, with tight distributions (standard deviations of 4.90 Mbps for download and 3.27 Mbps for upload).

Figure 3: Results from throughput tests when the client connects to the router via a category 6 Ethernet cable. The green lines show upload and download throughput under a baseline setting. The red lines show both throughputs after applying on-router inspection via the NFQUEUE library. The blue lines show both throughputs after applying on-phone inspection using NAT redirection rules.
Next, we determine the impact of connecting the client to the router using WiFi radios. The client's network adapter and the router both support 802.11ac, so the devices communicate using 802.11ac as their lowest common denominator. That standard has a theoretical maximum throughput of 1300 Mbps, though practical throughput is often lower due to interference and obstructions. We place the router and the client roughly 3 feet apart with line of sight. We then repeat our throughput analysis using the iperf3 benchmarking tool, with the same settings, trials, and timing windows as in our Ethernet experiments. In Figure 4, the right-most two lines show the baseline throughput results via WiFi. The median download throughput is 196.00 Mbps and the median upload throughput is 229.00 Mbps, with standard deviations of 10.38 Mbps for download and 14.47 Mbps for upload.

Figure 4: Results from throughput tests when the client connects to the router wirelessly. The rightmost green lines show upload and download throughput under a baseline setting. The leftmost red lines show the throughput after applying on-router inspection via the NFQUEUE library. The middle blue lines show the throughput after applying on-phone inspection using NAT redirection rules.
Since the communication throughput via Ethernet appears to be less than the medium's theoretical maximum, we explore whether the router could be causing a bottleneck. In particular, we examine the CPU of the router. While we test the maximum throughput, we use the top tool to record the CPU usage of the router for 1000 seconds. As shown in Table 1, the CPU usage of the router is at its limit more than 90% of the time when testing maximum throughput. These results show that the CPU of our router is the performance bottleneck for higher throughput.
To determine the added CPU usage from different traffic inspection methods, we need to measure the router's CPU usage in a moderate working scenario, rather than in an extreme one. We thus evaluate a scenario in which the client sends a TCP stream of randomized payload to the server at a reduced rate of 10 Mbps. We again record the CPU usage of the router for 1000 seconds. The green line in Figure 5 shows that the median CPU usage of the router is 9.00% with a standard deviation of 1.58%.
Table 1: CPU usage of the router while testing the maximum throughput of connections in six scenarios.
<table><tr><td>Percentile of Trials</td><td>10th</td><td>50th</td><td>90th</td></tr><tr><td>CPU Usage in Baseline Upload</td><td>100%</td><td>100%</td><td>100%</td></tr><tr><td>CPU Usage in Baseline Download</td><td>100%</td><td>100%</td><td>100%</td></tr><tr><td>CPU Usage in NAT Upload</td><td>98%</td><td>100%</td><td>100%</td></tr><tr><td>CPU Usage in NAT Download</td><td>97%</td><td>100%</td><td>100%</td></tr><tr><td>CPU Usage in NFQUEUE Upload</td><td>100%</td><td>100%</td><td>100%</td></tr><tr><td>CPU Usage in NFQUEUE Download</td><td>100%</td><td>100%</td><td>100%</td></tr></table>

Figure 5: CPU usage of the router when applying on-phone inspection using NAT redirection rules, applying on-router inspection via the NFQUEUE library, and a baseline without inspection when throughput is limited to 10 Mbps.
While throughput is an important metric, the end-to-end round trip time (RTT) is also important for understanding the delay introduced by the network paths and the router. To test this, we run an echo program on the server and a recording program on the client that measures the time between the client sending a specific payload and receiving the reply. Across 1000 trials, the left-most line in Figure 6 has a median RTT of 1.12 ms with a standard deviation of 0.12 ms. When repeating this analysis via WiFi, in Figure 7, the median RTT of the left-most line is 2.72 ms with a 6.14 ms standard deviation.
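A minimal sketch of the client-side probe is shown below; the payload size and server endpoint are illustrative stand-ins for our measurement program, and a matching TCP echo service on the server is assumed.

```cpp
// Client-side RTT probe sketch: timestamp a payload, send it to a TCP
// echo server, and time the arrival of the echoed bytes.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <chrono>
#include <cstring>
#include <iostream>

int main() {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in srv{};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(6666);                      // assumed echo port
    inet_pton(AF_INET, "172.16.1.2", &srv.sin_addr); // server from Figure 1
    connect(s, (sockaddr*)&srv, sizeof(srv));

    char payload[64];
    std::memset(payload, 'x', sizeof(payload));
    char reply[sizeof(payload)];
    for (int i = 0; i < 1000; ++i) {                 // 1000 trials, as in our tests
        auto t0 = std::chrono::steady_clock::now();
        send(s, payload, sizeof(payload), 0);
        size_t got = 0;                              // the echo may arrive in pieces
        while (got < sizeof(reply)) {
            ssize_t n = recv(s, reply + got, sizeof(reply) - got, 0);
            if (n <= 0) return 1;
            got += (size_t)n;
        }
        auto rtt = std::chrono::steady_clock::now() - t0;
        std::cout << std::chrono::duration_cast<std::chrono::microseconds>(rtt).count()
                  << " us\n";
    }
    close(s);
    return 0;
}
```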
### 5.2 On-Router Inspection via NFQUEUE
To explore whether the router itself can feasibly inspect traffic, we implement a basic C++ program, compiled to run natively on the router, that inspects IP addresses. The program's details are described in Section 3.2.
Figure 6: RTT, plotted on a log scale in milliseconds, between the client and the server when the client connects to the router via Ethernet. The leftmost green line shows the baseline result. The middle red line shows the result after applying on-router inspection via the NFQUEUE library. The two rightmost blue lines show the results with two separate phones after applying on-phone inspection using NAT redirection rules.

We explore the throughput, RTT, and router CPU metrics of the on-router inspection program using the same tools and settings used in Section 5.1. In the two left-most lines of Figures 3 and 4, we see the upload and download throughput after applying this inspection approach. We conducted 3 trials and measured the throughput for 1000 seconds after an initial delay of 100 seconds to accommodate TCP's slow-start behavior. For the Ethernet case in Figure 3, the median download throughput is 9.62 Mbps and the median upload throughput is 8.40 Mbps (standard deviations of 3.91 Mbps for download and 3.93 Mbps for upload). Given this substantially decreased throughput from the baseline, we hypothesize that the change introduces a bottleneck on the router.

Figure 7: The RTT, plotted on a log scale in milliseconds, between the client and the server when the client connects to the router via WiFi. The green line shows the baseline result. The red line shows the result after applying on-router inspection via the NFQUEUE library. The two blue lines show the results with two separate phones after applying on-phone inspection using NAT redirection rules.
When we examine the CPU usage of the router, we confirm that this resource is exhausted. In Figure 5, we see that the baseline CPU usage is around 9% when throughput is limited to 10 Mbps, but is 100% when the router performs packet inspection. The process elevates all traffic to the router's Linux user space, which requires significant computational resources. Such routers tend to be manufactured with lower-end CPUs for economic reasons [26], and there appears to be little headroom for this additional operation. However, when the router is not overwhelmed, as in the simple echo server RTT tests, on-router inspection introduces minimal RTT increases over the baseline. These results are shown by the red line in Figure 6, which is close to the baseline results.
### 5.3 On-Phone Inspection via NAT Redirection
Given the CPU limitations of residential routers, we explore the potential of re-routing packets via a smartphone to inspect traffic. As described in Section 3, we add three NAT rules via iptables on the router to cause traffic to be sent via the phone. An example of traffic forwarding, after applying the NAT rules, is shown in Figure 1.
The NAT rules cause traffic to be sent to a specific port on the smartphone. Our Java program runs on the phone, binds to the specified port, and receives packets. It performs simple inspection on each packet and then sends it back to the router on a specific port. The router uses its NAT rewriting rules to send the packet on to the server. When the router receives a reply from the server, the traffic likewise traverses the phone for inspection before traveling to the client.
We use the same three metrics as in the baseline and on-router cases to explore the performance characteristics of this phone-based inspection approach. In addition, we consider the CPU usage of the phone application itself, since high usage may result in battery depletion on the phone and could prevent its practical deployment.
Using the same settings as in the two prior sections, we explore the throughput when traffic is directed through the Moto G Power smartphone. In the middle two lines of Figure 3, we see that the median download throughput is 94.80 Mbps and the median upload throughput is 70.10 Mbps, with tight distributions (standard deviations of 4.32 Mbps for download and 2.87 Mbps for upload). The throughput is substantially higher than with the on-router inspection approach in both Figure 3 and Figure 4. In effect, processing the NAT rules on the router appears to incur less computational overhead than the full process of inspecting the traffic. Since the router's CPU was the bottleneck in the on-router inspection scenario, this adjustment increases the amount of traffic the router can handle.
In Figure 5, we confirm that the NAT-based approach yields significantly lower CPU utilization than on-router inspection when throughput is limited to 10 Mbps. The middle line in that graph shows that the NAT approach has a median of 24.0% CPU utilization with a standard deviation of 2.61%.
Inserting another device on the network path, via a forwarding loop through the phone, necessarily increases propagation delay, which may be observable in the overall end-to-end RTT. This is apparent in Figure 6, where the RTT of the NAT approach is represented by the two right-most lines. We see that 20% of traffic has an RTT less than 30.44 ms, while 75% of traffic has an RTT over 120.17 ms. This is significantly higher than either the baseline scenario or on-router inspection. Importantly, this experiment uses a simple echo server and does not tax the CPU of the router; the on-router scenario would incur greater RTT delays when the CPU is a bottleneck, due to processing delay.
In Figure 8, we explore the cause of the RTT delay in greater detail. We host a simple TCP echo program in three different ways. The left-most line represents the scenario in which the echo server runs on the server in the baseline configuration (i.e., the traffic traverses the router to the server, bypassing the phone). The middle line represents the case in which the echo server runs in an application within an Android emulator on a laptop. The two rightmost lines represent the echo server running on two separate physical smartphones: a Moto G Power and a Pixel 2. While the first two scenarios have fairly tight distributions with RTTs less than 10 ms, the echo server on the Moto G Power has a latency around 20 ms for most traffic, with much longer delays for around 20% of traffic. The echo server on the Pixel 2 has a latency of less than 50 ms for around half of the trials, but delays over 200 ms for the other half. In essence, the simple echo server application sometimes incurs significant delay in sending or receiving traffic. While this occurs only around 20% of the time for the echo server, the proxy would incur two instances of this behavior, causing more traffic to incur a delay.
The distinct RTT behaviors exhibited by the two physical phones, which are not present in the Android emulator, may indicate an effect of phone-specific factors. These could include power-saving modes, in which applications are periodically suspended or their traffic queued to reduce energy consumption.
Our last metric explores the energy usage of the proxy application on the phone. We again use the Moto G Power smartphone as a proxy while maximizing throughput from the client to the server. In this experiment, we also run a music-playing application on the phone, in the background, for comparison. We then record the CPU usage of the proxy application and the music application for 1000 seconds using the top tool on the phone, and we monitor the idle percentage of the proxy device's 8 cores. Table 2 shows the CPU usage of the proxy application and the music application, along with the time for which the CPU is idle. In this table, 100% represents the full utilization of one core on the device and 800% represents the full utilization of all eight device cores. The first row in Table 2 shows that the proxy application uses only about 21% of a single core, whereas the music application uses roughly 107% (about one full core). The majority of the device's computational resources remain unused. As a result, we anticipate that the CPU-based energy consumption of the proxy would be a small fraction of that of a music application. Since phones are regularly used for music playing without significant power-related disruptions to end-users, it is likely that the proxy application would likewise be accommodated by phones.

Figure 8: Comparison of RTT between connecting the client directly to the Ubuntu server, the Android emulator, the Moto G Power, and the Pixel 2.
Table 2: CPU usage of the smartphone for different applications when maximizing throughput while applying on-phone inspection.
<table><tr><td>Percentile of Trials</td><td>10th</td><td>50th</td><td>90th</td></tr><tr><td>CPU Usage of Proxy App</td><td>18%</td><td>21%</td><td>24%</td></tr><tr><td>CPU Usage of Music App</td><td>98%</td><td>107%</td><td>114%</td></tr><tr><td>CPU Idle</td><td>535%</td><td>560%</td><td>584%</td></tr></table>
## 6 Conclusion
The need for privacy and the limited computational resources in residential networks complicate traffic inspection and analysis. Residential routers' limited CPU resources make it difficult to deploy even straightforward IP address-based inspection tools without substantially limiting throughput through the router. However, with carefully-crafted NAT rules, a router can redirect communication through another device, such as a smartphone, to inspect traffic.
In our experiments, we find that NAT-based diversion through a smartphone can substantially raise the communication throughput from around 10 Mbps to around 90 Mbps. The router can periodically examine its ARP and DHCP data structures to detect the availability of a phone in the LAN, contact an application on the phone to configure proxy services, and then divert traffic through the phone to enable outsourced inspection. With such an approach, residential networks can opportunistically use available smartphones as middleboxes to enable higher-throughput traffic inspection.
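As a sketch of how this detection step could work, the following reads the kernel ARP table on a Linux-based router. The phone's MAC address is an assumed input that a deployment would learn during a one-time pairing step, and the rule installation is left as a printed placeholder.

```cpp
// Sketch of phone discovery by scanning /proc/net/arp on the router.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

// Returns the phone's current IP if its MAC appears in the ARP table.
static std::string find_phone(const std::string& mac) {
    std::ifstream arp("/proc/net/arp");
    std::string line;
    std::getline(arp, line);                  // skip the header row
    while (std::getline(arp, line)) {
        std::istringstream fields(line);
        std::string ip, hw_type, flags, hw_addr;
        fields >> ip >> hw_type >> flags >> hw_addr;
        if (hw_addr == mac && flags != "0x0") // flags 0x0 marks incomplete entries
            return ip;
    }
    return "";
}

int main() {
    std::string ip = find_phone("aa:bb:cc:dd:ee:ff");  // assumed paired MAC
    if (!ip.empty())
        std::cout << "phone at " << ip << ": install NAT redirection rules\n";
    else
        std::cout << "phone absent: keep default forwarding\n";
    return 0;
}
```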
## References
[1] E. Zeng, S. Mare, and F. Roesner, "End user security and privacy concerns with smart homes," in Symposium on Usable Privacy and Security (SOUPS 2017), 2017, pp. 65-80.
[2] R. Mahmoud, T. Yousuf, F. Aloul, and I. Zualkernan, "Internet of things (IoT) security: Current status, challenges and prospective measures," in International Conference for Internet Technology and Secured Transactions (IC-ITST). IEEE, 2015, pp. 336-341.
[3] B. Lantz, B. Heller, and N. McKeown, "A network in a laptop: rapid prototyping for software-defined networks," in ACM SIGCOMM Workshop on Hot Topics in Networks, 2010, pp. 1-6.
[4] B. Anderson and D. McGrew, "Identifying encrypted malware traffic with contextual flow data," in ACM Workshop on Artificial Intelligence and Security, 2016, pp. 35-46.
[5] N. Feamster, "Outsourcing home network security," in ACM SIGCOMM Workshop on Home Networks. ACM, 2010, pp. 37-42.
[6] C. R. Taylor and C. A. Shue, "Validating security protocols with cloud-based middleboxes," in IEEE Conference on Communications and Network Security (CNS). IEEE, 2016, pp. 261-269.
[7] C. Ryan and J. M. Lewis, "Computer and internet use in the United States: 2015," https://www.census.gov/content/dam/Census/library/publications/2017/acs/acs-37.pdf.
[8] I. Hafeez, M. Antikainen, A. Y. Ding, and S. Tarkoma, "Iot-keeper: Securing iot communications in edge networks," arXiv preprint arXiv:1810.08415, 2018.
[9] Z. A. Almusaylim and N. Zaman, "A review on smart home present state and challenges: linked to context-awareness internet of things (IoT)," Wireless Networks, vol. 25, no. 6, pp. 3193-3204, 2019.
[10] S. T. Jan, Q. Hao, T. Hu, J. Pu, S. Oswal, G. Wang, and B. Viswanath, "Throwing darts in the dark? detecting bots with limited data using neural data augmentation," in IEEE Symposium on Security and Privacy (IEEE SP), 2020.
[11] H. Li, K. Ota, and M. Dong, "Learning iot in edge: Deep learning for the internet of things with edge computing," IEEE network, vol. 32, no. 1, pp. 96-101, 2018.
[12] D. Willis, A. Dasgupta, and S. Banerjee, "Paradrop: a multi-tenant platform to dynamically install third party services on wireless gateways," in ACM workshop on Mobility in the evolving internet architecture, 2014, pp. 43-48.
[13] A. Wieczorek and B. Markowski, "Intelligent IoT gateway on OpenWrt," https://elinux.org/images/4/41/Intelligent_IoT_Gateway_on_OpenWrt.pdf, 2015.
[14] S. Shirali-Shahreza and Y. Ganjali, "Protecting home user devices with an SDN-based firewall," IEEE Transactions on Consumer Electronics, vol. 64, no. 1, pp. 92-100, 2018.
[15] M. Nobakht, V. Sivaraman, and R. Boreli, "A host-based intrusion detection and mitigation framework for smart home iot using openflow," in International conference on availability, reliability and security (ARES). IEEE, 2016, pp. 147-156.
[16] R. F. Moyano, D. F. Cambronero, and L. B. Triana, "A user-centric sdn management architecture for nfv-based residential networks," Computer Standards & Interfaces, vol. 54, pp. 279-292, 2017.
[17] K. Xu, F. Wang, and X. Jia, "Secure the internet, one home at a time," Security and Communication Networks, vol. 9, no. 16, pp. 3821-3832, 2016.
[18] M. Boussard, D. Thai Bui, R. Douville, P. Justen, N. Le Sauze, P. Peloso, F. Vandeputte, and V. Verdot, "Future spaces: Reinventing the home network for better security and automation in the iot era," Sensors, vol. 18, no. 9, p. 2986, 2018.
[19] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, "Edge computing: Vision and challenges," IEEE Internet of Things, vol. 3, no. 5, pp. 637-646, 2016.
[20] M. Satyanarayanan, "Cloudlet-based edge computing," http://elijah.cs.cmu.edu.
[21] R.-I. Ciobanu, C. Negru, F. Pop, C. Dobre, C. X. Mavromoustakis, and G. Mastorakis, "Drop computing: Ad-hoc dynamic collaborative computing," Future Generation Computer Systems, vol. 92, pp. 889-899, 2019.
[22] T. Verbelen, P. Simoens, F. De Turck, and B. Dhoedt, "Cloudlets: Bringing the cloud to the mobile user," in ACM Workshop on Mobile Cloud Computing and Services, 2012, pp. 29-36.
[23] J. Gedeon, C. Meurisch, D. Bhat, M. Stein, L. Wang, and M. Mühlhäuser, "Router-based brokering for surrogate discovery in edge computing," in International Conference on Distributed Computing Systems Workshops (ICDCSW). IEEE, 2017, pp. 145-150.
[24] M. Aazam and E.-N. Huh, "Fog computing and smart gateway based communication for cloud of things," in International Conference on Future Internet of Things and Cloud. IEEE, 2014, pp. 464-470.
[25] J. Dugan, S. Elliott, B. A. Mah, J. Poskanzer, and K. Prabhu, "iperf - the ultimate speed test tool for tcp, udp and sctp," https://iperf.fr/, 2020.
[26] M. Hall and R. Jain, "Performance analysis of OpenVPN on a consumer grade router," cse.wustl.edu, 2008.
|
papers/JSYS/JSYS 2022/JSYS 2022 Aug_Papers/RgeMS1Tf1zs/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
@@ -0,0 +1,250 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
§ SOLUTION: INSPECTING TRAFFIC IN RESIDENTIAL NETWORKS WITH OPPORTUNISTICALLY OUTSOURCED MIDDLEBOXES
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
§ ABSTRACT
|
| 8 |
+
|
| 9 |
+
Since they lack the powerful tools and personnel available in enterprise-grade security solution, home networks have particularly difficult network security challenges. While prior efforts outsource network traffic to cloud or cloudlet services, such measures redirect network traffic out of the home network, which grants a third-party access to see and profile traffic. This affects the privacy of that traffic. Further, if those tools need to apply Transport Layer Security (TLS) decryption to enhance their monitoring insight, the privacy risks to home users grows substantially. Alternatively, residents may introduce new physical hardware in their home networks, but doing so incurs greater capital costs that would impede deployment.
|
| 10 |
+
|
| 11 |
+
Our work explores a system to leverage existing available devices, such as smartphones, tablets and laptops, inside a home network to create a platform for traffic inspection. By using devices owned and operated by the same end-users, the system can peeking into TLS traffic and perform detailed inspection without introducing risks from third parties. By leveraging existing devices in a home network, we can implement our platform with no additional hardware costs. Our performance evaluation shows that such middleboxes can substantially increase the throughput of communication from around ${10}\mathrm{{Mbps}}$ to around ${90}\mathrm{{Mbps}}$ , while increasing CPU usage at the router by around ${15}\%$ , with a ${20}\%$ CPU usage increase on a smartphone (with single core processing), and with a latency increase of about 120 milliseconds to network packets.
|
| 12 |
+
|
| 13 |
+
§ 1 INTRODUCTION
|
| 14 |
+
|
| 15 |
+
The increasing use of household broadband Internet service and smart home technologies are likewise increasing risks related to home network. In 2021, 442 million smart devices, and ${82}\%$ home networks are connected to the Internet. New smart devices, like smart cameras, can collect data from users to provide intelligent services, but it can be difficult for defenders to determine what network traffic is associated with these devices. It can be difficult for users to determine if their devices are safe from network attacks. A study with fifteen smart home users indicates that eleven participants were worry about physical security and five participants were concerned about the privacy associated with these devices [1]. Typically, consumer-grade routers do not provide effective network protections [2]. As a result, attackers have various opportunities to compromise networked devices, allowing lateral propagation and a range of attacks.
|
| 16 |
+
|
| 17 |
+
The limited capabilities of consumer-grade network hardware force difficult trade-offs in modern home networks. While prior work has proposed lightweight functionality on residential routers [3], the computational constraints of those devices limit the types of tasks that can be hosted on such routers. Efforts to profile and examine encrypted traffic using machine learning [4] would exceed the resources of many such routers. These routers are unable to engage in the more sophisticated analysis common in enterprise security gateways.
|
| 18 |
+
|
| 19 |
+
Other techniques push the computational tasks associated with network screening to remote servers. Feamster [5] proposes that home networks can outsource their security mechanisms to cloud servers with software-defined networking (SDN) technologies. TLSDeputy [6] uses remote servers to validate TLS certificates and protocol settings to ensure the authenticity of communicating endpoints. However, both techniques allow the operators of cloud infrastructure to have insight into the activities of a home network, introducing new privacy risks and an expanded trusting computing base.
|
| 20 |
+
|
| 21 |
+
In contrast to prior efforts, we consider mechanisms to deploy home network traffic inspection in an opportunistic fashion. We explore mechanisms to leverage existing devices in a home network when they are available to screen communication. In doing so, we ask the following research questions:
|
| 22 |
+
|
| 23 |
+
* To what extent can we utilize current resources within a home network to build real-time packet inspection?
|
| 24 |
+
|
| 25 |
+
* To what extent would such a packet inspection system influence the performance of the home network, in terms of traffic latency, resource consumption, and throughput?
|
| 26 |
+
|
| 27 |
+
Our approach enables devices such as smartphones, tablets, laptops and desktops to perform traffic analysis. These devices can operate as security proxies when they are available, enabling detailed analysis. In pursuing this direction, our work makes the following contributions:
|
| 28 |
+
|
| 29 |
+
* Creation of Prototype On-Router and Outsourced Middleboxes: We use open source firmware on a consumer-grade router to profile traffic locally and via a smartphone. We compare baseline communication with an on-router program that profiles destination addresses. We further implement a technique to transparently direct traffic through a smartphone middlebox using network address translation (NAT) rules on the router.
|
| 30 |
+
|
| 31 |
+
* Performance Evaluation of Deployment Options: We compare baseline forwarding of the router with on-router inspection and with opportunistic outsourcing to a smart-phone. Our evaluation shows that on-device inspection has a throughput of around ${10}\mathrm{{Mbps}}$ whereas outsourc-ing the inspection to a smartphone achieve roughly 90 Mbps of throughput. The smartphone middleboxing approach adds around 15% CPU usage to the router, 20% CPU usage to a smartphone (with single core processing), and introduces 120 milliseconds of round trip time (RTT) delay to network traffic.
|
| 32 |
+
|
| 33 |
+
§ 2 BACKGROUND AND RELATED WORK
|
| 34 |
+
|
| 35 |
+
In this section we introduce the background knowledge and prior work on residential network computation and security.
|
| 36 |
+
|
| 37 |
+
§ 2.1 COMPUTATION IN RESIDENTIAL NETWORKS
|
| 38 |
+
|
| 39 |
+
A 2015 survey found that 77% of US households subscribed broadband internet service, and ${78}\%$ own a desktop or laptop computer [7]. However, modern home networks face many security challenges, since the value of assets managed by residential networks is increasing. Attackers can gain sensitive information or directly control the devices and launch attack on other devices, such eavesdropping, replay attack, network scanning, and data theft $\left\lbrack {8,9}\right\rbrack$ .
|
| 40 |
+
|
| 41 |
+
There are effective ways to detect these attacks, but they require significant computational resources. Hafeez et al. [8] find that machine learning methods can detect a series attack with accuracy as high as 99%. Jan et al. [10] propose a method to detect a compromised device that joins a botnet with very limited data through a deep learning algorithm. A powerful inspection platform is helpful in increasing home network security.
|
| 42 |
+
|
| 43 |
+
§ 2.2 PERIMETER DEFENSE FOR HOME NETWORKS
|
| 44 |
+
|
| 45 |
+
Perimeter defenses can be useful for residential networks. While the basic NAT functionality on residential routers typically prevents unsolicited inbound communication, it is ineffective at detecting or stopping existing compromises within a network or attacks that are launched via a connection initiated from within the network.
|
| 46 |
+
|
| 47 |
+
Li et al. propose applying deep learning anomaly detection techniques for securing home networks; however, their method runs on equipment with computational resources that may be inapplicable to many home networks [11]. ParaDrop [12] proposes allowing third-party application providers to install lightweight containers to provide a gateway for simple tasks. However, ParaDrop does not have sufficient resources to run resource-consuming tasks like intrusion detection. Another work [13] adds plug-and-play devices to a consumer-grade router, which enables the router to work as an intelligent IoT gateway that can inspect traffic; however, it incurs capital costs and requires hardware modifications inside consumer routers that are likely beyond the technical abilities of some potential deployers.
|
| 48 |
+
|
| 49 |
+
Shirali-Shahreza et al. [14] summarized commercial home network firewall products. Each requires the installation of additional devices in the network with an initial cost of at least $\$ {200}$ and with ongoing monthly service costs. These devices may replace typical home routers or act in conjunction with existing routers. Some use virtual private network (VPN) techniques to tunnel traffic to a remote VPN server that can inspects and analyze home network traffic before forwarding the traffic. These methods introduce additional costs and equipment for users.
|
| 50 |
+
|
| 51 |
+
To simplify the management but keep security enforcement, Feamster proposes to outsource security needs to a remote cloud server through SDN architecture [5]. Experts and professional security software help to dynamically manage the network. Since cloud servers may contain richer resources, modules running in the controller can provide better analysis, along with a broader view of the network. Many other works propose to build remote firewalls on the controller based on this architecture $\left\lbrack {8,{14} - {18}}\right\rbrack$ . Most of them propose to utilize the gateway as an OpenFlow host in the home network. Others propose to use a locally-available device, such as a Raspberry $\mathrm{{Pi}}$ , instead. The agent usually samples the network traffic and uploads it to the cloud. Controllers running in the cloud may also run a firewall module to inspect the sampled flow information. The agent further executes the returned action decision, which is usually a security policy received from the controller. However, these methods are based on users' trust that their private data is being used properly by a third-party provider; some users have concerns about such providers.
|
| 52 |
+
|
| 53 |
+
§ 2.3 EDGE COMPUTING FOR LOCAL NETWORKS
|
| 54 |
+
|
| 55 |
+
The edge computing paradigm builds decentralized computing pools for processing jobs from clients, bringing the computation closer to the source of data [19]. Cloudlet [20] is a popular edge computing prototype that offloads tasks to nodes that can scale. These nodes can be hosted by ISPs or other providers. Drop computing [21] builds a collaborative computing cloud using mobile devices in which one device can offload tasks to other devices. When there is no available device, the system seeks help from the cloud. This method is designed for ad hoc networks, which lack reliability since devices may enter and leave the network frequently and network coverage is usually limited. Similarly, Verbelen et al. [22] split tasks and offload them to a virtualized environment, either on mobile devices or on the cloud. Gedeon et al.propose to let a more reliable device, a home network gateway, run a broker. The gateway seeks available cloudlet nodes to help with its tasks [23]. Their methods still outsource the computation to a third-party platform, which raises privacy concerns. That work demonstrates that running a broker on a residential router does not introduce significant overhead, a result we leverage in our own approach.
|
| 56 |
+
|
| 57 |
+
Aazam et al. [24] propose to use either smart gateway or other localized fog nodes to do data pre-processing, before uploading data to the cloud. The pre-processing not only reduces the data size and retains only the necessary data, it can reduce some, but not all, privacy risks.
|
| 58 |
+
|
| 59 |
+
We explore a mechanism to perform network traffic inspection within a home network only. Unlike existing edge computing work, we focus on inspecting streaming data rather than discrete computational jobs.
|
| 60 |
+
|
| 61 |
+
§ 3 APPROACH
|
| 62 |
+
|
| 63 |
+
Our research compares two approaches: on-router inspection via NFQUEUE and on-phone inspection via NAT redirection. We start by introducing the threat model. Then, we illustrate the process of on-router inspection. Finally, we describe the functionalities of each component of the phone-based inspection platform and how they work together.
|
| 64 |
+
|
| 65 |
+
§ 3.1 THREAT MODEL AND SCOPE
|
| 66 |
+
|
| 67 |
+
We consider a basic threat model where malicious communication occurs between a host within the network and a system outside of the home network. The defender's goal is to inspect all traffic leaving the home network. In this model, we trust the router and the smartphone that acts as a proxy. We do not trust the endpoints.
|
| 68 |
+
|
| 69 |
+
Our goal is to create mechanisms that enable arbitrary traffic inspection on a reasonably resourced device, such as a phone. We do not aim to create new anomaly detectors or traffic inspection engines; that is out of the scope of this work. Accordingly, we demonstrate baseline functionality using an block list of destination addresses.
|
| 70 |
+
|
| 71 |
+
§ 3.2 ON-ROUTER INSPECTION VIA NFQUEUE
|
| 72 |
+
|
| 73 |
+
We implement a basic C++ program to inspect IP addresses that is compiled to run natively on the router. The program uses the iptables packet inspection tool and the netfilter_queue library to inspect traffic. Essentially, the iptables tool operates on each packet processed by the Linux stack on the router. This action occurs when packets cross from the LAN interfaces to the WAN interface. The iptables program sets an NFQUEUE judgment for all packets, causing them to enter a kernel data structure. The C++ program extracts the packets from that data structure, inspects the address, and returns the packets to the kernel queue for transmission. This program represents the minimum inspection required for a general-purpose user-space inspection program on the router.
|
| 74 |
+
|
| 75 |
+
§ 3.3 ON-PHONE INSPECTION VIA NAT REDIRECTION
|
| 76 |
+
|
| 77 |
+
There are two components in our on-phone inspection approach. The first is a set of NAT rules on the router. We use the iptables program, which can manage IP packet rules in the Linux kernel. The NAT table is one table of iptables to create several rules in the NAT table to transform the original destination IP address of the packets from the server to the IP address of the smartphone, so the traffic sent from the client can be redirected to the smart-phone. In the example shown in Figure 1, we first apply a DNAT rule as iptables -t nat -A PREROUTING -p tcp -s 192.168.1.2 -d 172.16.1.2 -dport 6666 -j DNAT -to-destination 192.168.1.3:6666 and an SNAT rule as iptables -t nat -A POSTROUTING -p tcp -s 192.168.1.2 -d 192.168.1.193 -dport 6666 -j SNAT -to-source 192.168.1.1 to forward traffic to the smartphone. Then the smartphone works as a proxy that receives packets and sends them back to the router after inspection. When these packets return to the router, the router transforms their destination IP address to the original server destination IP address based on another DNAT rule, such as iptables -t nat -A PREROUTING -p tcp -s 192.168.1.3 -d 192.168.1.1 -sport 7777 -j DNAT -to-destination 172.16.57.216:6666 Since all of the NAT rules work bidirectionally, the packets sent from server will will also go in the reverse direction, again traversing the smartphone. Rather than processing traffic as an arbitrary user space program in the router's Linux stack, our method forwards them using kernel data structures. This feature avoids potentially costly transitions to user space.
|
| 78 |
+
|
| 79 |
+
The second component in our approach is the proxy device and service. We first explore the smartphone as a proxy and implement a Java program that uses TCP to accept traffic for inspection on a pre-defined port. Figure 1 shows how the phone accepts communication from the router on a specific port. It starts a new TCP connection to a specifically configured port on the router, which the router pre-configures to forward to the remote server. Since the smartphone is on-path in our method, we retrieve the raw payload of every packet. While we only apply IP list filtering, more advanced inspection can be deployed in our method, such as TLS inspection (TLSI) .
|
| 80 |
+
|
| 81 |
+
< g r a p h i c s >
|
| 82 |
+
|
| 83 |
+
max width=
|
| 84 |
+
|
| 85 |
+
Packet # Source IP Source Port Destination IP Destination Port
|
| 86 |
+
|
| 87 |
+
1-5
|
| 88 |
+
1 192,168.1.2 47289 172,16.1.2 6666
|
| 89 |
+
|
| 90 |
+
1-5
|
| 91 |
+
2 192,168.1.1 6001 192,168.1.3 6666
|
| 92 |
+
|
| 93 |
+
1-5
|
| 94 |
+
3 192,168.1.3 7777 192,168.1.1 6001
|
| 95 |
+
|
| 96 |
+
1-5
|
| 97 |
+
4 172,16.1.1 6002 172,16.1.2 6666
|
| 98 |
+
|
| 99 |
+
1-5
|
| 100 |
+
5 172,16.1.2 6666 172.16.1.1 6002
|
| 101 |
+
|
| 102 |
+
1-5
|
| 103 |
+
6 192,168.1.1 6001 192,168.1.3 7777
|
| 104 |
+
|
| 105 |
+
1-5
|
| 106 |
+
7 192,168.1.3 6666 192,168.1.1 6001
|
| 107 |
+
|
| 108 |
+
1-5
|
| 109 |
+
8 172,16.1.2 6666 192,168.1.2 47289
|
| 110 |
+
|
| 111 |
+
1-5
|
| 112 |
+
|
| 113 |
+
Figure 1: An example of packet forwarding via NAT rules. As the client sends the original packet to the server, the router modifies the packet and forwards it to the smartphone. After the smartphone performs packet inspection, it sends the packet back to the router. Then the router forwards it to the server. Since all of the NAT rules work bidirectionally, the packets sent from the server will follow the reverse path.
|
| 114 |
+
|
| 115 |
+
§ 4 IMPLEMENTATION
|
| 116 |
+
|
| 117 |
+
We implement our method in a lab environment on physical devices. We run the OpenWrt 21.02.2 operating system (OS) on a TP-LINK AC1750 Wireless Dual Band Gigabit Router. We simulate a home network client user on a laptop with four cores and 16 GBytes of memory, running the Windows OS. We simulate a server outside of the home network on a laptop with four cores and 16 GBytes of memory, running the Ubuntu 20.04 OS. We use a smartphone with eight ${2.0}\mathrm{{GHz}}$ cores and 4 GBytes of memory, running the Android 11 OS as the proxy device.
|
| 118 |
+
|
| 119 |
+
For the network configuration, as showen in Figure 2, we create two VLANs: one is on interface eth0, and the other is on interface eth1. We assign the LAN ports and wireless radio to one VLAN, and assign the WAN port to the other VLAN. The client connects to a LAN port via a category 6 Ethernet cable. The server also connects to the WAN port using a category 6 cable. For the radio, we build an access point on ${5.785}\mathrm{{GHz}}$ using a Qualcomm Atheros QCA 9880 802.11ac adapter. We connect the smartphone to this access point at a distance of 3 feet with an unobstructed line of sight.
|
| 120 |
+
|
| 121 |
+
After configuring the home network as defined in the threat model, we add three NAT rules to iptables in the router, as described in Section 3. These rules include SNAT and DNAT rules and have the capability of redirecting traffic between the client and the server to travel via the smartphone. On the smartphone side, we use Android Studio to build a Java application that performs packet inspection based on a malicious IP block list and hosts a proxy service.
|
| 122 |
+
|
| 123 |
+
< g r a p h i c s >
|
| 124 |
+
|
| 125 |
+
Figure 2: Our implementation's network configuration
|
| 126 |
+
|
| 127 |
+
§ 5 PERFORMANCE EVALUATION AND RESULTS
|
| 128 |
+
|
| 129 |
+
The most straightforward mechanism for implementing an inspection and analysis middlebox is to use a device that is already physically on the network path. In home networks, a residential wireless router typically fills that role. To justify the added complexity of opportunistic middleboxes, we explore the performance implications of using such commodity devices. We use a typical network setting, without the use of inspection functionality, to establish a baseline. We then explore on-device inspection. Finally, we examine an inspection method in which NAT rules are used to reroute traffic to a middlebox, using both a smartphone emulator and a physical commodity smartphone for analysis.
|
| 130 |
+
|
| 131 |
+
In examining these scenarios, we evaluate the performance of each using four metrics: flow throughput, end-to-end round trip time (RTT), the CPU usage at the router, and the CPU usage of the smartphone when it is in use.
|
| 132 |
+
|
| 133 |
+
§ 5.1 THE BASELINE: LAN TO WAN TRAFFIC
|
| 134 |
+
|
| 135 |
+
Our baseline scenario connects a client to a server though a residential router. Often, the WAN port is used on the router to connect to upstream networks, such as the Internet, and the servers available through those networks. Therefore, we connect an Ubuntu server to the WAN port of the router using a category 6 Ethernet cable, which supports duplex gigabit connectivity. The server uses a gigabit Ethernet card. We statically configure the IP addresses of the server and the router's WAN port within a subnet that is only used by those two devices.
|
| 136 |
+
|
| 137 |
+
The connectivity options for clients may vary in different homes. Some devices may be connected via Ethernet connections to the LAN ports of the router. In other cases, devices may connect using WiFi radio links. Accordingly, we explore both of these connection scenarios.
We begin by exploring the case in which the client is connected to a LAN port on the router via a category 6 Ethernet cable. We use the router's built-in DHCP server, which assigns the client an address in a subnet that the router and client share, yet is disjoint from the subnet used by the server. We use the router's built-in default NAT capabilities to translate across the subnets, which is a common deployment model in homes. Using the iperf3 benchmarking tool [25], we test a TCP connection between the client and the server. We configure iperf3 to attempt to maximize throughput in the channel and observe it for 1,100 seconds. We conducted 3 trials and measured the throughput for 1,000 seconds after an initial delay of 100 seconds to accommodate TCP's slow-start behavior. As we see in the right-most two lines in Figure 3, the median download throughput is 440.00 Mbps and the median upload throughput is 254.00 Mbps, with tight distributions (standard deviations of 4.90 Mbps for download and 3.27 Mbps for upload).
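A measurement run of this shape is easy to reproduce; the sketch below drives iperf3 as the text describes (long TCP run, reverse mode for download), with a hypothetical server address. The JSON keys follow iperf3's standard report format; the 100-second warm-up trim is omitted for brevity (per-interval samples are available under the report's intervals key).

```python
import json
import subprocess

SERVER = "203.0.113.10"  # hypothetical address of the Ubuntu server

def run_iperf3(reverse: bool) -> float:
    """Run one 1,100-second iperf3 TCP test and return the mean Mbps.

    reverse=True measures download (server sends); False measures upload.
    """
    cmd = ["iperf3", "-c", SERVER, "-t", "1100", "-J"]
    if reverse:
        cmd.append("-R")
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(out.stdout)
    return report["end"]["sum_received"]["bits_per_second"] / 1e6

print("download Mbps:", run_iperf3(reverse=True))
print("upload Mbps:", run_iperf3(reverse=False))
```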
Figure 3: Results from throughput tests when the client connects to the router via a category 6 Ethernet cable. The green lines show upload and download throughput under the baseline setting. The red lines show both throughputs after applying on-router inspection via the NFQUEUE library. The blue lines show both throughputs after applying on-phone inspection using NAT redirection rules.
Next, we determine the impact of connecting the client to the router using WiFi radios. Both the client's network adapter and the router support 802.11ac communication, so the link uses the 802.11ac standard. That standard has a theoretical maximum throughput of 1,300 Mbps, though practical throughput is often lower due to interference and obstructions. We place the router and the client roughly 3 feet apart with line of sight. We then repeat our throughput analysis using the iperf3 benchmark tool, with the same settings, trials, and timing windows as our Ethernet experiments. In Figure 4, the right-most two lines show the baseline throughput results via WiFi. The median download throughput is 196.00 Mbps and the median upload throughput is 229.00 Mbps, with standard deviations of 10.38 Mbps for download and 14.47 Mbps for upload.
Figure 4: Results from throughput tests when the client connects to the router wirelessly. The rightmost green lines show upload and download throughput under the baseline setting. The leftmost red lines show the throughput after applying on-router inspection via the NFQUEUE library. The middle blue lines show the throughput after applying on-phone inspection using NAT redirection rules.
Since the communication throughput via Ethernet is well below the medium's theoretical maximum, we explore whether the router could be causing a bottleneck. In particular, we examine the router's CPU. While we test the maximum throughput, we use the top tool to record the CPU usage of the router for 1,000 seconds. As shown in Table 1, the CPU usage of the router is at its limit more than 90% of the time when testing maximum throughput. These results show that the CPU of our router is the performance bottleneck for higher throughput.
To determine the added CPU usage from different traffic inspection methods, we need to measure the router's CPU usage in a moderate working scenario, rather than in an extreme situation. We thus evaluate a scenario in which the client sends a 10 Mbps TCP stream of randomized payload to the server. We again record the CPU usage of the router for 1,000 seconds. The green line in Figure 5 shows that the median CPU usage of the router is 9.00%, with a standard deviation of 1.58%.
Table 1: CPU usage of the router while testing the maximum throughput of connections in six scenarios.
| Percentile of Trials | 10th | 50th | 90th |
| --- | --- | --- | --- |
| CPU Usage in Baseline Upload | 100% | 100% | 100% |
| CPU Usage in Baseline Download | 100% | 100% | 100% |
| CPU Usage in NAT Upload | 98% | 100% | 100% |
| CPU Usage in NAT Download | 97% | 100% | 100% |
| CPU Usage in NFQUEUE Upload | 100% | 100% | 100% |
| CPU Usage in NFQUEUE Download | 100% | 100% | 100% |
Figure 5: CPU usage of the router when applying on-phone inspection using NAT redirection rules, when applying on-router inspection via the NFQUEUE library, and under the baseline without inspection, when throughput is limited to 10 Mbps.
While throughput is an important metric, the end-to-end round trip time (RTT) is also important for understanding the delay introduced by the network paths and the router. To test this, we run an echo program on the server and a recording program on the client to measure the time difference between the client sending a specific payload and receiving the reply. Across 1,000 trials, we see that the left-most line in Figure 6 has a median RTT of 1.12 ms with a standard deviation of 0.12 ms. When repeating this analysis via WiFi, in Figure 7, we see that the median RTT of the left-most line is 2.72 ms with a 6.14 ms standard deviation.
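A minimal sketch of such an RTT measurement follows, assuming a hypothetical echo address and port; the paper's actual programs are not published, so this only illustrates the send-and-time pattern.

```python
import socket
import statistics
import time

SERVER = ("203.0.113.10", 9000)  # hypothetical TCP echo server

def measure_rtts(trials: int = 1000) -> list[float]:
    """Send a small payload to a TCP echo server and time each round trip."""
    rtts = []
    with socket.create_connection(SERVER) as sock:
        for i in range(trials):
            payload = f"probe-{i}".encode()
            start = time.perf_counter()
            sock.sendall(payload)
            sock.recv(len(payload))  # echo server returns the same bytes
            rtts.append((time.perf_counter() - start) * 1000.0)  # in ms
    return rtts

samples = measure_rtts()
print("median RTT (ms):", statistics.median(samples))
print("stdev RTT (ms):", statistics.stdev(samples))
```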
§ 5.2 ON-ROUTER INSPECTION VIA NFQUEUE
To explore whether the router itself can feasibly inspect traffic, we implement a basic C++ program, compiled to run natively on the router, that inspects IP addresses. The program's details are described in Section 3.2.
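The paper's program is written in C++ against the NFQUEUE interface; as a rough illustration of the same userspace-inspection pattern, the sketch below uses the third-party Python netfilterqueue binding instead, with a hypothetical block list and queue number. An iptables rule directing traffic to the queue (e.g., `-j NFQUEUE --queue-num 0`) is assumed to be in place.

```python
import socket

from netfilterqueue import NetfilterQueue  # third-party binding

# Hypothetical block list; the inspection is IP-address based.
BLOCKED = {"198.51.100.7"}

def inspect(packet):
    """Accept or drop a queued packet based on its IPv4 source address."""
    payload = packet.get_payload()
    # Bytes 12-16 of the IPv4 header hold the source address.
    src = socket.inet_ntoa(payload[12:16])
    if src in BLOCKED:
        packet.drop()
    else:
        packet.accept()

nfq = NetfilterQueue()
nfq.bind(0, inspect)  # queue 0, matching the assumed iptables rule
try:
    nfq.run()  # elevates each queued packet to user space for inspection
finally:
    nfq.unbind()
```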
We explore the throughput, RTT, and router CPU metrics of the on-router inspection program using the same tools and settings used in Section 5.1. We again conducted 3 trials and measured the throughput for 1,000 seconds after an initial delay of 100 seconds to accommodate TCP's slow-start behavior. As we see in the two left-most lines of Figures 3 and 4, the upload and download throughput drop substantially after applying this inspection approach: in Figure 3, the median download throughput is 9.62 Mbps and the median upload throughput is 8.40 Mbps (standard deviations of 3.91 Mbps for download and 3.93 Mbps for upload). Given this substantially decreased throughput from the baseline, we hypothesize that the change introduces a bottleneck on the router.
Figure 6: RTT with a log scale in milliseconds between the client and the server when the client connects to the router via Ethernet. The leftmost green line shows the baseline result. The middle red line shows the result after applying on-router inspection via the NFQUEUE library. The two rightmost blue lines show the results with two separate phones after applying on-phone inspection using NAT redirection rules.
Figure 7: The RTT with a log scale in milliseconds between the client and the server when the client connects to the router via WiFi. The green line shows the baseline result. The red line shows the result after applying on-router inspection via the NFQUEUE library. The two blue lines show the results with two separate phones after applying on-phone inspection using NAT redirection rules.
When we examine the CPU usage of the router, we confirm that this resource is exhausted. In Figure 5, we see that the baseline CPU usage is around 9% when throughput is limited to 10 Mbps, but is 100% when the router performs packet inspection. The process elevates all traffic to the router's Linux user-space environment, which requires significant computational resources on the router. Such routers tend to be manufactured with lower-end CPUs for economic reasons [26], and there appears to be little headroom for this additional operation. However, when the router is not overwhelmed, as in the simple echo-server RTT tests, we see that on-router inspection introduces minimal RTT increases over the baseline. These results are shown by the red line in Figure 6, which is close to the baseline results.
§ 5.3 ON-PHONE INSPECTION VIA NAT REDIRECTION
Given the CPU limitations of residential routers, we explore the potential of re-routing packets via a smartphone to inspect traffic. As described in Section 3, we add three different NAT rules via iptables on the router to cause traffic to be sent via the phone. An example of traffic forwarding after applying the NAT rules is shown in Figure 1.
The NAT rules cause traffic to be sent to a specific port on the smartphone. Our Java program runs on the phone, binds to the specified port, and receives packets. It performs simplistic packet inspection on each packet and then sends the packet back to the router on a specific port. The router uses its NAT rewriting rules to send the packet on to the server. When the router receives a reply from the server, the traffic likewise traverses the phone for inspection before traveling to the client.
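The phone application itself is a Java program built in Android Studio; the sketch below is a Python rendering of the described proxy-and-blocklist logic, with hypothetical ports, addresses, and block list, omitting the Android service plumbing and showing only the client-to-server direction.

```python
import socket
import threading

LISTEN_PORT = 8888              # hypothetical port the NAT rules target
ROUTER = ("192.168.1.1", 8889)  # hypothetical return port on the router
BLOCKED = {"198.51.100.7"}      # hypothetical malicious IP block list

def handle(conn: socket.socket) -> None:
    """Inspect one redirected connection and relay it back to the router."""
    peer_ip, _ = conn.getpeername()
    with conn:
        # Simplistic inspection: refuse blocklisted peers. A real deployment
        # would recover the original destination address before checking.
        if peer_ip in BLOCKED:
            return
        with socket.create_connection(ROUTER) as upstream:
            while True:
                data = conn.recv(4096)
                if not data:
                    return
                upstream.sendall(data)  # router NAT-rewrites it to the server

server = socket.create_server(("", LISTEN_PORT))
while True:
    client_conn, _ = server.accept()
    threading.Thread(target=handle, args=(client_conn,), daemon=True).start()
```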
We use the same three metrics as in the baseline and on-router cases to explore the performance characteristics of this phone-based inspection approach. In addition, we consider the CPU usage of the phone application itself, since high usage may result in battery depletion on the phone and could prevent its practical deployment.
Using the same settings as in the two prior sections, we explore the throughput when traffic is directed through the Moto G Power smartphone. In the middle two lines of Figure 3, we see that the median download throughput is 94.80 Mbps and the median upload throughput is 70.10 Mbps, with tight distributions (standard deviations of 4.32 Mbps for download and 2.87 Mbps for upload). The throughput is substantially higher than with the on-router inspection approach in both Figure 3 and Figure 4. In effect, processing the NAT rules on the router incurs less computational overhead than the full process of inspecting the traffic. Since the router's CPU was the bottleneck in the on-router inspection scenario, this adjustment increases the amount of traffic the router can handle.
In Figure 5, we confirm that the NAT-based approach yields significantly lower CPU utilization than on-router inspection when throughput is limited to 10 Mbps. The middle line in that graph shows that the NAT approach has a median of 24.0% CPU utilization with a standard deviation of 2.61%.
The insertion of another device on the network path, through a forwarding loop, will necessarily increase each packet's propagation delay and may be observable in the overall end-to-end RTT. This is apparent in Figure 6, with the RTT of the NAT approach represented by the two right-most lines. We see patterns where 20% of traffic has an RTT of less than 30.44 ms, while 75% of traffic has an RTT over 120.17 ms. This is significantly higher than either the baseline scenario or on-router inspection. Importantly, this experiment uses a simple echo-server approach and does not tax the CPU of the router. The on-router scenario would incur greater RTT delays when the CPU is a bottleneck, due to processing delay.
In Figure 8, we explore the cause of the RTT delay in greater detail. We host a simple TCP echo program in three different ways. The left-most line represents the scenario in which the echo server runs on the server in the baseline configuration (i.e., the traffic traverses the router to the server, bypassing the phone). The middle line represents the case in which the echo server runs in an application within an Android emulator running on a laptop. The two rightmost lines represent the echo server running on two separate physical smartphones: a Moto G Power and a Pixel 2. While the first two scenarios have fairly tight distributions with RTTs less than 10 ms, the echo server on the Moto G Power has a latency around 20 ms for most traffic, with much longer delays for around 20% of traffic. Moreover, the echo server on the Pixel 2 has a latency of less than 50 ms for around half of the trials, but delays over 200 ms for the other half. In essence, the simple echo-server smartphone application sometimes incurs significant delay in sending or receiving traffic. While this occurs only around 20% of the time for the echo server on the Moto G Power, the proxy would incur two instances of this behavior per round trip, causing more traffic to incur a delay.
The distinct RTT behaviors exhibited by the two physical phones, which are not present in the Android emulator, may indicate some outside effect due to phone-specific factors. These could include power-saving modes, in which applications are periodically suspended or their work queued to reduce energy consumption.
Our last metric explores the energy usage of the proxy application on the phone. We again use the Moto G Power smartphone as a proxy while maximizing throughput from the client to the server. In this experiment, we also run a music-playing application on the phone, in the background, for comparison. We then record the CPU usage of the proxy application and the music application for 1,000 seconds using the top tool on the phone, and we monitor the idle percentage across the proxy device's 8 cores. Table 2 shows the CPU usage of the proxy application and the music application, along with the time for which the CPU cores are idle. In this table, 100% represents the full utilization of one core on the device and 800% represents the full utilization of all eight device cores. The first row in Table 2 represents the proxy application, which uses only about 21% of a single core (roughly a fifth of the music application's median CPU usage of 107%). We see that the majority of the device's computational resources are unused. As a result, we anticipate that the CPU-based energy consumption of the proxy would be a small fraction of that of a music application. Since phones are regularly used for music playing without significant power-related disruptions to end-users, it is likely that the proxy application would likewise be accommodated.
Figure 8: Comparison of RTTs when the echo server runs directly on the Ubuntu server, in the Android emulator, on the Moto G Power, and on the Pixel 2.
Table 2: CPU usage of the smartphone for different applications when maximizing throughput while applying on-phone inspection.
| Percentile of Trials | 10th | 50th | 90th |
| --- | --- | --- | --- |
| CPU Usage of Proxy App | 18% | 21% | 24% |
| CPU Usage of Music App | 98% | 107% | 114% |
| CPU Idle | 535% | 560% | 584% |
§ 6 CONCLUSION
The need for privacy and the limited computational resources in residential networks complicate traffic inspection and analysis. Residential routers' limited CPU resources make it difficult to deploy even straightforward IP address-based inspection tools without substantially limiting throughput through the router. However, with carefully-crafted NAT rules, a router can redirect communication through another device, such as a smartphone, to inspect traffic.
In our experiments, we find that NAT-based diversion through a smartphone can substantially raise the communication throughput under inspection, from around 10 Mbps to around 90 Mbps. The router can periodically examine its ARP and DHCP data structures to detect the availability of a phone in the LAN, contact an application on the phone to configure proxy services, and then divert traffic through the phone to enable outsourced inspection. With such an approach, residential networks can opportunistically use available smartphones as middleboxes to enable higher-throughput traffic inspection.
papers/JSYS/JSYS 2022/JSYS 2022 May_Papers/EFTlLmTzmVp/Initial_manuscript_md/Initial_manuscript.md
papers/JSYS/JSYS 2022/JSYS 2022 May_Papers/EFTlLmTzmVp/Initial_manuscript_tex/Initial_manuscript.tex
§ BYZANTINE CLUSTER-SENDING IN EXPECTED CONSTANT COST AND CONSTANT TIME
Anonymous authors
Paper under double-blind review
§ ABSTRACT
Traditional resilient systems operate on fully-replicated fault-tolerant clusters, which limits their scalability and performance. One way to make the step towards resilient high-performance systems that can deal with huge workloads is by enabling independent fault-tolerant clusters to efficiently communicate and cooperate with each other, as this also enables the usage of high-performance techniques such as sharding. Recently, such inter-cluster communication was formalized as the Byzantine cluster-sending problem. Unfortunately, existing worst-case optimal protocols for cluster-sending all have linear complexity in the size of the clusters involved.
In this paper, we propose probabilistic cluster-sending techniques as a solution for the cluster-sending problem with only an expected constant message complexity, independent of the size of the clusters involved, even in the presence of highly unreliable communication. Depending on the robustness of the clusters involved, our techniques require only two-to-four message round-trips (in the absence of communication failures). Furthermore, our protocols can support worst-case linear communication between clusters. Finally, we have put our techniques to the test in an in-depth experimental evaluation that further underlines the exceptionally low expected costs of our techniques in comparison with other protocols. As such, our work provides a strong foundation for the further development of resilient high-performance systems.
§ 1 INTRODUCTION
The promise of resilient data processing, as provided by private and public blockchains [14, 20, 26], has renewed interest in traditional consensus-based Byzantine fault-tolerant resilient systems [5, 6, 23]. Unfortunately, blockchains and other consensus-based systems typically rely on fully-replicated designs, which limits their scalability and performance. Consequently, these systems cannot deal with the ever-growing requirements in data processing [28, 29].
One way to improve on these limitations is by building complex system designs that consist of independently-operating resilient clusters that cooperate to provide certain services. To illustrate this, consider a sharded resilient design. In a traditional resilient system, resilience is provided by a fully-replicated consensus-based Byzantine fault-tolerant cluster in which all replicas hold all data and process all requests. This traditional design has only limited performance, even with the best consensus protocols, and lacks scalability. To improve on the design of traditional systems, one can employ the sharded design of Figure 1. In this sharded design, each cluster only holds part of the data. Consequently, each cluster only needs to process requests that affect data it holds. In this way, the sharded design improves performance by enabling parallel processing of requests by different clusters, while also improving storage scalability. To support requests that affect data in several clusters, however, the clusters need to be able to coordinate their operations [1, 7, 15, 18].
Central to such complex system designs is the ability to reliably and efficiently communicate between independently-operating resilient clusters. Recently, this problem of communication between Byzantine fault-tolerant clusters has been formalized as the cluster-sending problem [17]. We believe that efficient solutions to this problem play a central role in bridging resilient and high-performance data processing.
Figure 1: A sharded design in which each resilient cluster of four replicas holds only a part of the data. Local decisions within a cluster are made via consensus (↔), whereas multi-shard coordination to process multi-shard transactions requires cluster-sending (↔).
Although the cluster-sending problem has received some attention (e.g., as part of the design of AHL [7], BYSHARD [18], GEOBFT [15], and CHAINSPACE [1]), and protocols that solve it with worst-case optimal complexity are known [17], we believe there is still much room for improvement.
In this paper, we introduce a new solution to the cluster-sending problem: cluster-sending protocols that use probabilistic techniques to provide low expected-case message complexity (at the cost of higher communication latencies, a good trade-off in systems where inter-cluster network bandwidth is limited). Specifically, our main contributions are as follows:
Figure 2: A comparison of cluster-sending protocols that send a value from cluster $\mathcal{C}_1$ with $\mathbf{n}_{\mathcal{C}_1}$ replicas, of which $\mathbf{f}_{\mathcal{C}_1}$ are faulty, to cluster $\mathcal{C}_2$ with $\mathbf{n}_{\mathcal{C}_2}$ replicas, of which $\mathbf{f}_{\mathcal{C}_2}$ are faulty. For each protocol $P$, Protocol specifies its name; Robustness specifies the conditions $P$ puts on the clusters; Message Steps specifies the number of message exchanges $P$ performs; Optimal specifies whether $P$ is worst-case optimal; and Unreliable specifies whether $P$ can deal with unreliable communication.
| Protocol | Robustness^a | Message Steps (expected-case) | Message Steps (worst-case) | Optimal | Unreliable |
| --- | --- | --- | --- | --- | --- |
| PBS-cs [17] | $\min(\mathbf{n}_{\mathcal{C}_1}, \mathbf{n}_{\mathcal{C}_2}) > \mathbf{f}_{\mathcal{C}_1} + \mathbf{f}_{\mathcal{C}_2}$ | $\mathbf{f}_{\mathcal{C}_1} + \mathbf{f}_{\mathcal{C}_2} + 1$ | $\mathbf{f}_{\mathcal{C}_1} + \mathbf{f}_{\mathcal{C}_2} + 1$ | ✓ | ✘ |
| PBS-cs [17] | $\mathbf{n}_{\mathcal{C}_1} > 3\mathbf{f}_{\mathcal{C}_1}$, $\mathbf{n}_{\mathcal{C}_2} > 3\mathbf{f}_{\mathcal{C}_2}$ | $\max(\mathbf{n}_{\mathcal{C}_1}, \mathbf{n}_{\mathcal{C}_2})$ | $\max(\mathbf{n}_{\mathcal{C}_1}, \mathbf{n}_{\mathcal{C}_2})$ | ✓ | ✘ |
| GeoBFT [15] | $\mathbf{n}_{\mathcal{C}_1} = \mathbf{n}_{\mathcal{C}_2} > 3\max(\mathbf{f}_{\mathcal{C}_1}, \mathbf{f}_{\mathcal{C}_2})$ | $\mathbf{f}_{\mathcal{G}} + 1$^b | $\Omega(\mathbf{f}_{\mathcal{C}_1}\mathbf{n}_{\mathcal{C}_2})$ | ✘ | ✓ |
| Chainspace [1] | $\mathbf{n}_{\mathcal{C}_1} > 3\mathbf{f}_{\mathcal{C}_1}$, $\mathbf{n}_{\mathcal{C}_2} > 3\mathbf{f}_{\mathcal{C}_2}$ | $\mathbf{n}_{\mathcal{C}_1}\mathbf{n}_{\mathcal{C}_2}$ | $\mathbf{n}_{\mathcal{C}_1}\mathbf{n}_{\mathcal{C}_2}$ | ✘ | ✘ |
| CSPP (this paper) | $\mathbf{n}_{\mathcal{C}_1} > 2\mathbf{f}_{\mathcal{C}_1}$, $\mathbf{n}_{\mathcal{C}_2} > 2\mathbf{f}_{\mathcal{C}_2}$ | $4$ | $(\mathbf{f}_{\mathcal{C}_1} + 1)(\mathbf{f}_{\mathcal{C}_2} + 1)$ | ✘ | ✓ |
| CSPP (this paper) | $\mathbf{n}_{\mathcal{C}_1} > 3\mathbf{f}_{\mathcal{C}_1}$, $\mathbf{n}_{\mathcal{C}_2} > 3\mathbf{f}_{\mathcal{C}_2}$ | $2\frac{1}{4}$ | $(\mathbf{f}_{\mathcal{C}_1} + 1)(\mathbf{f}_{\mathcal{C}_2} + 1)$ | ✘ | ✓ |
| CSPL (this paper) | $\min(\mathbf{n}_{\mathcal{C}_1}, \mathbf{n}_{\mathcal{C}_2}) > \mathbf{f}_{\mathcal{C}_1} + \mathbf{f}_{\mathcal{C}_2}$ | $4$ | $\mathbf{f}_{\mathcal{C}_1} + \mathbf{f}_{\mathcal{C}_2} + 1$ | ✓ | ✓ |
| CSPL (this paper) | $\min(\mathbf{n}_{\mathcal{C}_1}, \mathbf{n}_{\mathcal{C}_2}) > 2(\mathbf{f}_{\mathcal{C}_1} + \mathbf{f}_{\mathcal{C}_2})$ | $2\frac{1}{4}$ | $\mathbf{f}_{\mathcal{C}_1} + \mathbf{f}_{\mathcal{C}_2} + 1$ | ✓ | ✓ |
| CSPL (this paper) | $\mathbf{n}_{\mathcal{C}_1} > 3\mathbf{f}_{\mathcal{C}_1}$, $\mathbf{n}_{\mathcal{C}_2} > 3\mathbf{f}_{\mathcal{C}_2}$ | $3$ | $\max(\mathbf{n}_{\mathcal{C}_1}, \mathbf{n}_{\mathcal{C}_2})$ | ✓ | ✓ |
^a Protocols that have different message step complexities depending on the robustness assumptions are included once per robustness assumption. ^b Complexity when the coordinating primary in $\mathcal{C}_1$ is non-faulty and communication is reliable.
1. First, in Section 3, we introduce the cluster-sending step CS-STEP that attempts to send a value from a replica in the sending cluster to a replica in the receiving cluster in a verifiable manner and with a constant amount of inter-cluster communication.
2. Then, in Section 4, we introduce the Synchronous Probabilistic Cluster-Sending protocol CSP, which uses CS-STEP with randomly selected sending and receiving replicas to provide cluster-sending in an expected constant number of steps. We also propose pruned CSP (CSPP), a fine-tuned version of CSP that guarantees termination.
3. In Section 5, we propose the Synchronous Probabilistic Linear Cluster-Sending protocol CSPL, which uses CS-STEP with a specialized randomized replica-selection scheme to provide cluster-sending in an expected constant number of steps and a worst-case linear number of steps, which is optimal.
4. Next, in Section 6, we discuss how CSP, CSPP, and CSPL can be generalized to operate in environments with asynchronous and unreliable communication.
5. Finally, in Section 7, we evaluate the behavior of the proposed probabilistic cluster-sending protocols via an in-depth evaluation. In this evaluation, we show that probabilistic cluster-sending protocols have exceptionally low communication costs in comparison with existing cluster-sending protocols, even in the presence of communication failures.
A summary of our findings in comparison with existing techniques can be found in Figure 2. In Section 2, we introduce the necessary terminology and notation; in Section 8, we compare with related work; and in Section 9, we conclude on our findings.
§ 2 THE CLUSTER-SENDING PROBLEM
Before we present our probabilistic cluster-sending techniques, we first introduce the necessary terminology and notation. The formal model we use is based on the formalization of the cluster-sending problem provided by Hellings et al. [17]. If $S$ is a set of replicas, then $\mathrm{f}(S) \subseteq S$ denotes the faulty replicas in $S$, whereas $\mathrm{nf}(S) = S \setminus \mathrm{f}(S)$ denotes the non-faulty replicas in $S$. We write $\mathbf{n}_S = |S|$, $\mathbf{f}_S = |\mathrm{f}(S)|$, and $\mathbf{nf}_S = |\mathrm{nf}(S)| = \mathbf{n}_S - \mathbf{f}_S$ to denote the number of replicas, faulty replicas, and non-faulty replicas in $S$, respectively. A cluster $\mathcal{C}$ is a finite set of replicas. We consider clusters with Byzantine replicas that can behave in arbitrary manners. Specifically, if $\mathcal{C}$ is a cluster, then a malicious adversary can control the replicas in $\mathrm{f}(\mathcal{C})$ at any time, but adversaries cannot bring non-faulty replicas under their control.
Definition 2.1. Let $\mathcal{C}_1, \mathcal{C}_2$ be disjoint clusters. The cluster-sending problem is the problem of sending a value $v$ from $\mathcal{C}_1$ to $\mathcal{C}_2$ such that (1) all non-faulty replicas in $\mathrm{nf}(\mathcal{C}_2)$ RECEIVE the value $v$; (2) all non-faulty replicas in $\mathrm{nf}(\mathcal{C}_1)$ CONFIRM that the value $v$ was received by all non-faulty replicas in $\mathrm{nf}(\mathcal{C}_2)$; and (3) non-faulty replicas in $\mathrm{nf}(\mathcal{C}_2)$ only receive a value $v$ if all non-faulty replicas in $\mathrm{nf}(\mathcal{C}_1)$ AGREE upon sending $v$.
We assume that there is no limitation on local communication within a cluster, while global communication between clusters is costly. This model is supported by practice, where communication between wide-area deployments of clusters is up to two orders of magnitude more expensive than communication within a cluster [7, 15].
We assume that each cluster can make local decisions among all non-faulty replicas, e.g., via a consensus protocol such as PBFT or PAXOS [6, 23]. Furthermore, we assume that the replicas in each cluster can certify such local decisions via a signature scheme. E.g., a cluster $\mathcal{C}$ can certify a consensus decision on some message $m$ by collecting a set of signatures for $m$ from $\mathbf{f}_{\mathcal{C}} + 1$ replicas in $\mathcal{C}$, guaranteeing that one such signature is from a non-faulty replica (which only signs values on which consensus is reached). We write $\langle m \rangle_{\mathcal{C}}$ to denote a message $m$ certified by $\mathcal{C}$. To minimize the size of certified messages, one can utilize a threshold signature scheme [30]. To enable decision making and message certification, we assume, for every cluster $\mathcal{C}$, $\mathbf{n}_{\mathcal{C}} > 2\mathbf{f}_{\mathcal{C}}$, a minimal requirement [9, 24]. Lastly, we assume that there is a common source of randomness for all non-faulty replicas of each cluster, e.g., via a distributed fault-tolerant random coin [3, 4].
§ 3 THE CLUSTER-SENDING STEP
If communication is reliable and one knows non-faulty replicas $\mathrm{R}_1 \in \mathrm{nf}(\mathcal{C}_1)$ and $\mathrm{R}_2 \in \mathrm{nf}(\mathcal{C}_2)$, then cluster-sending a value $v$ from $\mathcal{C}_1$ to $\mathcal{C}_2$ can be done via a straightforward cluster-sending step: one can simply instruct $\mathrm{R}_1$ to send $v$ to $\mathrm{R}_2$. When $\mathrm{R}_2$ receives $v$, it can disperse $v$ locally in $\mathcal{C}_2$. Unfortunately, we do not know which replicas are faulty and which are non-faulty. Furthermore, it is practically impossible to reliably determine which replicas are non-faulty, as non-faulty replicas can appear faulty due to unreliable communication, while faulty replicas can appear well-behaved to most replicas while interfering with the operations of only some non-faulty replicas.
To deal with faulty replicas when utilizing the above cluster-sending step, one needs sufficient safeguards to detect failure of $\mathrm{R}_1$, of $\mathrm{R}_2$, or of the communication between them. To do so, we add receive and confirmation phases to the sketched cluster-sending step. During the receive phase, the receiving replica $\mathrm{R}_2$ must construct a proof $P$ that it received and dispersed $v$ locally in $\mathcal{C}_2$ and then send this proof back to $\mathrm{R}_1$. Finally, during the confirmation phase, $\mathrm{R}_1$ can utilize $P$ to prove to all other replicas in $\mathcal{C}_1$ that the cluster-sending step was successful. The pseudo-code of this cluster-sending step protocol CS-STEP can be found in Figure 3. We have the following:
Proposition 3.1. Let $\mathcal{C}_1, \mathcal{C}_2$ be disjoint clusters with $\mathrm{R}_1 \in \mathcal{C}_1$ and $\mathrm{R}_2 \in \mathcal{C}_2$. If $\mathcal{C}_1$ satisfies the pre-conditions of CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$), then execution of CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$) satisfies the post-conditions and will exchange at most two messages between $\mathcal{C}_1$ and $\mathcal{C}_2$.
Proof. We prove the three post-conditions separately. (i)
Protocol CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$), with $\mathrm{R}_1 \in \mathcal{C}_1$ and $\mathrm{R}_2 \in \mathcal{C}_2$:

Pre: Each replica in $\mathrm{nf}(\mathcal{C}_1)$ decided AGREE on sending $v$ to $\mathcal{C}_2$ (and can construct $\langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1}$).

Post: (i) If communication is reliable, $\mathrm{R}_1 \in \mathrm{nf}(\mathcal{C}_1)$, and $\mathrm{R}_2 \in \mathrm{nf}(\mathcal{C}_2)$, then $\mathrm{R}_1$ decides CONFIRM on $v$. (ii) If a replica in $\mathrm{nf}(\mathcal{C}_2)$ decides RECEIVE on $v$, then all replicas in $\mathrm{nf}(\mathcal{C}_1)$ decided AGREE on sending $v$ to $\mathcal{C}_2$. (iii) If a replica in $\mathrm{nf}(\mathcal{C}_1)$ decides CONFIRM on $v$, then all replicas in $\mathrm{nf}(\mathcal{C}_2)$ decided RECEIVE on $v$ and all replicas in $\mathrm{nf}(\mathcal{C}_1)$ eventually decide CONFIRM on $v$ (whenever communication becomes reliable).

The cluster-sending step for $\mathrm{R}_1$ and $\mathrm{R}_2$:

1: Instruct $\mathrm{R}_1$ to send $\langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1}$ to $\mathrm{R}_2$.

The receive role for $\mathcal{C}_2$:

2: event $\mathrm{R}_2 \in \mathrm{nf}(\mathcal{C}_2)$ receives message $m := \langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1}$ from $\mathrm{R}_1 \in \mathcal{C}_1$ do

3: if $\mathrm{R}_2$ does not have consensus on $m$ then

4: Use local consensus on $m$ and construct $\langle \text{proof} : m \rangle_{\mathcal{C}_2}$.

5: {Each replica in $\mathrm{nf}(\mathcal{C}_2)$ decides RECEIVE on $v$.}

6: Send $\langle \text{proof} : m \rangle_{\mathcal{C}_2}$ to $\mathrm{R}_1$.

The confirmation role for $\mathcal{C}_1$:

7: event $\mathrm{R}_1 \in \mathrm{nf}(\mathcal{C}_1)$ receives message $m_p := \langle \text{proof} : m \rangle_{\mathcal{C}_2}$ with $m := \langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1}$ from $\mathrm{R}_2 \in \mathcal{C}_2$ do

8: if $\mathrm{R}_1$ does not have consensus on $m_p$ then

9: Use local consensus on $m_p$.

10: {Each replica in $\mathrm{nf}(\mathcal{C}_1)$ decides CONFIRM on $v$.}

Figure 3: The cluster-sending step protocol CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$). In this protocol, $\mathrm{R}_1$ tries to send $v$ to $\mathrm{R}_2$, which will succeed if both $\mathrm{R}_1$ and $\mathrm{R}_2$ are non-faulty.
We assume that communication is reliable, $\mathrm{R}_1 \in \mathrm{nf}(\mathcal{C}_1)$, and $\mathrm{R}_2 \in \mathrm{nf}(\mathcal{C}_2)$. Hence, $\mathrm{R}_1$ sends message $m := \langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1}$ to $\mathrm{R}_2$ (Line 1 of Figure 3). In the receive phase (Lines 2-6 of Figure 3), replica $\mathrm{R}_2$ receives message $m$ from $\mathrm{R}_1$. Replica $\mathrm{R}_2$ uses local consensus on $m$ to replicate $m$ among all replicas in $\mathcal{C}_2$ and, along the way, to construct a proof of receipt $m_p := \langle \text{proof} : m \rangle_{\mathcal{C}_2}$. As all replicas in $\mathrm{nf}(\mathcal{C}_2)$ participate in this local consensus, all replicas in $\mathrm{nf}(\mathcal{C}_2)$ will decide RECEIVE on $v$ from $\mathcal{C}_1$. Finally, the proof $m_p$ is returned to $\mathrm{R}_1$. In the confirmation phase (Lines 7-10 of Figure 3), replica $\mathrm{R}_1$ receives the proof of receipt $m_p$. Next, $\mathrm{R}_1$ uses local consensus on $m_p$ to replicate $m_p$ among all replicas in $\mathrm{nf}(\mathcal{C}_1)$, after which all replicas in $\mathrm{nf}(\mathcal{C}_1)$ decide CONFIRM on sending $v$ to $\mathcal{C}_2$.
(ii) A replica in $\mathrm{nf}(\mathcal{C}_2)$ only decides RECEIVE on $v$ after consensus is reached on a message $m := \langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1}$ (Line 5 of Figure 3). This message $m$ not only contains the value $v$, but also the identity of the recipient cluster $\mathcal{C}_2$. Due to the usage of certificates and the pre-condition, the message $m$ cannot be created without the replicas in $\mathrm{nf}(\mathcal{C}_1)$ deciding AGREE on sending $v$ to $\mathcal{C}_2$.
(iii) A replica in $\mathrm{nf}(\mathcal{C}_1)$ only decides CONFIRM on $v$ after consensus is reached on a proof of receipt message $m_p := \langle \text{proof} : m \rangle_{\mathcal{C}_2}$ (Line 10 of Figure 3). This consensus step will complete for all replicas in $\mathcal{C}_1$ whenever communication becomes reliable. Hence, all replicas in $\mathrm{nf}(\mathcal{C}_1)$ will eventually decide CONFIRM on $v$. Due to the usage of certificates, the message $m_p$ cannot be created without cooperation of the replicas in $\mathrm{nf}(\mathcal{C}_2)$. The replicas in $\mathrm{nf}(\mathcal{C}_2)$ only cooperate in constructing $m_p$ as part of the consensus step of Line 4 of Figure 3. Upon completion of this consensus step, all replicas in $\mathrm{nf}(\mathcal{C}_2)$ will decide RECEIVE on $v$.
In the following sections, we show how to use the cluster-sending step in the construction of cluster-sending protocols. In Section 4, we introduce synchronous protocols that provide expected constant message complexity. Then, in Section 5, we introduce synchronous protocols that additionally provide worst-case linear message complexity, which is optimal. Finally, in Section 6, we show how to extend the presented techniques to asynchronous communication.
§ 4 PROBABILISTIC CLUSTER-SENDING WITH RANDOM REPLICA SELECTION
In the previous section, we introduced CS-STEP, the cluster-sending step protocol that succeeds whenever the participating replicas are non-faulty and communication is reliable. Using CS-STEP, we build a three-step protocol that cluster-sends a value $v$ from $\mathcal{C}_1$ to $\mathcal{C}_2$:
1. First, the replicas in $\mathrm{nf}(\mathcal{C}_1)$ reach agreement and decide AGREE on sending $v$ to $\mathcal{C}_2$.
2. Then, the replicas in $\mathrm{nf}(\mathcal{C}_1)$ perform a probabilistic cluster-sending step by electing replicas $\mathrm{R}_1 \in \mathcal{C}_1$ and $\mathrm{R}_2 \in \mathcal{C}_2$ fully at random, after which CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$) is executed.
3. Finally, each replica in $\mathrm{nf}(\mathcal{C}_1)$ waits for the completion of CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$). If the waiting replicas decided CONFIRM on $v$ during this wait, then cluster-sending is successful. Otherwise, we repeat the previous step.
To simplify presentation, we assume synchronous inter-cluster communication to enable replicas to wait for completion: messages sent by non-faulty replicas will be delivered within some known bounded delay. We refer to Section 6 on how to deal with asynchronous and unreliable communication. Synchronous systems can be modeled by pulses [10, 11]:
Definition 4.1. A system is synchronous if all inter-cluster communication happens in pulses such that every message sent in a pulse will be received in the same pulse.
The pseudo-code of the resultant Synchronous Probabilistic Cluster-Sending protocol CSP can be found in Figure 4. Next, we prove that CSP is correct and has expected-case constant message complexity:
Protocol CSP($\mathcal{C}_1$, $\mathcal{C}_2$, $v$):

1: Use local consensus on $v$ and construct $\langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1}$.

2: {Each replica in $\mathrm{nf}(\mathcal{C}_1)$ decides AGREE on $v$.}

3: repeat

4: Choose replicas $(\mathrm{R}_1, \mathrm{R}_2) \in \mathcal{C}_1 \times \mathcal{C}_2$ fully at random.

5: CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$).

6: Wait three global pulses.

7: until $\mathcal{C}_1$ reaches consensus on $\langle \text{proof} : \langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1} \rangle_{\mathcal{C}_2}$.

Figure 4: The Synchronous Probabilistic Cluster-Sending protocol CSP($\mathcal{C}_1$, $\mathcal{C}_2$, $v$) that cluster-sends a value $v$ from $\mathcal{C}_1$ to $\mathcal{C}_2$.
Theorem 4.2. Let $\mathcal{C}_1, \mathcal{C}_2$ be disjoint clusters. If communication is synchronous, then CSP($\mathcal{C}_1$, $\mathcal{C}_2$, $v$) results in cluster-sending $v$ from $\mathcal{C}_1$ to $\mathcal{C}_2$. The execution performs two local consensus steps in $\mathcal{C}_1$, one local consensus step in $\mathcal{C}_2$, and is expected to make $(\mathbf{n}_{\mathcal{C}_1}\mathbf{n}_{\mathcal{C}_2})/(\mathbf{nf}_{\mathcal{C}_1}\mathbf{nf}_{\mathcal{C}_2})$ cluster-sending steps.
Proof. Due to Lines 1-2 of Figure 4, CSP($\mathcal{C}_1$, $\mathcal{C}_2$, $v$) establishes the pre-conditions for any execution of CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$) with $\mathrm{R}_1 \in \mathcal{C}_1$ and $\mathrm{R}_2 \in \mathcal{C}_2$. Using the correctness of CS-STEP (Proposition 3.1), we conclude that CSP($\mathcal{C}_1$, $\mathcal{C}_2$, $v$) results in cluster-sending $v$ from $\mathcal{C}_1$ to $\mathcal{C}_2$ whenever the replicas $(\mathrm{R}_1, \mathrm{R}_2) \in \mathcal{C}_1 \times \mathcal{C}_2$ chosen at Line 4 of Figure 4 are non-faulty. As the replicas $(\mathrm{R}_1, \mathrm{R}_2) \in \mathcal{C}_1 \times \mathcal{C}_2$ are chosen fully at random, we have probability $p_i = \mathbf{nf}_{\mathcal{C}_i}/\mathbf{n}_{\mathcal{C}_i}$, $i \in \{1, 2\}$, of choosing $\mathrm{R}_i \in \mathrm{nf}(\mathcal{C}_i)$. The probabilities $p_1$ and $p_2$ are independent of each other. Consequently, the probability of choosing $(\mathrm{R}_1, \mathrm{R}_2) \in \mathrm{nf}(\mathcal{C}_1) \times \mathrm{nf}(\mathcal{C}_2)$ is $p = p_1 p_2 = (\mathbf{nf}_{\mathcal{C}_1}\mathbf{nf}_{\mathcal{C}_2})/(\mathbf{n}_{\mathcal{C}_1}\mathbf{n}_{\mathcal{C}_2})$. As such, each iteration of the loop at Line 3 of Figure 4 can be modeled as an independent Bernoulli trial with probability of success $p$, and the expected number of iterations of the loop is $p^{-1} = (\mathbf{n}_{\mathcal{C}_1}\mathbf{n}_{\mathcal{C}_2})/(\mathbf{nf}_{\mathcal{C}_1}\mathbf{nf}_{\mathcal{C}_2})$.
Finally, we prove that each local consensus step needs to be performed only once. To do so, we consider the local consensus steps triggered by the loop at Line 3 of Figure 4. These are the local consensus steps at Lines 4 and 9 of Figure 3. The local consensus step at Line 4 can be initiated by a faulty replica $\mathrm{R}_2$. After this single local consensus step reaches consensus on message $m := \langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1}$, each replica in $\mathrm{nf}(\mathcal{C}_2)$ reaches consensus on $m$, decides RECEIVE on $v$, and can construct $m_p := \langle \text{proof} : m \rangle_{\mathcal{C}_2}$, independent of the behavior of $\mathrm{R}_2$. Hence, a single local consensus step for $m$ in $\mathcal{C}_2$ suffices, and no replica in $\mathrm{nf}(\mathcal{C}_2)$ will participate in future consensus steps for $m$. An analogous argument proves that a single local consensus step for $m_p$ in $\mathcal{C}_1$, performed at Line 9 of Figure 3, suffices.
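The Bernoulli-trial argument above is easy to check numerically; the following sketch (our illustration, not part of the paper) simulates the random pair selection of Line 4 and compares the observed number of iterations against $p^{-1} = (\mathbf{n}_{\mathcal{C}_1}\mathbf{n}_{\mathcal{C}_2})/(\mathbf{nf}_{\mathcal{C}_1}\mathbf{nf}_{\mathcal{C}_2})$.

```python
import random

def expected_steps(n1: int, f1: int, n2: int, f2: int,
                   runs: int = 100_000) -> float:
    """Average number of CS-STEP attempts until both picks are non-faulty.

    Replicas 0..f-1 of each cluster are taken to be the faulty ones; a
    cluster-sending step succeeds iff both chosen replicas are non-faulty.
    """
    total = 0
    for _ in range(runs):
        steps = 1
        while random.randrange(n1) < f1 or random.randrange(n2) < f2:
            steps += 1
        total += steps
    return total / runs

# Example: n > 3f in both clusters (n = 4, f = 1).
print("simulated:", expected_steps(4, 1, 4, 1))
print("analytic:", (4 * 4) / (3 * 3))  # p^{-1} = 16/9 ≈ 1.78, below 2¼
```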
Remark 4.3. Although Theorem 4.2 indicates local consensus steps in clusters $\mathcal{C}_1$ and $\mathcal{C}_2$, these local consensus steps typically come for free as part of the protocol that uses cluster-sending as a building block. To see this, we consider a multi-shard transaction processed by clusters $\mathcal{C}_1$ and $\mathcal{C}_2$.
The decision of cluster $\mathcal{C}_1$ to send a value $v$ to cluster $\mathcal{C}_2$ is a consequence of the execution of some transaction $\tau$ in $\mathcal{C}_1$. Before the replicas in $\mathcal{C}_1$ execute $\tau$, they need to reach consensus on the order in which $\tau$ is executed in $\mathcal{C}_1$. As part of this consensus step, the replicas in $\mathcal{C}_1$ can also construct $\langle \text{send} : v, \mathcal{C}_2 \rangle_{\mathcal{C}_1}$ without additional consensus steps. Hence, no separate consensus step is necessary in $\mathcal{C}_1$ to send value $v$. Likewise, if value $v$ is received by replicas in $\mathcal{C}_2$ as part of some multi-shard transaction execution protocol, then the replicas in $\mathcal{C}_2$ need to perform the necessary transaction execution steps as a consequence of receiving $v$. To do so, the replicas in $\mathcal{C}_2$ need to reach consensus on the order in which these transaction execution steps are performed. As part of this consensus step, the replicas in $\mathcal{C}_2$ can also construct a proof of receipt for $v$.
In typical fault-tolerant clusters, at least half of the replicas are non-faulty (e.g., in synchronous systems with Byzantine failures that use digital signatures, or in systems that only deal with crashes) or at least two-thirds of the replicas are non-faulty (e.g., in asynchronous systems). In these systems, CSP is expected to perform only a few cluster-sending steps:
Corollary 4.4. Let $\mathcal{C}_1, \mathcal{C}_2$ be disjoint clusters. If communication is synchronous, then the expected number of cluster-sending steps performed by CSP($\mathcal{C}_1$, $\mathcal{C}_2$, $v$) is upper bounded by $4$ if $\mathbf{n}_{\mathcal{C}_1} > 2\mathbf{f}_{\mathcal{C}_1}$ and $\mathbf{n}_{\mathcal{C}_2} > 2\mathbf{f}_{\mathcal{C}_2}$; and by $2\frac{1}{4}$ if $\mathbf{n}_{\mathcal{C}_1} > 3\mathbf{f}_{\mathcal{C}_1}$ and $\mathbf{n}_{\mathcal{C}_2} > 3\mathbf{f}_{\mathcal{C}_2}$.
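These constants follow directly from the expected-iteration count of Theorem 4.2; the short derivation below makes the step explicit.

```latex
% Bounds of Corollary 4.4 from p^{-1} of Theorem 4.2:
% n_C > 2 f_C implies nf_C / n_C > 1/2, so n_C / nf_C < 2 per cluster;
% n_C > 3 f_C implies nf_C / n_C > 2/3, so n_C / nf_C < 3/2 per cluster.
\[
  \frac{\mathbf{n}_{\mathcal{C}_1}\mathbf{n}_{\mathcal{C}_2}}
       {\mathbf{nf}_{\mathcal{C}_1}\mathbf{nf}_{\mathcal{C}_2}}
  < \begin{cases}
      2 \cdot 2 = 4
        & \text{if } \mathbf{n}_{\mathcal{C}_i} > 2\mathbf{f}_{\mathcal{C}_i},\; i \in \{1,2\};\\[2pt]
      \tfrac{3}{2} \cdot \tfrac{3}{2} = \tfrac{9}{4} = 2\tfrac{1}{4}
        & \text{if } \mathbf{n}_{\mathcal{C}_i} > 3\mathbf{f}_{\mathcal{C}_i},\; i \in \{1,2\}.
    \end{cases}
\]
```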
In CSP, the replicas $(\mathrm{R}_1, \mathrm{R}_2) \in \mathcal{C}_1 \times \mathcal{C}_2$ are chosen fully at random and with replacement, as CSP does not retain any information on failed probabilistic steps. In the worst case, this prevents termination, as the same pair of replicas can be picked repeatedly. Furthermore, CSP does not prevent the choice of faulty replicas whose failure could already have been detected. We can easily improve on this, as the failure of a probabilistic step provides some information on the chosen replicas. Specifically, we have the following technical properties:
Lemma 4.1. Let $\mathcal{C}_1, \mathcal{C}_2$ be disjoint clusters. We assume synchronous communication and assume that each replica in $\mathrm{nf}(\mathcal{C}_1)$ decided AGREE on sending $v$ to $\mathcal{C}_2$.
1. Let $(\mathrm{R}_1, \mathrm{R}_2) \in \mathcal{C}_1 \times \mathcal{C}_2$. If CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$) fails to cluster-send $v$, then either $\mathrm{R}_1 \in \mathrm{f}(\mathcal{C}_1)$, $\mathrm{R}_2 \in \mathrm{f}(\mathcal{C}_2)$, or both.
2. Let $\mathrm{R}_1 \in \mathcal{C}_1$. If CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$) fails to cluster-send $v$ for $\mathbf{f}_{\mathcal{C}_2} + 1$ distinct replicas $\mathrm{R}_2 \in \mathcal{C}_2$, then $\mathrm{R}_1 \in \mathrm{f}(\mathcal{C}_1)$.
3. Let $\mathrm{R}_2 \in \mathcal{C}_2$. If CS-STEP($\mathrm{R}_1$, $\mathrm{R}_2$, $v$) fails to cluster-send $v$ for $\mathbf{f}_{\mathcal{C}_1} + 1$ distinct replicas $\mathrm{R}_1 \in \mathcal{C}_1$, then $\mathrm{R}_2 \in \mathrm{f}(\mathcal{C}_2)$.
Proof. The statement of this Lemma assumes that the preconditions for any execution of CS-STEP $\left( {{\mathrm{R}}_{1},{\mathrm{R}}_{2},v}\right)$ with ${\mathrm{R}}_{1} \in$ ${\mathcal{C}}_{1}$ and ${\mathrm{R}}_{2} \in {\mathcal{C}}_{2}$ are established. Hence, by Proposition 3.1, CS-STEP $\left( {{\mathrm{R}}_{1},{\mathrm{R}}_{2},v}\right)$ will cluster-send $v$ if ${\mathrm{R}}_{1} \in \operatorname{nf}\left( {\mathcal{C}}_{1}\right)$ and ${\mathrm{R}}_{2} \in \operatorname{nf}\left( {\mathcal{C}}_{2}\right)$ . If the cluster-sending step fails to cluster-send $v$ , then one of the replicas involved must be faulty, proving the first property. Next, let ${\mathrm{R}}_{1} \in {\mathcal{C}}_{1}$ and consider a set $S \subseteq {\mathcal{C}}_{2}$ of ${\mathbf{n}}_{S} = {\mathbf{f}}_{{C}_{2}} + 1$ replicas such that, for all ${\mathrm{R}}_{2} \in S$ , CS-STEP $\left( {\mathrm{R}}_{1}\right.$ , ${\mathrm{R}}_{2},v)$ fails to cluster-send $v$ . Let ${S}^{\prime } = S \smallsetminus \mathrm{f}\left( {\mathcal{C}}_{2}\right)$ be the nonfaulty replicas in $S$ . As ${\mathbf{n}}_{S} > {\mathbf{f}}_{{\mathcal{C}}_{2}}$ , we have ${\mathbf{n}}_{{S}^{\prime }} \geq 1$ and there exists a ${\mathrm{R}}_{2}^{\prime } \in {S}^{\prime }$ . As ${\mathrm{R}}_{2}^{\prime } \notin \mathrm{f}\left( {\mathcal{C}}_{2}\right)$ and $\operatorname{CS-STEP}\left( {{\mathrm{R}}_{1},{\mathrm{R}}_{2}^{\prime },v}\right)$ fails to cluster-send $v$ , we must have ${\mathrm{R}}_{1} \in \mathrm{f}\left( {\mathcal{C}}_{1}\right)$ by the first property, proving the second property. An analogous argument proves the third property.
We can apply the properties of Lemma 4.1 to actively prune which replica pairs CSP considers (Line 4 of Figure 4). Notice that pruning via Lemma 4.1(1) simply replaces choosing replica pairs with replacement, as done by CSP, by choosing replica pairs without replacement, without further reducing the search space. Pruning via Lemma 4.1(2) does reduce the search space, however, as each replica in ${\mathcal{C}}_{1}$ will only be paired with a subset of ${\mathbf{f}}_{{\mathcal{C}}_{2}} + 1$ replicas in ${\mathcal{C}}_{2}$. Likewise, pruning via Lemma 4.1(3) also reduces the search space. We obtain the Pruned Synchronous Probabilistic Cluster-Sending protocol (CSPP) by applying all three prune steps to CSP. By construction, Theorem 4.2, and Lemma 4.1, we conclude:
Corollary 4.5. Let ${\mathcal{C}}_{1},{\mathcal{C}}_{2}$ be disjoint clusters. If communication is synchronous, then $\operatorname{CSPP}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v}\right)$ results in cluster-sending $v$ from ${\mathcal{C}}_{1}$ to ${\mathcal{C}}_{2}$. The execution performs two local consensus steps in ${\mathcal{C}}_{1}$, one local consensus step in ${\mathcal{C}}_{2}$, is expected to make less than $\left( {{\mathbf{n}}_{{\mathcal{C}}_{1}}{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right) /\left( {{\mathbf{{nf}}}_{{\mathcal{C}}_{1}}{\mathbf{{nf}}}_{{\mathcal{C}}_{2}}}\right)$ cluster-sending steps, and makes worst-case $\left( {{\mathbf{f}}_{{\mathcal{C}}_{1}} + 1}\right) \left( {{\mathbf{f}}_{{\mathcal{C}}_{2}} + 1}\right)$ cluster-sending steps.
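To make the pruning concrete, the following minimal Python sketch (all names hypothetical; `cs_step` stands in for an actual execution of CS-STEP) shows how CSPP can track failed pairs and apply all three properties of Lemma 4.1:

```python
import random

def cspp(c1, c2, f1, f2, cs_step):
    """Sketch of CSPP: repeatedly pick untried replica pairs at random,
    pruning via Lemma 4.1 until a cluster-sending step succeeds.
    `cs_step(r1, r2)` is assumed to return True iff CS-STEP succeeds."""
    tried = set()                      # Lemma 4.1(1): never retry a pair
    fails1 = {r: 0 for r in c1}        # observed failures per sender
    fails2 = {r: 0 for r in c2}        # observed failures per receiver
    faulty1, faulty2 = set(), set()    # replicas proven faulty

    while True:
        candidates = [(r1, r2) for r1 in c1 for r2 in c2
                      if (r1, r2) not in tried
                      and r1 not in faulty1 and r2 not in faulty2]
        r1, r2 = random.choice(candidates)
        if cs_step(r1, r2):
            return (r1, r2)            # value was cluster-sent
        tried.add((r1, r2))
        fails1[r1] += 1
        fails2[r2] += 1
        if fails1[r1] >= f2 + 1:       # Lemma 4.1(2): r1 must be faulty
            faulty1.add(r1)
        if fails2[r2] >= f1 + 1:       # Lemma 4.1(3): r2 must be faulty
            faulty2.add(r2)
```

The worst-case bound of Corollary 4.5 is visible in this sketch: once a replica accumulates sufficiently many failures, it is excluded from all future pairs.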
§ 5 WORST-CASE LINEAR-TIME PROBABILISTIC CLUSTER-SENDING
In the previous section, we introduced CSP and CSPP, two probabilistic cluster-sending protocols that can cluster-send a value $v$ from ${\mathcal{C}}_{1}$ to ${\mathcal{C}}_{2}$ with expected constant cost. Unfortunately, CSP does not guarantee termination, while CSPP has a worst-case quadratic complexity. To improve on this, we need to improve the scheme by which we select the replica pairs $\left( {{\mathrm{R}}_{1},{\mathrm{R}}_{2}}\right) \in {\mathcal{C}}_{1} \times {\mathcal{C}}_{2}$ that we use in cluster-sending steps. The straightforward manner to guarantee a worst-case linear complexity is by using a scheme that can select at most $n = \max \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right)$ distinct pairs $\left( {{\mathrm{R}}_{1},{\mathrm{R}}_{2}}\right) \in {\mathcal{C}}_{1} \times {\mathcal{C}}_{2}$. To select $n$ replica pairs from ${\mathcal{C}}_{1} \times {\mathcal{C}}_{2}$, we proceed in two steps.

1. We generate a list ${S}_{1}$ of $n$ replicas taken from ${\mathcal{C}}_{1}$ and a list ${S}_{2}$ of $n$ replicas taken from ${\mathcal{C}}_{2}$.

2. Then, we choose permutations ${P}_{1} \in \operatorname{perms}\left( {S}_{1}\right)$ and ${P}_{2} \in \operatorname{perms}\left( {S}_{2}\right)$ fully at random, and interpret each pair $\left( {{P}_{1}\left\lbrack i\right\rbrack ,{P}_{2}\left\lbrack i\right\rbrack }\right)$, $0 \leq i < n$, as one of the chosen replica pairs.

We use the first step to deal with any differences in the sizes of ${\mathcal{C}}_{1}$ and ${\mathcal{C}}_{2}$, and we use the second step to introduce sufficient randomness in our protocol to yield a low expected-case message complexity.

Next, we introduce some notation to simplify reasoning about the above list-based scheme. If $R$ is a set of replicas, then $\operatorname{list}\left( R\right)$ is the list consisting of the replicas in $R$ placed in a predetermined order (e.g., on increasing replica identifier). If $S$ is a list of replicas, then we write $\mathrm{f}\left( S\right)$ to denote the faulty replicas in $S$ and $\operatorname{nf}\left( S\right)$ to denote the non-faulty replicas in $S$, and we write ${\mathbf{n}}_{S} = \left| S\right|$, ${\mathbf{f}}_{S} = \left| \left\{ {i \mid \left( {0 \leq i < {\mathbf{n}}_{S}}\right) \land S\left\lbrack i\right\rbrack \in \mathrm{f}\left( S\right) }\right\} \right|$, and ${\mathbf{{nf}}}_{S} = {\mathbf{n}}_{S} - {\mathbf{f}}_{S}$ to denote the number of positions in $S$ with replicas, faulty replicas, and non-faulty replicas, respectively. If $\left( {{P}_{1},{P}_{2}}\right)$ is a pair of equal-length lists of $n = \left| {P}_{1}\right| = \left| {P}_{2}\right|$ replicas, then we say that the $i$-th position is a faulty position if either ${P}_{1}\left\lbrack i\right\rbrack \in \mathrm{f}\left( {P}_{1}\right)$ or ${P}_{2}\left\lbrack i\right\rbrack \in \mathrm{f}\left( {P}_{2}\right)$. We write ${\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}}$ to denote the number of faulty positions in $\left( {{P}_{1},{P}_{2}}\right)$. As faulty positions can only be constructed out of the ${\mathbf{f}}_{{P}_{1}}$ faulty replicas in ${P}_{1}$ and the ${\mathbf{f}}_{{P}_{2}}$ faulty replicas in ${P}_{2}$, we must have $\max \left( {{\mathbf{f}}_{{P}_{1}},{\mathbf{f}}_{{P}_{2}}}\right) \leq {\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} \leq \min \left( {n,{\mathbf{f}}_{{P}_{1}} + {\mathbf{f}}_{{P}_{2}}}\right)$.
Example 5.1. Consider clusters ${\mathcal{C}}_{1},{\mathcal{C}}_{2}$ with

$$
{S}_{1} = \operatorname{list}\left( {\mathcal{C}}_{1}\right) = \left\lbrack {{\mathrm{R}}_{1,1},\ldots ,{\mathrm{R}}_{1,5}}\right\rbrack ,\;\mathrm{f}\left( {\mathcal{C}}_{1}\right) = \left\{ {{\mathrm{R}}_{1,1},{\mathrm{R}}_{1,2}}\right\} ;
$$

$$
{S}_{2} = \operatorname{list}\left( {\mathcal{C}}_{2}\right) = \left\lbrack {{\mathrm{R}}_{2,1},\ldots ,{\mathrm{R}}_{2,5}}\right\rbrack ,\;\mathrm{f}\left( {\mathcal{C}}_{2}\right) = \left\{ {{\mathrm{R}}_{2,1},{\mathrm{R}}_{2,2}}\right\} .
$$

The set $\operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ contains $5{!}^{2} = {14400}$ list pairs. Now, consider the list pairs $\left( {{P}_{1},{P}_{2}}\right) ,\left( {{Q}_{1},{Q}_{2}}\right) ,\left( {{R}_{1},{R}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ with

$$
{P}_{1} = \left\lbrack {\underline{{\mathrm{R}}_{1,1}},{\mathrm{R}}_{1,5},\underline{{\mathrm{R}}_{1,2}},{\mathrm{R}}_{1,4},{\mathrm{R}}_{1,3}}\right\rbrack ,\;{P}_{2} = \left\lbrack {\underline{{\mathrm{R}}_{2,1}},{\mathrm{R}}_{2,3},\underline{{\mathrm{R}}_{2,2}},{\mathrm{R}}_{2,5},{\mathrm{R}}_{2,4}}\right\rbrack ;
$$

$$
{Q}_{1} = \left\lbrack {\underline{{\mathrm{R}}_{1,1}},{\mathrm{R}}_{1,3},{\mathrm{R}}_{1,5},{\mathrm{R}}_{1,4},\underline{{\mathrm{R}}_{1,2}}}\right\rbrack ,\;{Q}_{2} = \left\lbrack {{\mathrm{R}}_{2,5},{\mathrm{R}}_{2,4},{\mathrm{R}}_{2,3},\underline{{\mathrm{R}}_{2,2}},\underline{{\mathrm{R}}_{2,1}}}\right\rbrack ;
$$

$$
{R}_{1} = \left\lbrack {{\mathrm{R}}_{1,5},{\mathrm{R}}_{1,4},{\mathrm{R}}_{1,3},\underline{{\mathrm{R}}_{1,2}},\underline{{\mathrm{R}}_{1,1}}}\right\rbrack ,\;{R}_{2} = \left\lbrack {\underline{{\mathrm{R}}_{2,1}},\underline{{\mathrm{R}}_{2,2}},{\mathrm{R}}_{2,3},{\mathrm{R}}_{2,4},{\mathrm{R}}_{2,5}}\right\rbrack .
$$

We have underlined the faulty replicas in each list, and we have ${\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} = 2 = {\mathbf{f}}_{{S}_{1}} = {\mathbf{f}}_{{S}_{2}}$, ${\begin{Vmatrix}{Q}_{1};{Q}_{2}\end{Vmatrix}}_{\mathbf{f}} = 3$, and ${\begin{Vmatrix}{R}_{1};{R}_{2}\end{Vmatrix}}_{\mathbf{f}} = 4 = {\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}}$.
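The faulty-position count ${\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}}$ is straightforward to compute. The following Python snippet (a minimal sketch, with replicas encoded as plain strings) reproduces the three counts of Example 5.1:

```python
def faulty_positions(p1, p2, f1, f2):
    """Number of faulty positions ||P1; P2||_f: positions i at which
    P1[i] or P2[i] is faulty (f1, f2 are the faulty-replica sets)."""
    return sum(1 for a, b in zip(p1, p2) if a in f1 or b in f2)

f1 = {"R1,1", "R1,2"}
f2 = {"R2,1", "R2,2"}
P1 = ["R1,1", "R1,5", "R1,2", "R1,4", "R1,3"]
P2 = ["R2,1", "R2,3", "R2,2", "R2,5", "R2,4"]
Q1 = ["R1,1", "R1,3", "R1,5", "R1,4", "R1,2"]
Q2 = ["R2,5", "R2,4", "R2,3", "R2,2", "R2,1"]
R1 = ["R1,5", "R1,4", "R1,3", "R1,2", "R1,1"]
R2 = ["R2,1", "R2,2", "R2,3", "R2,4", "R2,5"]

assert faulty_positions(P1, P2, f1, f2) == 2
assert faulty_positions(Q1, Q2, f1, f2) == 3
assert faulty_positions(R1, R2, f1, f2) == 4
```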
In the following, we will use a list-pair function $\Phi$ to compute the initial list-pair $\left( {{S}_{1},{S}_{2}}\right)$ of $n$ replicas taken from ${\mathcal{C}}_{1}$ and ${\mathcal{C}}_{2}$ , respectively. We build a cluster-sending protocol that uses $\Phi$ to compute ${S}_{1}$ and ${S}_{2}$ , uses randomization to choose $n$ replica pairs from ${S}_{1} \times {S}_{2}$ , and, finally, performs cluster-sending steps using only these $n$ replica pairs. The pseudo-code of the resultant Synchronous Probabilistic Linear Cluster-Sending protocol CSPL can be found in Figure 5. Next, we prove that CSPL is correct and has a worst-case linear message complexity:
Proposition 5.1. Let ${\mathcal{C}}_{1},{\mathcal{C}}_{2}$ be disjoint clusters and let $\Phi$ be a list-pair function with $\left( {{S}_{1},{S}_{2}}\right) \mathrel{\text{ := }} \Phi \left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right)$ and $n = {\mathbf{n}}_{{S}_{1}} =$ ${\mathbf{n}}_{{S}_{2}}$ . If communication is synchronous and $n > {\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}}$ , then $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,\Phi }\right)$ results in cluster-sending $v$ from ${\mathcal{C}}_{1}$ to ${\mathcal{C}}_{2}$ .
Protocol $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,\Phi }\right)$:

1: Use local consensus on $v$ and construct ${\left\langle \text{ send : }v,{\mathcal{C}}_{2}\right\rangle }_{{\mathcal{C}}_{1}}$.

2: {Each replica in $\operatorname{nf}\left( {\mathcal{C}}_{1}\right)$ decides AGREE on $v$.}

3: Let $\left( {{S}_{1},{S}_{2}}\right) \mathrel{\text{ := }} \Phi \left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right)$.

4: Choose $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ fully at random.

5: $i \mathrel{\text{ := }} 0$.

6: repeat

7: CS-STEP $\left( {{P}_{1}\left\lbrack i\right\rbrack ,{P}_{2}\left\lbrack i\right\rbrack ,v}\right)$.

8: Wait three global pulses.

9: $i \mathrel{\text{ := }} i + 1$.

10: until ${\mathcal{C}}_{1}$ reaches consensus on ${\left\langle \text{ proof : }\langle \text{ send : }v,{\mathcal{C}}_{2}{\rangle }_{{\mathcal{C}}_{1}}\right\rangle }_{{\mathcal{C}}_{2}}$.
Figure 5: The Synchronous Probabilistic Linear Cluster-Sending protocol $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,\Phi }\right)$ that cluster-sends a value $v$ from ${\mathcal{C}}_{1}$ to ${\mathcal{C}}_{2}$ using list-pair function $\Phi$ .
The execution performs two local consensus steps in ${\mathcal{C}}_{1}$ , one local consensus step in ${\mathcal{C}}_{2}$ , and makes worst-case ${\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}} + 1$ cluster-sending steps.
Proof. Due to Lines 1-2 of Figure 5, $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,\Phi }\right)$ establishes the preconditions for any execution of CS-STEP $\left( {{\mathrm{R}}_{1},{\mathrm{R}}_{2},v}\right)$ with ${\mathrm{R}}_{1} \in {\mathcal{C}}_{1}$ and ${\mathrm{R}}_{2} \in {\mathcal{C}}_{2}$. Now let $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$, as chosen at Line 4 of Figure 5. As ${P}_{i}$, $i \in \{ 1,2\}$, is a permutation of ${S}_{i}$, we have ${\mathbf{f}}_{{P}_{i}} = {\mathbf{f}}_{{S}_{i}}$. Hence, we have ${\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} \leq {\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}}$ and, as $n > {\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}}$, there must exist a position $j$, $0 \leq j < n$, such that $\left( {{P}_{1}\left\lbrack j\right\rbrack ,{P}_{2}\left\lbrack j\right\rbrack }\right) \in \operatorname{nf}\left( {\mathcal{C}}_{1}\right) \times \operatorname{nf}\left( {\mathcal{C}}_{2}\right)$. Using the correctness of CS-STEP (Proposition 3.1), we conclude that $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,\Phi }\right)$ results in cluster-sending $v$ from ${\mathcal{C}}_{1}$ to ${\mathcal{C}}_{2}$ in at most ${\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}} + 1$ cluster-sending steps. Finally, the bounds on the number of consensus steps follow from an argument analogous to the one in the proof of Theorem 4.2.
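The structure of CSPL maps onto a small amount of code. The following Python sketch uses a hypothetical interface: `phi` computes the list pair and `cs_step` stands in for CS-STEP together with the consensus check on the proof of receipt:

```python
import random

def cspl(c1, c2, phi, cs_step):
    """Sketch of CSPL (Figure 5). `cs_step(r1, r2)` is assumed to
    return True iff the cluster-sending step succeeds, i.e., C1
    reaches consensus on the proof of receipt."""
    s1, s2 = phi(c1, c2)
    p1 = random.sample(s1, len(s1))   # a permutation chosen at random
    p2 = random.sample(s2, len(s2))
    for r1, r2 in zip(p1, p2):        # inspect positions 0, 1, ..., n-1
        if cs_step(r1, r2):
            return (r1, r2)
    # Unreachable when n > f_S1 + f_S2 (Proposition 5.1).
    raise AssertionError("no non-faulty position found")
```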
Next, we proceed in two steps to arrive at practical instances of CSPL with expected constant message complexity. First, in Section 5.1, we study the probabilistic nature of CSPL. Then, in Section 5.2, we propose practical list-pair functions and show that these functions yield instances of CSPL with expected constant message complexity.
§ 5.1 THE EXPECTED-CASE COMPLEXITY OF CSPL
As the first step to determine the expected-case complexity of CSPL, we solve the following abstract problem that captures the probabilistic argument at the core of the expected-case complexity of CSPL:
Problem 5.2 (non-faulty position trials). Let ${S}_{1}$ and ${S}_{2}$ be lists of $\left| {S}_{1}\right| = \left| {S}_{2}\right| = n$ replicas. Choose permutations $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ fully at random. Next, we inspect positions in ${P}_{1}$ and ${P}_{2}$ fully at random (with replacement). The non-faulty position trials problem asks how many positions one expects to inspect to find the first non-faulty position.
Let ${S}_{1}$ and ${S}_{2}$ be lists of $\left| {S}_{1}\right| = \left| {S}_{2}\right| = n$ replicas. To answer the non-faulty position trials problem, we first look at the combinatorics of faulty positions in pairs $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$. Let ${m}_{1} = {\mathbf{f}}_{{S}_{1}}$ and ${m}_{2} = {\mathbf{f}}_{{S}_{2}}$. By $\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right)$, we denote the number of distinct pairs $\left( {{P}_{1},{P}_{2}}\right)$ one can construct that have exactly $k$ faulty positions, hence, with ${\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} = k$. As observed, we have $\max \left( {{m}_{1},{m}_{2}}\right) \leq {\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} \leq \min \left( {n,{m}_{1} + {m}_{2}}\right)$ for any pair $\left( {{P}_{1},{P}_{2}}\right)$. Hence, we have $\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right) = 0$ for all $k < \max \left( {{m}_{1},{m}_{2}}\right)$ and $k > \min \left( {n,{m}_{1} + {m}_{2}}\right)$.

Now consider the step-wise construction of any pair $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ with $k$ faulty positions. First, we choose $\left( {{P}_{1}\left\lbrack 0\right\rbrack ,{P}_{2}\left\lbrack 0\right\rbrack }\right)$, the pair at position 0, after which we choose pairs for the remaining $n - 1$ positions. For ${P}_{i}\left\lbrack 0\right\rbrack$, $i \in \{ 1,2\}$, we can choose $n$ distinct replicas, of which ${m}_{i}$ are faulty. If we pick a non-faulty replica, then the remainder of ${P}_{i}$ is constructed out of $n - 1$ replicas, of which ${m}_{i}$ are faulty. Otherwise, the remainder of ${P}_{i}$ is constructed out of $n - 1$ replicas of which ${m}_{i} - 1$ are faulty. If, due to our choice of $\left( {{P}_{1}\left\lbrack 0\right\rbrack ,{P}_{2}\left\lbrack 0\right\rbrack }\right)$, the first position is faulty, then only $k - 1$ out of the $n - 1$ remaining positions must be faulty. Otherwise, $k$ out of the $n - 1$ remaining positions must be faulty. Combining this analysis yields four types for the first pair $\left( {{P}_{1}\left\lbrack 0\right\rbrack ,{P}_{2}\left\lbrack 0\right\rbrack }\right)$:

1. A non-faulty pair $\left( {{P}_{1}\left\lbrack 0\right\rbrack ,{P}_{2}\left\lbrack 0\right\rbrack }\right) \in \operatorname{nf}\left( {P}_{1}\right) \times \operatorname{nf}\left( {P}_{2}\right)$. We have $\left( {n - {m}_{1}}\right) \left( {n - {m}_{2}}\right)$ such pairs, and we have $\mathbb{F}\left( {n - 1,{m}_{1},{m}_{2},k}\right)$ different ways to construct the remainder of ${P}_{1}$ and ${P}_{2}$.

2. A 1-faulty pair $\left( {{P}_{1}\left\lbrack 0\right\rbrack ,{P}_{2}\left\lbrack 0\right\rbrack }\right) \in \mathrm{f}\left( {P}_{1}\right) \times \operatorname{nf}\left( {P}_{2}\right)$. We have ${m}_{1}\left( {n - {m}_{2}}\right)$ such pairs, and we have $\mathbb{F}\left( {n - 1,{m}_{1} - 1,{m}_{2},k - 1}\right)$ different ways to construct the remainder of ${P}_{1}$ and ${P}_{2}$.

3. A 2-faulty pair $\left( {{P}_{1}\left\lbrack 0\right\rbrack ,{P}_{2}\left\lbrack 0\right\rbrack }\right) \in \operatorname{nf}\left( {P}_{1}\right) \times \mathrm{f}\left( {P}_{2}\right)$. We have $\left( {n - {m}_{1}}\right) {m}_{2}$ such pairs, and we have $\mathbb{F}\left( {n - 1,{m}_{1},{m}_{2} - 1,k - 1}\right)$ different ways to construct the remainder of ${P}_{1}$ and ${P}_{2}$.

4. A both-faulty pair $\left( {{P}_{1}\left\lbrack 0\right\rbrack ,{P}_{2}\left\lbrack 0\right\rbrack }\right) \in \mathrm{f}\left( {P}_{1}\right) \times \mathrm{f}\left( {P}_{2}\right)$. We have ${m}_{1}{m}_{2}$ such pairs, and we have $\mathbb{F}\left( {n - 1,{m}_{1} - 1,{m}_{2} - 1,k - 1}\right)$ different ways to construct the remainder of ${P}_{1}$ and ${P}_{2}$.

Hence, for all $k$, $\max \left( {{m}_{1},{m}_{2}}\right) \leq k \leq \min \left( {n,{m}_{1} + {m}_{2}}\right)$, $\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right)$ is recursively defined by:
$$
\begin{aligned}
\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right) ={} & \left( {n - {m}_{1}}\right) \left( {n - {m}_{2}}\right) \mathbb{F}\left( {n - 1,{m}_{1},{m}_{2},k}\right) && \text{(non-faulty pair)} \\
 & + {m}_{1}\left( {n - {m}_{2}}\right) \mathbb{F}\left( {n - 1,{m}_{1} - 1,{m}_{2},k - 1}\right) && \text{(1-faulty pair)} \\
 & + \left( {n - {m}_{1}}\right) {m}_{2}\mathbb{F}\left( {n - 1,{m}_{1},{m}_{2} - 1,k - 1}\right) && \text{(2-faulty pair)} \\
 & + {m}_{1}{m}_{2}\mathbb{F}\left( {n - 1,{m}_{1} - 1,{m}_{2} - 1,k - 1}\right) , && \text{(both-faulty pair)}
\end{aligned}
$$
and the base case for this recursion is $\mathbb{F}\left( {0,0,0,0}\right) = 1$ .
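This recursion translates directly into memoized code. The following Python sketch computes $\mathbb{F}$ and checks, for the clusters of Example 5.1 ($n = 5$, ${m}_{1} = {m}_{2} = 2$), that all $5{!}^{2} = 14400$ permutation pairs are accounted for:

```python
from functools import lru_cache
import math

@lru_cache(maxsize=None)
def F(n, m1, m2, k):
    """Number of permutation pairs (P1, P2) with exactly k faulty
    positions, following the four-case recursion above."""
    if m1 < 0 or m2 < 0 or k < 0:
        return 0
    if n == 0:
        return 1 if (m1, m2, k) == (0, 0, 0) else 0
    if k < max(m1, m2) or k > min(n, m1 + m2):
        return 0
    return ((n - m1) * (n - m2) * F(n - 1, m1, m2, k)      # non-faulty
            + m1 * (n - m2) * F(n - 1, m1 - 1, m2, k - 1)  # 1-faulty
            + (n - m1) * m2 * F(n - 1, m1, m2 - 1, k - 1)  # 2-faulty
            + m1 * m2 * F(n - 1, m1 - 1, m2 - 1, k - 1))   # both-faulty

# Every pair has between max(m1, m2) and min(n, m1 + m2) faulty
# positions, so the counts must sum to (n!)^2:
assert sum(F(5, 2, 2, k) for k in range(2, 5)) == math.factorial(5) ** 2
```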
Example 5.3. Reconsider the list pairs $\left( {{P}_{1},{P}_{2}}\right) ,\left( {{Q}_{1},{Q}_{2}}\right)$, and $\left( {{R}_{1},{R}_{2}}\right)$ from Example 5.1. In $\left( {{P}_{1},{P}_{2}}\right)$, we have both-faulty pairs at positions 0 and 2 and non-faulty pairs at positions 1, 3, and 4. In $\left( {{Q}_{1},{Q}_{2}}\right)$, we have a 1-faulty pair at position 0, non-faulty pairs at positions 1 and 2, a 2-faulty pair at position 3, and a both-faulty pair at position 4. Finally, in $\left( {{R}_{1},{R}_{2}}\right)$, we have 2-faulty pairs at positions 0 and 1, a non-faulty pair at position 2, and 1-faulty pairs at positions 3 and 4.
Using the combinatorics of faulty positions, we formalize an exact solution to the non-faulty position trials problem:
Lemma 5.1. Let ${S}_{1}$ and ${S}_{2}$ be lists of $n = {\mathbf{n}}_{{S}_{1}} = {\mathbf{n}}_{{S}_{2}}$ replicas with ${m}_{1} = {\mathbf{f}}_{{S}_{1}}$ and ${m}_{2} = {\mathbf{f}}_{{S}_{2}}$. If ${m}_{1} + {m}_{2} < n$, then the non-faulty position trials problem $\mathbb{E}\left( {n,{m}_{1},{m}_{2}}\right)$ has solution

$$
\frac{1}{n{!}^{2}}\left( {\mathop{\sum }\limits_{{k = \max \left( {{m}_{1},{m}_{2}}\right) }}^{{{m}_{1} + {m}_{2}}}\frac{n}{n - k}\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right) }\right) .
$$
Proof. We have $\left| {\operatorname{perms}\left( {S}_{1}\right) }\right| = \left| {\operatorname{perms}\left( {S}_{2}\right) }\right| = n!$. Consequently, we have $\left| {\operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right) }\right| = n{!}^{2}$ and we have probability $1/\left( {n{!}^{2}}\right)$ to choose any pair $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$. Now consider such a pair $\left( {{P}_{1},{P}_{2}}\right)$. As there are ${\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}}$ faulty positions in $\left( {{P}_{1},{P}_{2}}\right)$, we have probability $p\left( {{P}_{1},{P}_{2}}\right) = \left( {n - {\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}}}\right) /n$ to inspect a non-faulty position. Notice that $\max \left( {{m}_{1},{m}_{2}}\right) \leq {\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} \leq {m}_{1} + {m}_{2} < n$ and, hence, $0 < p\left( {{P}_{1},{P}_{2}}\right) \leq 1$. Each of the inspected positions in $\left( {{P}_{1},{P}_{2}}\right)$ is chosen fully at random. Hence, each inspection is a Bernoulli trial with probability of success $p\left( {{P}_{1},{P}_{2}}\right)$, and we expect to inspect a first non-faulty position in the $p{\left( {P}_{1},{P}_{2}\right) }^{-1} = n/\left( {n - {\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}}}\right)$-th attempt. We conclude that the non-faulty position trials problem $\mathbb{E}\left( {n,{m}_{1},{m}_{2}}\right)$ has solution

$$
\frac{1}{n{!}^{2}}\left( {\mathop{\sum }\limits_{{\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right) }}\frac{n}{n - {\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}}}}\right) .
$$

Notice that, for each $k$ with $\max \left( {{m}_{1},{m}_{2}}\right) \leq k \leq {m}_{1} + {m}_{2} < n$, there are $\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right)$ distinct pairs $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ with ${\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} = k$. Hence, in the above expression for $\mathbb{E}\left( {n,{m}_{1},{m}_{2}}\right)$, we can group on these pairs to obtain the searched-for solution.
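For small lists, Lemma 5.1 can be verified by brute force. The sketch below (a hypothetical test harness, not part of the protocols) enumerates all permutation pairs and averages the per-pair expected waiting time $n/(n - k)$; for the lists of Example 5.1 it yields $\mathbb{E}(5, 2, 2) = 19/6 \approx 3.17$:

```python
import itertools
import math
from fractions import Fraction

def E_bruteforce(n, m1, m2):
    """E(n, m1, m2) by direct enumeration over perms(S1) x perms(S2).
    Replica i of a list is faulty iff i < m; for each pair we add the
    expected number of random inspections n / (n - k), where k is the
    number of faulty positions of that pair."""
    total = Fraction(0)
    for p1 in itertools.permutations(range(n)):
        for p2 in itertools.permutations(range(n)):
            k = sum(1 for a, b in zip(p1, p2) if a < m1 or b < m2)
            total += Fraction(n, n - k)
    return total / math.factorial(n) ** 2

print(E_bruteforce(5, 2, 2))  # Fraction(19, 6), i.e., about 3.17
```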
To further solve the non-faulty position trials problem, we work towards a closed form for $\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right)$. Consider any pair $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ with ${\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} = k$ obtained via the outlined step-wise construction. Let ${b}_{1}$ be the number of 1-faulty pairs, let ${b}_{2}$ be the number of 2-faulty pairs, and let ${b}_{1,2}$ be the number of both-faulty pairs in $\left( {{P}_{1},{P}_{2}}\right)$. By construction, we must have $k = {b}_{1} + {b}_{2} + {b}_{1,2}$, ${m}_{1} = {b}_{1} + {b}_{1,2}$, and ${m}_{2} = {b}_{2} + {b}_{1,2}$ and, by rearranging terms, we can derive

$$
{b}_{1,2} = \left( {{m}_{1} + {m}_{2}}\right) - k,\;{b}_{1} = k - {m}_{2},\;{b}_{2} = k - {m}_{1}.
$$
Example 5.4. Consider
$$
{S}_{1} = \left\lbrack {{\mathrm{R}}_{1,1},\ldots ,{\mathrm{R}}_{1,5}}\right\rbrack ,\;\mathrm{f}\left( {S}_{1}\right) = \left\{ {{\mathrm{R}}_{1,1},{\mathrm{R}}_{1,2},{\mathrm{R}}_{1,3}}\right\} ;
$$

$$
{S}_{2} = \left\lbrack {{\mathrm{R}}_{2,1},\ldots ,{\mathrm{R}}_{2,5}}\right\rbrack ,\;\mathrm{f}\left( {S}_{2}\right) = \left\{ {\mathrm{R}}_{2,1}\right\} .
$$
Hence, we have $n = 5,{m}_{1} = {\mathbf{f}}_{{S}_{1}} = 3$ , and ${m}_{2} = {\mathbf{f}}_{{S}_{2}} = 1$ . If we want to create a pair $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ with $k = {\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} = 3$ faulty positions, then $\left( {{P}_{1},{P}_{2}}\right)$ must have two non-faulty pairs, two 1-faulty pairs, no 2-faulty pairs, and one both-faulty pair. Hence, we have $n - k = 2,{b}_{1} = 2$ , ${b}_{2} = 0$ , and ${b}_{1,2} = 1$ .
The above analysis only depends on the choice of ${m}_{1},{m}_{2}$ , and $k$ , and not on our choice of $\left( {{P}_{1},{P}_{2}}\right)$ . Next, we use this analysis to express $\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right)$ in terms of the number of distinct ways in which one can construct
(A) lists of ${b}_{1}$ 1-faulty pairs out of faulty replicas from ${S}_{1}$ and non-faulty replicas from ${S}_{2}$,

(B) lists of ${b}_{2}$ 2-faulty pairs out of non-faulty replicas from ${S}_{1}$ and faulty replicas from ${S}_{2}$,

(C) lists of ${b}_{1,2}$ both-faulty pairs out of the remaining faulty replicas in ${S}_{1}$ and ${S}_{2}$ that are not used in the previous two cases, and

(D) lists of $n - k$ non-faulty pairs out of the remaining (non-faulty) replicas in ${S}_{1}$ and ${S}_{2}$ that are not used in the previous three cases;
and in terms of the number of distinct ways one can merge these lists. As the first step, we look at how many distinct ways we can merge two lists together:
Lemma 5.2. For any two disjoint lists $S$ and $T$ with $\left| S\right| = v$ and $\left| T\right| = w$, there exist $\mathbb{M}\left( {v,w}\right) = \left( {v + w}\right) !/\left( {v!w!}\right)$ distinct lists $L$ with ${\left. L\right| }_{S} = S$ and ${\left. L\right| }_{T} = T$, in which ${\left. L\right| }_{M}$, $M \in \{ S,T\}$, is the list obtained from $L$ by only keeping the values that also appear in list $M$.
Next, we look at the number of distinct ways in which one can construct lists of type A, B, C, and D. Consider the construction of a list of type A. We can choose $\binom{{m}_{1}}{{b}_{1}}$ distinct sets of ${b}_{1}$ faulty replicas from ${S}_{1}$ and we can choose $\binom{n - {m}_{2}}{{b}_{1}}$ distinct sets of ${b}_{1}$ non-faulty replicas from ${S}_{2}$. As we can order the chosen values from ${S}_{1}$ and ${S}_{2}$ in ${b}_{1}!$ distinct ways each, we can construct ${b}_{1}{!}^{2}\binom{{m}_{1}}{{b}_{1}}\binom{n - {m}_{2}}{{b}_{1}}$ distinct lists of type A. Likewise, we can construct ${b}_{2}{!}^{2}\binom{n - {m}_{1}}{{b}_{2}}\binom{{m}_{2}}{{b}_{2}}$ distinct lists of type B.
Example 5.5. We continue from the setting of Example 5.4: we want to create a pair $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ with $k = {\begin{Vmatrix}{P}_{1};{P}_{2}\end{Vmatrix}}_{\mathbf{f}} = 3$ faulty positions. To create $\left( {{P}_{1},{P}_{2}}\right)$, we need to create ${b}_{1} = 2$ pairs that are 1-faulty. We have $\binom{{m}_{1}}{{b}_{1}} = \binom{3}{2} = 3$ sets of two faulty replicas in ${S}_{1}$ that we can choose, namely the sets $\left\{ {{\mathrm{R}}_{1,1},{\mathrm{R}}_{1,2}}\right\}$, $\left\{ {{\mathrm{R}}_{1,1},{\mathrm{R}}_{1,3}}\right\}$, and $\left\{ {{\mathrm{R}}_{1,2},{\mathrm{R}}_{1,3}}\right\}$. Likewise, we have $\binom{n - {m}_{2}}{{b}_{1}} = \binom{4}{2} = 6$ sets of two non-faulty replicas in ${S}_{2}$ that we can choose. Assume we choose ${T}_{1} = \left\{ {{\mathrm{R}}_{1,1},{\mathrm{R}}_{1,3}}\right\}$ from ${S}_{1}$ and ${T}_{2} = \left\{ {{\mathrm{R}}_{2,4},{\mathrm{R}}_{2,5}}\right\}$ from ${S}_{2}$. The two replicas in ${T}_{1}$ can be ordered in ${\mathbf{n}}_{{T}_{1}}! = 2! = 2$ ways, namely $\left\lbrack {{\mathrm{R}}_{1,1},{\mathrm{R}}_{1,3}}\right\rbrack$ and $\left\lbrack {{\mathrm{R}}_{1,3},{\mathrm{R}}_{1,1}}\right\rbrack$. Likewise, the two replicas in ${T}_{2}$ can be ordered in ${\mathbf{n}}_{{T}_{2}}! = 2! = 2$ ways. Hence, we can construct $2 \cdot 2 = 4$ distinct lists of type A out of this single choice for ${T}_{1}$ and ${T}_{2}$, and the sequences ${S}_{1}$ and ${S}_{2}$ provide us with $\binom{{m}_{1}}{{b}_{1}}\binom{n - {m}_{2}}{{b}_{1}} = {18}$ distinct choices for ${T}_{1}$ and ${T}_{2}$. We conclude that we can construct 72 distinct lists of type A from ${S}_{1}$ and ${S}_{2}$.
By construction, lists of type A and type B cannot utilize the same replicas from ${S}_{1}$ or ${S}_{2}$. After choosing ${b}_{1} + {b}_{2}$ replicas in ${S}_{1}$ and ${S}_{2}$ for the construction of lists of type A and B, the remaining ${b}_{1,2}$ faulty replicas in ${S}_{1}$ and ${S}_{2}$ are all used for constructing lists of type C. As we can order these remaining values from ${S}_{1}$ and ${S}_{2}$ in ${b}_{1,2}!$ distinct ways, we can construct ${b}_{1,2}{!}^{2}$ distinct lists of type C (per choice of lists of type A and B). Likewise, the remaining $n - k$ non-faulty replicas in ${S}_{1}$ and ${S}_{2}$ are all used for constructing lists of type D, and we can construct $\left( {n - k}\right) {!}^{2}$ distinct lists of type D (per choice of lists of type A and B).
As the final steps, we merge lists of type A and B into lists of type AB. We can do so in $\mathbb{M}\left( {{b}_{1},{b}_{2}}\right)$ ways and the resultant lists have size ${b}_{1} + {b}_{2}$. Next, we merge lists of type AB and C into lists of type ABC. We can do so in $\mathbb{M}\left( {{b}_{1} + {b}_{2},{b}_{1,2}}\right)$ ways and the resultant lists have size $k$. Finally, we merge lists of type ABC and D together, which we can do in $\mathbb{M}\left( {k,n - k}\right)$ ways. From this construction, we derive that $\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right)$ is equivalent to

$$
\begin{aligned}
& {b}_{1}{!}^{2}\binom{{m}_{1}}{{b}_{1}}\binom{n - {m}_{2}}{{b}_{1}}\,{b}_{2}{!}^{2}\binom{n - {m}_{1}}{{b}_{2}}\binom{{m}_{2}}{{b}_{2}} \cdot {} \\
& \mathbb{M}\left( {{b}_{1},{b}_{2}}\right) \,{b}_{1,2}{!}^{2}\,\mathbb{M}\left( {{b}_{1} + {b}_{2},{b}_{1,2}}\right) \,\left( {n - k}\right) {!}^{2}\,\mathbb{M}\left( {k,n - k}\right) ,
\end{aligned}
$$
which can be simplified to the following:
Lemma 5.3. Let $\max \left( {{m}_{1},{m}_{2}}\right) \leq k \leq \min \left( {n,{m}_{1} + {m}_{2}}\right)$ and let ${b}_{1} = k - {m}_{2},{b}_{2} = k - {m}_{1}$ , and ${b}_{1,2} = \left( {{m}_{1} + {m}_{2}}\right) - k$ . We have
$$
\mathbb{F}\left( {n,{m}_{1},{m}_{2},k}\right) = \frac{{m}_{1}!\,{m}_{2}!\left( {n - {m}_{1}}\right) !\left( {n - {m}_{2}}\right) !\,n!}{{b}_{1}!\,{b}_{2}!\,{b}_{1,2}!\left( {n - k}\right) !}.
$$
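A quick numerical check of this closed form is reassuring. The following Python sketch computes it and verifies, for the setting of Example 5.4 ($n = 5$, ${m}_{1} = 3$, ${m}_{2} = 1$), both a hand-computed spot value and that the counts over all valid $k$ sum to $5{!}^{2} = 14400$:

```python
import math

def F_closed(n, m1, m2, k):
    """Closed form of Lemma 5.3 for the number of permutation pairs
    with exactly k faulty positions."""
    b1, b2, b12 = k - m2, k - m1, (m1 + m2) - k
    if min(b1, b2, b12, n - k) < 0:   # outside the valid range for k
        return 0
    num = (math.factorial(m1) * math.factorial(m2)
           * math.factorial(n - m1) * math.factorial(n - m2)
           * math.factorial(n))
    den = (math.factorial(b1) * math.factorial(b2)
           * math.factorial(b12) * math.factorial(n - k))
    return num // den

assert F_closed(5, 3, 1, 3) == 8640   # computed by hand from the formula
assert sum(F_closed(5, 3, 1, k) for k in range(3, 5)) == math.factorial(5) ** 2
```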
We combine Lemma 5.1 and Lemma 5.3 to conclude
Proposition 5.2. Let ${S}_{1}$ and ${S}_{2}$ be lists of $n = {\mathbf{n}}_{{S}_{1}} = {\mathbf{n}}_{{S}_{2}}$ replicas with ${m}_{1} = {\mathbf{f}}_{{S}_{1}}$ and ${m}_{2} = {\mathbf{f}}_{{S}_{2}}$ and, for each $k$, let ${b}_{1} = k - {m}_{2}$, ${b}_{2} = k - {m}_{1}$, and ${b}_{1,2} = \left( {{m}_{1} + {m}_{2}}\right) - k$. If ${m}_{1} + {m}_{2} < n$, then the non-faulty position trials problem $\mathbb{E}\left( {n,{m}_{1},{m}_{2}}\right)$ has solution

$$
\frac{1}{n{!}^{2}}\left( {\mathop{\sum }\limits_{{k = \max \left( {{m}_{1},{m}_{2}}\right) }}^{{{m}_{1} + {m}_{2}}}\frac{n}{n - k}\frac{{m}_{1}!{m}_{2}!\left( {n - {m}_{1}}\right) !\left( {n - {m}_{2}}\right) !n!}{{b}_{1}!{b}_{2}!{b}_{1,2}!\left( {n - k}\right) !}}\right) .
$$
Finally, we use Proposition 5.2 to derive
Proposition 5.3. Let ${\mathcal{C}}_{1},{\mathcal{C}}_{2}$ be disjoint clusters and let $\Phi$ be a list-pair function with $\left( {{S}_{1},{S}_{2}}\right) \mathrel{\text{ := }} \Phi \left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right)$ and $n = {\mathbf{n}}_{{S}_{1}} = {\mathbf{n}}_{{S}_{2}}$. If communication is synchronous and ${\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}} < n$, then the expected number of cluster-sending steps performed by $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,\Phi }\right)$ is at most $\mathbb{E}\left( {n,{\mathbf{f}}_{{S}_{1}},{\mathbf{f}}_{{S}_{2}}}\right)$.
Proof. Let $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$. We notice that CSPL inspects positions in ${P}_{1}$ and ${P}_{2}$ in a different way than the non-faulty position trials problem: at Line 7 of Figure 5, positions are inspected one-by-one in a predetermined order and not fully at random (with replacement). Next, we argue that $\mathbb{E}\left( {n,{\mathbf{f}}_{{S}_{1}},{\mathbf{f}}_{{S}_{2}}}\right)$ provides an upper bound on the expected number of cluster-sending steps regardless of these differences. Without loss of generality, we assume that ${S}_{1}$ and ${S}_{2}$ each hold $n$ distinct replicas. Consequently, the pair $\left( {{P}_{1},{P}_{2}}\right)$ represents a set $R$ of $n$ distinct replica pairs taken from ${\mathcal{C}}_{1} \times {\mathcal{C}}_{2}$. We notice that each of the $n!$ permutations of $R$ is represented by a single pair $\left( {{P}_{1}^{\prime },{P}_{2}^{\prime }}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$.
Now consider the selection of positions in $\left( {{P}_{1},{P}_{2}}\right)$ fully at random, but without replacement. This process will yield a list $\left\lbrack {{j}_{0},\ldots ,{j}_{n - 1}}\right\rbrack \in \operatorname{perms}\left( \left\lbrack {0,\ldots ,n - 1}\right\rbrack \right)$ of positions fully at random. Let ${Q}_{i} = \left\lbrack {{P}_{i}\left\lbrack {j}_{0}\right\rbrack ,\ldots ,{P}_{i}\left\lbrack {j}_{n - 1}\right\rbrack }\right\rbrack ,i \in \{ 1,2\}$ . We notice that the pair $\left( {{Q}_{1},{Q}_{2}}\right)$ also represents $R$ and we have $\left( {{Q}_{1},{Q}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ . Hence, by choosing a pair $\left( {{P}_{1},{P}_{2}}\right) \in \operatorname{perms}\left( {S}_{1}\right) \times \operatorname{perms}\left( {S}_{2}\right)$ , we choose set $R$ fully at random and, at the same time, we choose the order in which replica pairs in $R$ are inspected fully at random.
Finally, we note that CSPL inspects positions without replacement. As the expected number of inspected positions in the non-faulty position trials problem only decreases if we choose positions without replacement, we have proven that $\mathbb{E}\left( {n,{\mathbf{f}}_{{S}_{1}},{\mathbf{f}}_{{S}_{2}}}\right)$ is an upper bound on the expected number of cluster-sending steps.
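A simple simulation illustrates Proposition 5.3. The Python sketch below (replicas $0,\ldots,m-1$ of each list are faulty; a step succeeds exactly when the inspected position is non-faulty) estimates the number of cluster-sending steps used by CSPL; for $n = 5$ and ${m}_{1} = {m}_{2} = 2$ it converges to about $2.25$, safely below the bound $\mathbb{E}(5,2,2) = 19/6 \approx 3.17$:

```python
import random

def simulate_cspl_steps(n, m1, m2, runs=100_000):
    """Monte Carlo estimate of the cluster-sending steps used by CSPL.
    Lists hold n distinct replicas; replica i is faulty iff i < m."""
    total = 0
    for _ in range(runs):
        p1 = random.sample(range(n), n)  # random permutation of S1
        p2 = random.sample(range(n), n)  # random permutation of S2
        for steps, (a, b) in enumerate(zip(p1, p2), start=1):
            if a >= m1 and b >= m2:      # non-faulty position found
                total += steps
                break
    return total / runs

print(simulate_cspl_steps(5, 2, 2))  # about 2.25 < 19/6
```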
§ 5.2 PRACTICAL INSTANCES OF CSPL
As the last step in providing practical instances of CSPL, we need to provide practical list-pair functions to be used in conjunction with CSPL. We provide two such functions that address most practical environments. Let ${\mathcal{C}}_{1},{\mathcal{C}}_{2}$ be disjoint clusters, let ${n}_{\min } = \min \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right)$ , and let ${n}_{\max } = \max \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right)$ . We provide list-pair functions
$$
{\Phi }_{\min }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right) \mapsto \left( {\operatorname{list}{\left( {\mathcal{C}}_{1}\right) }^{ : {n}_{\min }},\operatorname{list}{\left( {\mathcal{C}}_{2}\right) }^{ : {n}_{\min }}}\right) ,
$$

$$
{\Phi }_{\max }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right) \mapsto \left( {\operatorname{list}{\left( {\mathcal{C}}_{1}\right) }^{ : {n}_{\max }},\operatorname{list}{\left( {\mathcal{C}}_{2}\right) }^{ : {n}_{\max }}}\right) ,
$$
in which ${L}^{ : n}$ denotes the first $n$ values in the list obtained by repeating list $L$ . Next, we illustrate usage of these functions:
Example 5.6. Consider clusters ${\mathcal{C}}_{1},{\mathcal{C}}_{2}$ with
$$
{S}_{1} = \operatorname{list}\left( {\mathcal{C}}_{1}\right) = \left\lbrack {{\mathrm{R}}_{1,1},\ldots ,{\mathrm{R}}_{1,9}}\right\rbrack ;
$$

$$
{S}_{2} = \operatorname{list}\left( {\mathcal{C}}_{2}\right) = \left\lbrack {{\mathrm{R}}_{2,1},\ldots ,{\mathrm{R}}_{2,4}}\right\rbrack .
$$
We have
$$
{\Phi }_{\min }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right) = \left( {\left\lbrack {{\mathrm{R}}_{1,1},\ldots ,{\mathrm{R}}_{1,4}}\right\rbrack ,\operatorname{list}\left( {\mathcal{C}}_{2}\right) }\right) ;
$$

$$
{\Phi }_{\max }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right) = \left( {\operatorname{list}\left( {\mathcal{C}}_{1}\right) ,\left\lbrack {{\mathrm{R}}_{2,1},\ldots ,{\mathrm{R}}_{2,4},{\mathrm{R}}_{2,1},\ldots ,{\mathrm{R}}_{2,4},{\mathrm{R}}_{2,1}}\right\rbrack }\right) .
$$
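Both list-pair functions are one-liners over a cyclic repetition of the cluster's replica list. The following Python sketch (clusters given as ordered lists, matching $\operatorname{list}\left( \mathcal{C}\right)$) reproduces Example 5.6:

```python
from itertools import cycle, islice

def take_cyclic(lst, n):
    """L^{:n}: the first n values of the list obtained by repeating L."""
    return list(islice(cycle(lst), n))

def phi_min(c1, c2):
    n = min(len(c1), len(c2))
    return take_cyclic(c1, n), take_cyclic(c2, n)

def phi_max(c1, c2):
    n = max(len(c1), len(c2))
    return take_cyclic(c1, n), take_cyclic(c2, n)

# Example 5.6: list(C1) = [R1,1..R1,9] and list(C2) = [R2,1..R2,4].
c1 = [f"R1,{i}" for i in range(1, 10)]
c2 = [f"R2,{i}" for i in range(1, 5)]
assert phi_min(c1, c2) == (c1[:4], c2)
assert phi_max(c1, c2) == (c1, c2 + c2 + [c2[0]])
```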
Next, we combine ${\Phi }_{\min }$ and ${\Phi }_{\max }$ with CSPL, show that in practical environments ${\Phi }_{\min }$ and ${\Phi }_{\max }$ satisfy the requirements put on list-pair functions in Proposition 5.1 to guarantee termination and cluster-sending, and use these results to determine the expected constant complexity of the resulting instances of CSPL.
Theorem 5.7. Let ${\mathcal{C}}_{1},{\mathcal{C}}_{2}$ be disjoint clusters with synchronous communication.
1. If $n = \min \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right) > 2\max \left( {{\mathbf{f}}_{{\mathcal{C}}_{1}},{\mathbf{f}}_{{\mathcal{C}}_{2}}}\right)$, then the expected number of cluster-sending steps performed by $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,{\Phi }_{\min }}\right)$ is upper bounded by 4. For every $\left( {{S}_{1},{S}_{2}}\right) \mathrel{\text{ := }} {\Phi }_{\min }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right)$, we have $n = {\mathbf{n}}_{{S}_{1}} = {\mathbf{n}}_{{S}_{2}}$, $n > 2{\mathbf{f}}_{{S}_{1}}$, $n > 2{\mathbf{f}}_{{S}_{2}}$, and $n > {\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}}$.

2. If $n = \min \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right) > 3\max \left( {{\mathbf{f}}_{{\mathcal{C}}_{1}},{\mathbf{f}}_{{\mathcal{C}}_{2}}}\right)$, then the expected number of cluster-sending steps performed by $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,{\Phi }_{\min }}\right)$ is upper bounded by $2\frac{1}{4}$. For every $\left( {{S}_{1},{S}_{2}}\right) \mathrel{\text{ := }} {\Phi }_{\min }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right)$, we have $n = {\mathbf{n}}_{{S}_{1}} = {\mathbf{n}}_{{S}_{2}}$, $n > 3{\mathbf{f}}_{{S}_{1}}$, $n > 3{\mathbf{f}}_{{S}_{2}}$, and $n > {\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}}$.

3. If ${\mathbf{n}}_{{\mathcal{C}}_{1}} > 3{\mathbf{f}}_{{\mathcal{C}}_{1}}$ and ${\mathbf{n}}_{{\mathcal{C}}_{2}} > 3{\mathbf{f}}_{{\mathcal{C}}_{2}}$, then the expected number of cluster-sending steps performed by $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,{\Phi }_{\max }}\right)$ is upper bounded by 3. For every $\left( {{S}_{1},{S}_{2}}\right) \mathrel{\text{ := }} {\Phi }_{\max }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right)$, we have $n = {\mathbf{n}}_{{S}_{1}} = {\mathbf{n}}_{{S}_{2}} = \max \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right) > {\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}}$ and either we have ${\mathbf{n}}_{{\mathcal{C}}_{1}} \geq {\mathbf{n}}_{{\mathcal{C}}_{2}}$, $n > 3{\mathbf{f}}_{{S}_{1}}$, and $n > 2{\mathbf{f}}_{{S}_{2}}$; or we have ${\mathbf{n}}_{{\mathcal{C}}_{2}} \geq {\mathbf{n}}_{{\mathcal{C}}_{1}}$, $n > 2{\mathbf{f}}_{{S}_{1}}$, and $n > 3{\mathbf{f}}_{{S}_{2}}$.

Each of these instances of CSPL results in cluster-sending $v$ from ${\mathcal{C}}_{1}$ to ${\mathcal{C}}_{2}$.
Proof. First, we prove the properties of ${\Phi }_{\min }$ and ${\Phi }_{\max }$ claimed in the three statements of the theorem. In the first and second statement of the theorem, we have $\min \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right) > c\max \left( {{\mathbf{f}}_{{\mathcal{C}}_{1}},{\mathbf{f}}_{{\mathcal{C}}_{2}}}\right)$, $c \in \{ 2,3\}$. Let $\left( {{S}_{1},{S}_{2}}\right) \mathrel{\text{ := }} {\Phi }_{\min }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right)$ and $n = {\mathbf{n}}_{{S}_{1}} = {\mathbf{n}}_{{S}_{2}}$. By definition of ${\Phi }_{\min }$, we have $n = \min \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right)$, in which case ${S}_{i}$, $i \in \{ 1,2\}$, holds $n$ distinct replicas from ${\mathcal{C}}_{i}$. Hence, we have ${\mathbf{f}}_{{\mathcal{C}}_{i}} \geq {\mathbf{f}}_{{S}_{i}}$ and, as $n > c\max \left( {{\mathbf{f}}_{{\mathcal{C}}_{1}},{\mathbf{f}}_{{\mathcal{C}}_{2}}}\right) \geq c{\mathbf{f}}_{{\mathcal{C}}_{i}}$, also $n > c{\mathbf{f}}_{{S}_{i}}$. Finally, as $n > 2{\mathbf{f}}_{{S}_{1}}$ and $n > 2{\mathbf{f}}_{{S}_{2}}$, we have ${2n} > 2{\mathbf{f}}_{{S}_{1}} + 2{\mathbf{f}}_{{S}_{2}}$ and, hence, $n > {\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}}$.
In the last statement of the theorem, we have ${\mathbf{n}}_{{\mathcal{C}}_{1}} > 3{\mathbf{f}}_{{\mathcal{C}}_{1}}$ and ${\mathbf{n}}_{{\mathcal{C}}_{2}} > 3{\mathbf{f}}_{{\mathcal{C}}_{2}}$. Without loss of generality, we assume ${\mathbf{n}}_{{\mathcal{C}}_{1}} \geq {\mathbf{n}}_{{\mathcal{C}}_{2}}$. Let $\left( {{S}_{1},{S}_{2}}\right) \mathrel{\text{ := }} {\Phi }_{\max }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right)$ and $n = {\mathbf{n}}_{{S}_{1}} = {\mathbf{n}}_{{S}_{2}}$. By definition of ${\Phi }_{\max }$, we have $n = \max \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right) = {\mathbf{n}}_{{\mathcal{C}}_{1}}$. As $n = {\mathbf{n}}_{{\mathcal{C}}_{1}}$, we have ${S}_{1} = \operatorname{list}\left( {\mathcal{C}}_{1}\right)$. Consequently, we also have ${\mathbf{f}}_{{S}_{1}} = {\mathbf{f}}_{{\mathcal{C}}_{1}}$ and, hence, ${\mathbf{n}}_{{S}_{1}} > 3{\mathbf{f}}_{{S}_{1}}$. Next, we will show that ${\mathbf{n}}_{{S}_{2}} > 2{\mathbf{f}}_{{S}_{2}}$. Let $q = {\mathbf{n}}_{{\mathcal{C}}_{1}} \operatorname{div} {\mathbf{n}}_{{\mathcal{C}}_{2}}$ and $r = {\mathbf{n}}_{{\mathcal{C}}_{1}} \bmod {\mathbf{n}}_{{\mathcal{C}}_{2}}$. We note that $\operatorname{list}{\left( {\mathcal{C}}_{2}\right) }^{ : n}$ contains $q$ full copies of $\operatorname{list}\left( {\mathcal{C}}_{2}\right)$ and one partial copy of $\operatorname{list}\left( {\mathcal{C}}_{2}\right)$. Let $T \subset {\mathcal{C}}_{2}$ be the set of replicas in this partial copy. By construction, we have ${\mathbf{n}}_{{S}_{2}} = q{\mathbf{n}}_{{\mathcal{C}}_{2}} + r > 3q{\mathbf{f}}_{{\mathcal{C}}_{2}} + {\mathbf{f}}_{T} + {\mathbf{{nf}}}_{T}$ and ${\mathbf{f}}_{{S}_{2}} = q{\mathbf{f}}_{{\mathcal{C}}_{2}} + {\mathbf{f}}_{T}$ with ${\mathbf{f}}_{T} \leq \min \left( {{\mathbf{f}}_{{\mathcal{C}}_{2}},r}\right)$. As $q \geq 1$ and ${\mathbf{f}}_{{\mathcal{C}}_{2}} \geq {\mathbf{f}}_{T}$, we have $q{\mathbf{f}}_{{\mathcal{C}}_{2}} \geq {\mathbf{f}}_{{\mathcal{C}}_{2}} \geq {\mathbf{f}}_{T}$. Hence, ${\mathbf{n}}_{{S}_{2}} > 3q{\mathbf{f}}_{{\mathcal{C}}_{2}} + {\mathbf{f}}_{T} + {\mathbf{{nf}}}_{T} \geq 2q{\mathbf{f}}_{{\mathcal{C}}_{2}} + {\mathbf{f}}_{{\mathcal{C}}_{2}} + {\mathbf{f}}_{T} + {\mathbf{{nf}}}_{T} \geq 2\left( {q{\mathbf{f}}_{{\mathcal{C}}_{2}} + {\mathbf{f}}_{T}}\right) + {\mathbf{{nf}}}_{T} \geq 2{\mathbf{f}}_{{S}_{2}}$. Finally, as $n > 3{\mathbf{f}}_{{S}_{1}}$ and $n > 2{\mathbf{f}}_{{S}_{2}}$, we have ${2n} > 3{\mathbf{f}}_{{S}_{1}} + 2{\mathbf{f}}_{{S}_{2}}$ and, hence, $n > {\mathbf{f}}_{{S}_{1}} + {\mathbf{f}}_{{S}_{2}}$.
Now, we prove the upper bounds on the expected number of cluster-sending steps for $\operatorname{CSPL}\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2},v,{\Phi }_{\min }}\right)$ with $\min \left( {{\mathbf{n}}_{{\mathcal{C}}_{1}},{\mathbf{n}}_{{\mathcal{C}}_{2}}}\right) > 2\max \left( {{\mathbf{f}}_{{\mathcal{C}}_{1}},{\mathbf{f}}_{{\mathcal{C}}_{2}}}\right)$. By Proposition 5.3, the expected number of cluster-sending steps is upper bounded by $\mathbb{E}\left( {n,{\mathbf{f}}_{{S}_{1}},{\mathbf{f}}_{{S}_{2}}}\right)$. In the worst case, we have $n = {2f} + 1$ with $f = {\mathbf{f}}_{{S}_{1}} = {\mathbf{f}}_{{S}_{2}}$. Hence, the expected number of cluster-sending steps is upper bounded by $\mathbb{E}\left( {{2f} + 1,f,f}\right)$, $f \geq 0$, which simplifies to $\mathbb{E}\left( {{2f} + 1,f,f}\right) = 4 - 2/\left( {f + 1}\right) - f{!}^{2}/\left( {2f}\right) !$. Hence, for all ${S}_{1}$ and ${S}_{2}$, we have $\mathbb{E}\left( {n,{\mathbf{f}}_{{S}_{1}},{\mathbf{f}}_{{S}_{2}}}\right) < 4$. An analogous argument can be used to prove the other upper bounds.
Note that the third case of Theorem 5.7 corresponds to cluster-sending between arbitrary-sized resilient clusters that each operate using Byzantine fault-tolerant consensus protocols.
Remark 5.8. The upper bounds on the expected-case complexity of instances of CSPL presented in Theorem 5.7 match the upper bounds for CSP presented in Corollary 4.4. This does not imply that the expected-case complexity for these protocols is the same, however, as the probability distributions that yield these expected-case complexities are very different. To see this, consider a system in which all clusters have $n$ replicas of which $f$, with $n = {2f} + 1$, are faulty. Next, we denote the expected number of cluster-sending steps of protocol $P$ by ${\mathbf{E}}_{P}$, and we have

$$
{\mathbf{E}}_{\mathrm{{CSP}}} = \frac{{\left( 2f + 1\right) }^{2}}{{\left( f + 1\right) }^{2}} = 4 - \frac{{4f} + 3}{{\left( f + 1\right) }^{2}};
$$

$$
{\mathbf{E}}_{\mathrm{{CSPL}}} = \mathbb{E}\left( {{2f} + 1,f,f}\right) = 4 - \frac{2}{f + 1} - \frac{f{!}^{2}}{\left( {2f}\right) !}.
$$
In Figure 6, we have illustrated this difference by plotting the expected-case complexity of CSP and CSPL for systems with equal-sized clusters. In practice, we see that the expected-case complexity for CSP is slightly lower than the expected-case complexity for CSPL.
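The two closed forms are easy to tabulate; the short Python sketch below reproduces the trend plotted in Figure 6:

```python
from math import factorial

def e_csp(f):
    """Expected cluster-sending steps of CSP for n = 2f + 1."""
    return (2 * f + 1) ** 2 / (f + 1) ** 2

def e_cspl(f):
    """Expected cluster-sending steps of CSPL for n = 2f + 1."""
    return 4 - 2 / (f + 1) - factorial(f) ** 2 / factorial(2 * f)

for f in (1, 2, 4, 8, 16):
    print(f, round(e_csp(f), 3), round(e_cspl(f), 3))
# For f = 1 this prints 2.25 and 2.5; both bounds approach 4 as f
# grows, with CSP staying slightly below CSPL.
```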
§ 6 ASYNCHRONOUS COMMUNICATION
In the previous sections, we introduced CSP, CSPP, and CSPL, three probabilistic cluster-sending protocols with expected constant message complexity. To simplify presentation, we have presented their design with respect to a synchronous environment. Next, we consider their usage in environments with asynchronous inter-cluster communication due to which messages can get arbitrarily delayed, duplicated, or dropped.
Figure 6: Comparison of the expected-case complexity of CSPL and CSP as a function of the number of faulty replicas.
We notice that the presented protocols only depend on synchronous communication to minimize communication: at the core of the correctness of CSP, CSPP, and CSPL is the cluster-sending step performed by CS-STEP, which does not make any assumptions on communication (Proposition 3.1). Consequently, CSP, CSPP, and CSPL can easily be generalized to operate in environments with asynchronous communication:
1. First, we observe that message duplication and out-of-order delivery have no impact on the cluster-sending step performed by CS-STEP. Hence, we do not need to take precautions against such asynchronous behavior.
2. If communication is asynchronous but reliable (messages do not get lost, but can get duplicated, be delivered out-of-order, or get arbitrarily delayed), then both CSPP and CSPL will always be able to perform cluster-sending in a finite number of steps. If communication becomes unreliable, however, messages sent between non-faulty replicas can get lost and all cluster-sending steps can fail. To deal with this, replicas in ${\mathcal{C}}_{1}$ simply continue cluster-sending steps until a step succeeds (CSP) or rerun the protocol until a step succeeds (CSPP and CSPL), which will eventually happen in an expected constant number of steps whenever communication becomes reliable again.
3. If communication is asynchronous, then messages can get arbitrarily delayed. Fortunately, practical environments operate with large periods of reliable communication in which the majority of the messages arrive within some bounded delay unknown to ${\mathcal{C}}_{1}$ and ${\mathcal{C}}_{2}$. Hence, replicas in ${\mathcal{C}}_{1}$ can simply assume some delay $\delta$. If this delay is too short, then a cluster-sending step can appear to fail simply because the proof of receipt is still under way. In this case, cluster-sending will still be achieved when the proof of receipt arrives, but spurious cluster-sending steps can be initiated in the meantime. To reduce the number of such spurious cluster-sending steps, all non-faulty replicas in ${\mathcal{C}}_{1}$ can use exponential backoff to increase the message delay $\delta$ up to some reasonable upper bound (e.g., 100 s); see the sketch after this list.
4. Finally, asynchronous environments often necessitate rather high assumptions on the message delay $\delta$. Consequently, the duration of a single failed cluster-sending step performed by CS-STEP will be high. Here, a trade-off can be made between message complexity and duration by starting several rounds of the cluster-sending step at once. E.g., when communication is sufficiently reliable, all three protocols are expected to finish in four rounds or fewer, due to which starting four rounds initially will sharply reduce the duration of the protocol with only a constant increase in expected message complexity.
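A minimal Python sketch of the retry loop from the third point (all names hypothetical: `cs_step_async` fires one cluster-sending step and `await_proof` waits for consensus on the proof of receipt):

```python
def cluster_send_with_backoff(pairs, cs_step_async, await_proof,
                              delta=0.5, max_delta=100.0):
    """Retry cluster-sending steps under an assumed message delay delta,
    doubling delta (exponential backoff) up to a cap on every failure."""
    i = 0
    while True:
        r1, r2 = pairs[i % len(pairs)]
        cs_step_async(r1, r2)              # fire one cluster-sending step
        if await_proof(timeout=delta):     # proof of receipt in time?
            return
        delta = min(2 * delta, max_delta)  # assume a larger delay next time
        i += 1
```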
§ 7 PERFORMANCE EVALUATION
In the previous sections, we introduced probabilistic cluster-sending protocols with expected-case constant message complexity. To gain further insight into the performance attainable by these protocols, especially in environments with unreliable communication, we implemented these protocols in a simulated sharded resilient environment that allows us to control the faulty replicas and the message loss rates. ${}^{1}$ As a baseline of comparison, we also evaluated three cluster-sending protocols from the literature:
1. The cluster-sending protocol PBS-CS of Hellings et al. [17] that performs cluster-sending using only ${\mathbf{f}}_{{\mathcal{C}}_{1}} + {\mathbf{f}}_{{\mathcal{C}}_{2}} + 1$ messages, which is worst-case optimal. This protocol requires reliable communication.
2. The broadcast-based cluster-sending protocol of CHAINSPACE [1] that can perform cluster-sending using ${\mathbf{n}}_{{\mathcal{C}}_{1}}{\mathbf{n}}_{{\mathcal{C}}_{2}}$ messages. This protocol requires reliable communication.
3. The global sharing protocol of GEOBFT [15], an optimistic cluster-sending protocol that assumes that each cluster uses a primary-backup consensus protocol (e.g., PBFT [6]) and optimizes for the case in which the coordinating primary of ${\mathcal{C}}_{1}$ is non-faulty. In this optimistic case, GEOBFT can perform cluster-sending using only ${\mathbf{f}}_{{\mathcal{C}}_{2}} + 1$ messages. To deal with faulty primaries and unreliable communication, GEOBFT employs a costly remote view-change protocol, however.
Figure 7: A comparison of the number of message exchange steps as a function of the number of faulty replicas in both clusters by our probabilistic cluster-sending protocols CSP, CSPP, and CSPL, and by three protocols from the literature. For each protocol, we measured the number of message exchange steps to send 10000 values between two equally-sized clusters, each cluster having $n = 3\mathbf{f} + 1$ replicas. ${}^{ \dagger }$ The results for GEOBFT are a plot of the best-case optimistic phase of that protocol.
We refer to Figure 2 for an analytical comparison between these three cluster-sending protocols and our three probabilistic cluster-sending protocols.
In each experiment, we measured the number of messages exchanged in 10000 runs of the cluster-sending protocol under consideration. Specifically, in each run we measure the number of messages exchanged when sending a value $v$ from a cluster ${\mathcal{C}}_{1}$ to a cluster ${\mathcal{C}}_{2}$ with ${\mathbf{n}}_{{\mathcal{C}}_{1}} = {\mathbf{n}}_{{\mathcal{C}}_{2}} = 3{\mathbf{f}}_{{\mathcal{C}}_{1}} + 1 = 3{\mathbf{f}}_{{\mathcal{C}}_{2}} + 1$, and we aggregate this data over the 10000 runs. As we use equal-sized clusters, we have ${\Phi }_{\min }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right) = {\Phi }_{\max }\left( {{\mathcal{C}}_{1},{\mathcal{C}}_{2}}\right)$ and, hence, we use a single instance of CSPL.
Next, we detail the two experiments we performed and look at their results.
§ 7.1 PERFORMANCE OF CLUSTER-SENDING PROTOCOLS
In our first experiment, we measure the number of messages exchanged as a function of the number of faulty replicas. In this case, we assumed reliable communication, due to which we could include all six protocols. The results of this experiment can be found in Figure 7.
As is clear from the results, our probabilistic cluster-sending protocols are able to perform cluster-sending with only a constant number of messages exchanged. Furthermore, we see that the performance of our cluster-sending protocols matches the theoretical expected-case analysis in this paper and closely follows the expected performance illustrated in Figure 6 (note that Figure 6 plots cluster-sending steps and each cluster-sending step involves the exchange of two messages between clusters).
${}^{1}$ The full implementation of this experiment is available at anonymized.
As all other cluster-sending protocols have a linear (PBS-CS and GEOBFT) or quadratic (CHAINSPACE) message complexity, our probabilistic cluster-sending protocols outperform them. This is especially the case when dealing with bigger clusters, in which case the expected-case constant message complexity of our probabilistic cluster-sending protocols shows the biggest advantage. Only in the case of the smallest clusters can the other cluster-sending protocols outperform our probabilistic cluster-sending protocols, as PBS-CS, GEOBFT, and CHAINSPACE use reliable communication to their advantage to eliminate any acknowledgment messages sent from the receiving cluster to the sending cluster. We believe that the slightly higher cost of our probabilistic cluster-sending protocols in these cases is justified, as our protocols can effectively deal with unreliable communication.
|
| 569 |
+
|
| 570 |
+
§ 7.2 MESSAGE LOSS
|
| 571 |
+
|
| 572 |
+
In our second experiment, we measure the number of messages exchanged as a function of the number of faulty replicas and as a function of the message loss (in percent) between the two clusters. We assume that communication within each cluster is reliable. In this case, we only included our probabilistic cluster-sending protocols as PBS-CS and CHAINSPACE both assume reliable communication and GEOBFT is only able to perform recovery via remote view-changes in periods of reliable communication. The results of this experiment can be found in Figure 8.
|
| 573 |
+
|
| 574 |
+
We note that with a message loss of $x\%$, the probability $p\left( x\right)$ of a successful cluster-sending step is only ${\left( 1 - \frac{x}{100}\right) }^{2}$, e.g., $p\left( {{30}\% }\right) = {0.49}$. As expected, the message complexity increases with an increase in message loss. Furthermore, the probabilistic cluster-sending protocols perform as expected (when taking into account the added cost to deal with message loss). These results further underline the practical benefits of each of the probabilistic cluster-sending protocols, especially for larger clusters: even in the case of high message loss rates, each of our probabilistic cluster-sending protocols is able to outperform the cluster-sending protocols PBS-CS, CHAINSPACE, and GEOBFT, which can only operate with reliable communication.
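For completeness, the expected number of cluster-sending steps per value under this loss model follows from a standard geometric-distribution argument over $p(x)$:

$$\mathbb{E}[\text{steps}] \;=\; \sum_{k \geq 1} k\, p(x)\bigl(1 - p(x)\bigr)^{k-1} \;=\; \frac{1}{p(x)}.$$

For $x = 30\%$, this yields $1/0.49 \approx 2.04$ steps and hence roughly $4.08$ messages per value, as each step involves two messages.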
|
| 575 |
+
|
| 576 |
+
§ 8 RELATED WORK
|
| 577 |
+
|
| 578 |
+
Although there is abundant literature on distributed systems and on consensus-based resilient systems (e.g., [2, 5, 8, 14, 16, 27, 31]), there is only limited work on communication between resilient systems [1, 15, 17]. In the previous section, we have already compared CSP, CSPP, and CSPL with the worst-case optimal cluster-sending protocols of Hellings et al. [17], the optimistic cluster-sending protocol of GEOBFT [15], and the broadcast-based cluster-sending protocols of CHAINSPACE [1]. Furthermore, we note that cluster-sending can be solved using well-known Byzantine primitives such as consensus, interactive consistency, and Byzantine broadcasts [6, 9, 24]. These primitives are much more costly than cluster-sending protocols, however, and require substantial communication between all involved replicas.
|
| 579 |
+
|
| 580 |
+
In parallel to the development of traditional resilient systems and permissioned blockchains, there has been promising work on sharding in permissionless blockchains such as BITCOIN [25] and ETHEREUM [32]. Examples include techniques for enabling reliable cross-chain coordination via sidechains, blockchain relays, atomic swaps, atomic commitment, and cross-chain deals [12, 13, 19, 21, 22, 33, 34]. Unfortunately, these techniques are designed with the goals of permissionless blockchains in mind (e.g., they are cryptocurrency-oriented) and are not readily applicable to traditional consensus-based Byzantine clusters.
|
| 581 |
+
|
| 582 |
+
§ 9 CONCLUSION
|
| 583 |
+
|
| 584 |
+
In this paper, we presented probabilistic cluster-sending protocols that each provide highly efficient solutions to the cluster-sending problem. Specifically, our probabilistic cluster-sending protocols can facilitate communication between Byzantine fault-tolerant clusters with expected constant communication between clusters. For practical environments, our protocols support worst-case linear communication between clusters, which is optimal, and deal with asynchronous and unreliable communication. The low practical cost of our cluster-sending protocols further enables the development and deployment of high-performance systems that are constructed out of Byzantine fault-tolerant clusters, e.g., fault-resilient geo-aware sharded data processing systems.
|
papers/JSYS/JSYS 2022/JSYS 2022 May_Papers/HyUoiQKimL2/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
papers/JSYS/JSYS 2022/JSYS 2022 May_Papers/HyUoiQKimL2/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/-0sywUv8ryL/Initial_manuscript_md/Initial_manuscript.md
ADDED
|
@@ -0,0 +1,685 @@
| 1 |
+
# SOK: VIRTUALIZATION CLASSIFICATION ON ISOLATION CAPABILITIES
|
| 2 |
+
|
| 3 |
+
Anonymous authors
|
| 4 |
+
|
| 5 |
+
Paper under double-blind review
|
| 6 |
+
|
| 7 |
+
## Abstract
|
| 8 |
+
|
| 9 |
+
Within the Linux ecosystem, hypervisor and container-based virtualization are the two most prevalent and well-known server virtualization approaches. As is often the case, the choice is much more complex than a binary decision between those distinct approaches. Recently emerging technologies, concepts and approaches have greatly diversified the "server virtualization landscape". For example, the enabling concepts of container-based virtualization are ever-changing and improve with every upcoming kernel release. Moreover, novel sandbox-based approaches leverage traditional and recent Operating System (OS) functionality to intercept system calls for their isolation needs. Hybrid systems utilize classic hypervisors to run purpose-built unikernels that, in turn, host container-based virtualization within themselves.
|
| 10 |
+
|
| 11 |
+
In this work, we present an approach to classify virtualization aspects by their isolation capability. For this purpose, we decompose them into their respective enabling components and describe them in detail. Finally, we present a multi-level classification of server virtualization. This classification aims to enable a quick assessment of virtualization technologies and their induced implications.
|
| 12 |
+
|
| 13 |
+
## 1 Introduction
|
| 14 |
+
|
| 15 |
+
The isolation capabilities of virtualization technologies pose a challenge for many researchers, businesses and service providers alike. The isolation among processes, containers, Virtual Machines (VMs) or other containing units is significant for a number of reasons. (i) Researchers aim for isolated experiments, without interference from unintentional foreign noise caused by other tenants. (ii) Businesses strive for the best possible infrastructure division while maintaining Service Level Agreements and maximizing profit. (iii) Service providers want to consolidate their infrastructure to keep the total cost of ownership as low as possible. Naturally, poor isolation would negatively impact all the use cases above. These demands towards isolation are addressed by virtualization technologies.
|
| 16 |
+
|
| 17 |
+
Since its emergence in the 1960s, virtualization has been ever-changing. Starting from experiments with time-sharing systems on mainframes [18], it has evolved into a broad landscape of technologies. These technologies are an integral part of the business models of many major organizations. Today, the application domains of virtualization are vast and the incentives for their adoption are manifold.
|
| 18 |
+
|
| 19 |
+
This is particularly true for cloud computing and the direction in which it is progressing. Areas of special research interest in recent years include the Internet of Things domain, fog computing, and edge computing. An encompassing term for these fields is the "Cloud-To-Thing continuum" [41]. Tenants in these environments typically compete for resources for a variety of reasons, such as overbooking or arbitrary co-location [61]. Other emerging cloud computing models like Function-as-a-Service offerings also leverage virtualization to a great extent [50]. As Raza et al. further describe, they have complex demands for resource isolation, but also non-functional requirements like a fast cold start and low performance overhead. It is an essential requirement for virtualization software to be able to isolate them sufficiently.
|
| 20 |
+
|
| 21 |
+
Cloud computing and related domains are not the only fields where resource contention among tenants happens. In fact, distinct tenants on infrastructure are not necessarily distinct persons or customers. A simple but very common use case is the demand to subdivide existing server hardware to improve its utilization [34]. For example, a company might operate a server with database software that it is not able to fully utilize. This could be due to workload specifics or be imposed by the database software architecture itself. These underutilized resources could be used to operate another database system for another project, a scale-out, or something completely different as a result of server consolidation [10, 13]. Incentives therefore could be better energy efficiency [38] and reduced total cost of ownership [34]. The trend towards the decomposition of monolithic applications, and thus the enabling of distribution as well as consolidation of application components, has further diversified the virtualization landscape. This microservices pattern, as described by Fowler and Lewis [27], is certainly widely applied in industry and research [57]. What is important, though, is sufficient isolation among those applications, so that they do not negatively impact each other.
|
| 22 |
+
|
| 23 |
+
Besides the business-oriented use cases, High Performance Computing (HPC) data centres, and in consequence researchers utilizing them, greatly benefit from the possibilities of virtualization. Advances in virtualization techniques are evaluated and frequently applied within these centres [29, 53, 65]. While they usually conclude that native, non-virtualized execution of experiments yields higher performance, this gap is becoming smaller. In some cases the non-performance-related aspects and the convenience of virtualization can outweigh the raw performance. Projects like Singularity${}^{1}$, for example, aim to provide reproducible environments for HPC experiments built upon virtualization features of the Linux kernel [37].
|
| 24 |
+
|
| 25 |
+
Even though all these application domains are highly relevant and represent a multitude of research areas, publications utilizing virtualization technologies often neglect the details of their respective implementations [39, 49]. Even within a seemingly narrow category like container-based virtualization, implementation details make a huge difference regarding aspects like performance overhead and degree of isolation.
|
| 26 |
+
|
| 27 |
+
This paper follows a systematic approach in analysing virtualization technologies. We therefore review existing technologies and deconstruct them into their isolation-enabling technologies. Along this perspective, we aim to provide a multi-level classification of virtualization technologies. This classification enables an informed decision on which technology to choose and what to expect.
|
| 28 |
+
|
| 29 |
+
To provide a holistic view on the enabling aspects of virtualization technologies, we make the following contributions towards a classification based on their isolation capabilities:
|
| 30 |
+
|
| 31 |
+
- Virtualization Technology Categorization: We categorize virtualization technologies into three distinct categories: hypervisor-based, container-based and sandbox-based.
|
| 32 |
+
|
| 33 |
+
- Elaboration on Virtualization Enablers: For each virtualization category, we highlight its virtualization-enabling aspects. These are integrated into the classification as subsidiaries.
|
| 34 |
+
|
| 35 |
+
- Presentation of Dynamic Taxonomy: Based on the categories and virtualization enablers, we present a multi-level taxonomy. We further introduce a cross-sectional hybrid-based approach that combines aspects of the previously established categories and thus integrates possible future developments.
|
| 36 |
+
|
| 37 |
+
The remainder of the paper is structured as follows: section 2 presents important background knowledge, frequently referred to in upcoming sections. This includes Linux fundamentals that describe essential levers for virtualization. Section 3 then presents a methodology on how the actual virtualization technology classification is pursued. This is followed by the implementation of said method in section 4. Based on the resulting classification, a brief overview of existing and widely adopted virtualization technologies is given in section 6. Within this section, said technologies are aligned to that classification, followed by a short discussion. Afterwards, a review of related work is conducted in section 7. Finally, a conclusion is drawn in section 8, including a brief general discussion as well as some thoughts on possible future work.
|
| 38 |
+
|
| 39 |
+
## 2 Background
|
| 40 |
+
|
| 41 |
+
This section briefly presents some Linux OS-specific fundamentals that tightly interact with virtualization concepts. We therefore highlight how kernel interaction happens and how processes and memory are managed. Moreover, a short description of how the I/O devices disk and network are interfaced follows. All these resources are leveraged by virtualization approaches as described in the upcoming sections.
|
| 42 |
+
|
| 43 |
+
Linux kernels are monolithic kernels. They manage Central Processing Unit (CPU) scheduling, memory, file systems, network protocols and system devices. Kernels are typically depicted as layered ring graphs, as shown in fig. 1. Notable here is that applications are able to directly execute system calls or use an indirection via system libraries like libc${}^{2}$.
|
| 44 |
+
|
| 45 |
+
System calls act as levers for applications to transit from user to kernel space. Further, the kernel provides an interface to the hardware, which in turn is interfaced via system calls again.
|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
|
| 49 |
+
Figure 1: Linux Kernel
|
| 50 |
+
|
| 51 |
+
Based on this model, a distinction is made between (i) kernel mode and (ii) user mode. These special CPU modes provide distinct privileges to executed code. Executions within the (i) kernel mode are granted full access to devices and other privileged instructions, whereas user programs run in (ii) user mode. Execution in user mode runs unprivileged and needs to request privileges via system calls. Switching between user and kernel mode is called a "mode switch". Examples for system calls include the opening of a file with open, mapping a file to memory with mmap or creating a new process with fork.
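To make the mode switch concrete, the following minimal C program (ours; the file path is an arbitrary example) issues two of the system calls named above through their libc wrappers:

```c
#include <fcntl.h>   /* open(2) */
#include <stdio.h>
#include <unistd.h>  /* read(2), close(2) */

int main(void) {
    /* open(2) traps into the kernel: the process switches from user
       mode to kernel mode, which performs the privileged file work. */
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    char buf[64];
    ssize_t n = read(fd, buf, sizeof(buf) - 1); /* another mode switch */
    if (n > 0) { buf[n] = '\0'; printf("%s", buf); }

    close(fd);
    return 0;
}
```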
|
| 52 |
+
|
| 53 |
+
---
|
| 54 |
+
|
| 55 |
+
${}^{1}$ https://sylabs.io/singularity/
|
| 56 |
+
|
| 57 |
+
${}^{2}$ https://man7.org/linux/man-pages/man7/libc.7.html
|
| 58 |
+
|
| 59 |
+
---
|
| 60 |
+
|
| 61 |
+
Processes are the vessels for program code execution. Among other responsibilities, they manage address space, stacks and registers. Depending on the physical CPU attributes, processes can be executed in parallel, which is typically called "multitasking". They are identified by a unique Process Identifier (PID).
|
| 62 |
+
|
| 63 |
+
Processes can spawn other processes and threads. In Linux, all of these are represented by the task data structure. All tasks on a Linux system together create a tree structure with the root PID being 1.
|
| 64 |
+
|
| 65 |
+
Thus, all tasks are created by other tasks using the system calls fork(2)${}^{3}$ or clone(2)${}^{4}$. Internally, fork actually wraps clone with some specific flags. After the creation of a new task with its own PID, a system call like execve(2)${}^{5}$ replaces the task's program image with a new program. This task creation flow is visualized in fig. 2. For the remainder of this paper, the term process will be used to refer to a running Linux task with a PID.
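A minimal C sketch of this flow (ours), using the libc wrappers fork and execvp (the latter is a convenience wrapper that ultimately invokes execve):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork(); /* clones the calling task */
    if (pid == 0) {
        /* Child: replace the cloned image with a new program. */
        char *argv[] = { "echo", "hello from the child task", NULL };
        execvp("echo", argv);
        perror("execvp"); /* reached only if exec fails */
        return 127;
    }
    /* Parent: wait for the child task to terminate. */
    waitpid(pid, NULL, 0);
    printf("child %d finished\n", (int)pid);
    return 0;
}
```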
|
| 66 |
+
|
| 67 |
+

|
| 68 |
+
|
| 69 |
+
Figure 2: Task creation flow
|
| 70 |
+
|
| 71 |
+
Memory acts as storage for kernel and application instructions. Alongside them resides their respective workload data. More specifically, the "Main Memory" describes the actual physical memory of the system, commonly implemented as DRAM. It is segmented into "Pages" that typically comprise 4 or 8 Kbytes, even though there are exceptions for "Huge Pages" if the CPU supports them.
|
| 72 |
+
|
| 73 |
+
Virtual memory, on the contrary, is an abstraction of the main memory and is presented as non-contended, almost infinite memory to processes. It is only mapped to physical memory on demand by the Memory Management Unit (MMU). Thus, virtual memory can be in four different states: (i) unallocated, (ii) allocated but not yet mapped, (iii) allocated and mapped and (iv) allocated and mapped to a physical swap device.
|
| 74 |
+
|
| 75 |
+
Actually allocated and mapped memory is called "Resident Memory". The Resident Set Size (RSS) describes the total size of resident memory for a given process. This amount is of specific interest for isolation, since it is the actually contended memory resource.
|
| 76 |
+
|
| 77 |
+
The system call mmap(2)${}^{6}$ is usually leveraged to allocate virtual memory. It is up to Linux to decide when to map that allocated memory to the physical address space.
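The laziness of this mapping can be observed with a short C sketch (ours): memory becomes resident only once it is touched:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 4096; /* one typical page */
    /* Request anonymous virtual memory; no physical frame is
       assigned yet, the allocation is lazy. */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    /* The first write page-faults the page in; only now does it
       count towards the resident set size (RSS). */
    strcpy(buf, "now resident");
    printf("%s\n", buf);

    munmap(buf, len);
    return 0;
}
```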
|
| 78 |
+
|
| 79 |
+
Disk, or in particular disk I/O since disks are attached to the I/O bus, represents the access to physical storage devices. The CPU is able to directly communicate with them via this bus. Within a computing system, they are typically represented as storage devices with an automatically generated name, following a system-specific scheme. Modern disks have a capacity in the GByte or TByte range and can be accessed by the kernel and applications.
|
| 80 |
+
|
| 81 |
+
I/O operations follow a standardized protocol and mostly consist of read and write commands. An I/O operation targets a sector, which represents a small amount of storage on the physical device of typically 4 Kbytes. On top of a single disk or multiple disks, filesystems can be installed. They enable easy file-based, often tree-like access to the disks.
|
| 82 |
+
|
| 83 |
+
Like disks, network devices are also attached to the I/O bus. Again, the CPU is able to directly communicate with them via this bus. The devices are usually referred to as Network Interface Cards (NICs). Within a computing system, they are typically represented as so-called interfaces or links with a name, generated by a system-specific scheme. The card itself, or the network controller, is defined by its transmission properties, or more specifically by its maximum possible throughput. Typical throughputs of models at the time of writing are 1 Gbit/s to 100 Gbit/s. Apart from that, NICs have one or more ports to connect to other NICs or a switching/routing device. Interconnections feature multiple connector interfaces like RJ-45 or SFP variations, as well as a transmission medium like copper or fibre.
|
| 84 |
+
|
| 85 |
+
Upon the intent of sending something to another link, the payload is split into packets of a previously agreed-on size. In TCP/IP this is the "Maximum Transmission Unit (MTU)". These packets are further subdivided into nested frames depending on the applied network stack. For TCP/IP, this could be an "Ethernet Frame". These nested frames are then subsequently sent to a receiving NIC.
|
| 86 |
+
|
| 87 |
+
## 3 Methodology
|
| 88 |
+
|
| 89 |
+
In order to craft a representative and complete virtualization classification, a structured approach is necessary. Therefore, the method described below lays out the steps that need to be taken. Foremost, a disambiguation of terms within the virtualization domain is important.
|
| 90 |
+
|
| 91 |
+
---
|
| 92 |
+
|
| 93 |
+
${}^{3}$ https://man7.org/linux/man-pages/man2/fork.2.html
|
| 94 |
+
|
| 95 |
+
${}^{4}$ https://man7.org/linux/man-pages/man2/clone.2.html
|
| 96 |
+
|
| 97 |
+
${}^{5}$ https://man7.org/linux/man-pages/man2/execve.2.html
|
| 98 |
+
|
| 99 |
+
${}^{6}$ https://man7.org/linux/man-pages/man2/mmap.2.html
|
| 100 |
+
|
| 101 |
+
---
|
| 102 |
+
|
| 103 |
+
The term virtualization itself is rather broad and there is no general agreement on it across its applied domains. Many aspects and resource types of computer systems can be virtualized. This ranges from the virtualization of full servers, over specific resources, towards certain aspects of applications. Within this paper, the focus lies clearly on the virtualization of servers or "server virtualization". While other aspects may be part of it, only technologies and approaches towards this goal will be considered. Here, server virtualization is defined as Ameen and Hamo [7] put it:
|
| 104 |
+
|
| 105 |
+
Definition 1 (Server virtualization). Server virtualization is the ability to run many operating systems with isolation and independences on other operating system.
|
| 106 |
+
|
| 107 |
+
Based on this constraint, a comprehensive literature review is performed to lay out a possible server virtualization classification. This process starts with a very broad categorization and tries to narrow technologies down until sufficient distinction among them can be achieved. This criterion is met once the enabling technologies are identified.
|
| 108 |
+
|
| 109 |
+
The enabling technologies are investigated in detail in order to understand how they create isolation and what the implications are. These identified fundamental technologies act as a specific background and are presented as such in section 2.
|
| 110 |
+
|
| 111 |
+
To begin with, the generally agreed-on coarse classification of related literature will be used as a baseline. It agrees on two distinct virtualization categories [20, 52, 54, 60], namely (i) Hypervisor-based and (ii) Container-based. These two categories and newly determined ones are further described during the remainder of section 4.
|
| 112 |
+
|
| 113 |
+
## 4 Virtualization Technology Classification
|
| 114 |
+
|
| 115 |
+
This section investigates possibilities to classify virtualization approaches. To begin with, it provides an overview that presents a quick glance at the resulting classification in section 4.1. Along this broad classification, each class is further analysed and investigated, including its virtualization-enabling components.
|
| 116 |
+
|
| 117 |
+
### 4.1 Overview
|
| 118 |
+
|
| 119 |
+
The anticipated classification is visualized in fig. 3. This classification acts as an overview and is derived from a broad literature research as described in the following sections. Precisely, the process to incrementally compose this figure is described by stepping through these classes. Arbitrarily starting from left to right, these are the three virtualization classes hypervisor, container and sandbox. Moreover, a fourth one named hybrid is part of this figure to indicate that there are virtualization technology implementations that share characteristics of all classes.
|
| 120 |
+
|
| 121 |
+
### 4.2 Hypervisor-based
|
| 122 |
+
|
| 123 |
+
Like virtualization in general, Hypervisor or Virtual Machine Monitor (VMM) systems have been around since the 1960s. During that time IBM had a huge impact on its development [19]. VMMs create an abstract layer between the hardware and nested OSs running on the same hardware. Resources of the host like CPU, memory, disk and network can be individually and dynamically attached to them. These OSs run with virtualized hardware and therefore instantiate "VMs". This term for hypervisor-based virtual servers is used from now on.
|
| 124 |
+
|
| 125 |
+
The following sections will briefly elaborate on various types of hypervisors in order to distinguish them. Further, a short discussion about how they achieve isolation follows. Within a short closing discussion, an initial iteration of the virtualization taxonomy is formed.
|
| 126 |
+
|
| 127 |
+
#### 4.2.1 Architecture Types
|
| 128 |
+
|
| 129 |
+
Goldberg, who was one of the most prominent researchers in the virtualization domain, subdivided hypervisor-based virtualization into two categories: Type-1 and Type-2 [28]. The main distinction among them is whether the hypervisor runs directly on the hardware or on top of another OS. Figure 4 illustrates that difference.
|
| 130 |
+
|
| 131 |
+
#### 4.2.2 Hardware abstraction levels
|
| 132 |
+
|
| 133 |
+
While these two distinctions categorize hypervisors, further significant properties can be found. Hwang et al. [33] describe some by highlighting three approaches on how the actual virtualization layer can be provided. These namely are (i) Full Virtualization, (ii) Paravirtualization and (iii) Hardware Assisted (HWA) Virtualization. These will be briefly discussed in the following.
|
| 134 |
+
|
| 135 |
+
(i) Full virtualization aims to run any OS and kernel, independent of its own physical system. No modifications to the guest system are necessary. With this approach, the host's and the guest's kernel and even their processor architectures can differ. This goal is achieved by binary translation and emulation, depending on its implementation [6, 7]. Hereby, every device presented to the guest system is fully virtualized and created by the hypervisor. This for example includes CPU, mainboard, memory and NIC. If applicable, the translation between the virtual devices within the guest system and the actual physical devices on the host system is done by the combination of guest and host drivers, managed by the hypervisor.
|
| 136 |
+
|
| 137 |
+

|
| 138 |
+
|
| 139 |
+
Figure 3: Virtualization Classification Overview
|
| 140 |
+
|
| 141 |
+

|
| 142 |
+
|
| 143 |
+
Figure 4: Hypervisor architectures
|
| 144 |
+
|
| 145 |
+
(ii) Paravirtualization aims to minimize the overhead that the virtualization of hardware brings [14]. It does so by providing and leveraging a special abstraction layer. This layer can be utilized by the VM to run privileged system calls on the hardware rather than in its own virtualized domain. These are also called "hypercalls". However, to achieve this, the guest OS has to be adapted and aware of these hypercalls. Depending on the implementation and configuration, the performance benefit can be significant [25].
|
| 146 |
+
|
| 147 |
+
(iii) Hardware-assisted virtualization is another way to reduce the performance impact of full virtualization. This technique came forward with the development of processor features dedicated to virtualization [15]. These features allow the trapping of certain calls without the need for binary translation or paravirtualization. While both full and paravirtualization can benefit from hardware-assisted virtualization, it can still be seen as a distinct category, since vendors decide whether to use that feature or implement it themselves within their hypervisor [62].
|
| 148 |
+
|
| 149 |
+
These approaches are not mutually exclusive, and specific implementations might apply different combinations or degrees of adaptation. However, these choices have significant impact on isolation characteristics, as mentioned in section 5.4.
|
| 150 |
+
|
| 151 |
+
#### 4.2.3 Classification Impact
|
| 152 |
+
|
| 153 |
+
To summarize, the following taxonomy for hypervisor-based systems is crafted. However, while implementations that represent instances within this taxonomy share common isolation characteristics, specific implementation details impact the factual isolation. Figure 5 illustrates this taxonomy in a small tree-like structure. Since hypervisor types and their means to provision virtualization are not mutually exclusive, every possible combination has to be representable.
|
| 154 |
+
|
| 155 |
+

|
| 156 |
+
|
| 157 |
+
Figure 5: Hypervisor taxonomy
|
| 158 |
+
|
| 159 |
+
### 4.3 Container-based
|
| 160 |
+
|
| 161 |
+
Containers, or more specifically within the context of this paper "Linux containers", are isolated processes on a Linux system that have their own view on most system resources. In contrast to VMs, they do not utilize a hypervisor, but in consequence share the host kernel. However, since they are able to provide virtual servers including resource isolation, they are included within the virtualization taxonomy.
|
| 162 |
+
|
| 163 |
+
This section outlines distinct characteristics of container-based virtualization based on the technologies applied. First, isolation targets and their relation to the technologies highlighted in section 2 are presented. Following up, the architecture of a typical container engine is discussed. Finally, an extension of the previously mentioned taxonomy from section 4.2.1 is proposed.
|
| 164 |
+
|
| 165 |
+
#### 4.3.1 Isolation targets
|
| 166 |
+
|
| 167 |
+
Compared to hypervisor-based virtualization, container-based virtualization is fundamentally different. It does not allow for full virtualization, such as the usage of an arbitrary kernel or a different CPU architecture. It also does not make any use of paravirtualization, since no device or component is emulated. Furthermore, hardware-assisted virtualization is neither possible nor necessary. Container-based virtualization solely makes use of the features of the host OS. However, the goals and use cases for both approaches overlap to a certain degree, namely the provisioning of virtual servers [60]. Container-based virtual servers are called "containers" from now on.
|
| 168 |
+
|
| 169 |
+
There are no virtual devices being presented to the virtual server as in hypervisor-based virtualization, since no emulation and binary translation is taking place. Instead, Linux kernel features are used to limit access, view and utilization of the resources provided by and shared with the host. Dua et al. [23] present an overview of the aspects of resources that need to be handled by the kernel on an abstract level. More in-depth information can be found in section 2. Specifically, these are (i) process, (ii) resource, (iii) network, (iv) filesystem, (v) storage, (vi) device and (vii) capabilities.
|
| 170 |
+
|
| 171 |
+
These aspects are briefly described in the following:
|
| 172 |
+
|
| 173 |
+
(i) Process isolation creates a limited view of the process tree from the perspective of the container. All processes within the container are branched off a new process with the PID 1. This PID and its underlying tree are also visible from the host, but with different PIDs dependent on previous process state. This aspect is realized by using namespaces as described in section 5.1, more specifically, PID namespaces.
|
| 174 |
+
|
| 175 |
+
(ii) Resource limitation affects all typically used resources of a server. This includes CPU shares, memory, disk I/O and net I/O. Access to those can be limited and isolated dependent on the applied virtualization technique. This aspect is realized by using cGroups as described in section 5.2.
|
| 176 |
+
|
| 177 |
+
(iii) Network interface isolation is separate from the actual possible utilization of a device. The container needs its own network stack and only sees configuration affecting it directly. This aspect is realized by using namespaces as described in section 5.1, more specifically, network namespaces.
|
| 178 |
+
|
| 179 |
+
(iv) Filesystem tree isolation provides containers with their own root filesystem to not interfere with the host. Files, installed packages and configurations of the host are invisible to the container client, if not explicitly configured differently. This enables the installation of packages and changing of configurations without interference. This aspect is realized by using mount namespaces as described in section 5.1.
|
| 180 |
+
|
| 181 |
+
(v) Storage isolation gives containers their own storage area for any kind of state. This could be a mounted filesystem externally managed by the host. Apart from simple bind mounts, container engines frequently leverage more sophisticated storage engines to provide containers with filesystems. These range from overlay filesystems promising maintainability benefits while suffering from performance issues [45], to clustered ones like Ceph [68] where isolation is completely handled out of system. In simple cases this aspect is realized by using mount namespaces as described in section 5.1. More sophisticated approaches are directly offered by the container engine.
|
| 182 |
+
|
| 183 |
+
(vi) Device isolation makes containers aware of specific devices on the host system. Specific ones like Intelligent Platform Management Interface (IPMI), Graphics Processing Units (GPU) or disks can be made available to the container. This aspect is realized by using namespaces as described in section 5.1. More specifically, mount namespaces.
|
| 184 |
+
|
| 185 |
+
(vii) Capabilities describe which kind of operations the processes within the container are allowed to execute. These include operations like mounting a filesystem or binding to a network device. This aspect is realized by Linux capabilities as described in section 5.3.
|
| 186 |
+
|
| 187 |
+
#### 4.3.2 Example architecture
|
| 188 |
+
|
| 189 |
+
Linux offers many levers to enact the actual isolation of all the aspects described above. Namespaces provide the necessary isolation mechanisms, cGroups regulate limits on resource utilization and Capabilities grant required permissions.
|
| 190 |
+
|
| 191 |
+
Figure 6 shows a superficial and slightly simplified container engine architecture, using Docker${}^{7}$ as an example. This architecture, however, can be easily adapted to other classic container engines that also make use of the three mechanisms mentioned above [5, 35]. The following will briefly discuss the elements of the Docker architecture.
|
| 192 |
+
|
| 193 |
+
---
|
| 194 |
+
|
| 195 |
+
${}^{7}$ https://www.docker.com/
|
| 196 |
+
|
| 197 |
+
---
|
| 198 |
+
|
| 199 |
+

|
| 200 |
+
|
| 201 |
+
Figure 6: Docker architecture
|
| 202 |
+
|
| 203 |
+
The Docker engine itself is merely a Command Line Interface (CLI). Its primary purpose is user interaction and the convenience of bringing all the container-related features together. It therefore sensibly abstracts them to achieve appropriate usability.
|
| 204 |
+
|
| 205 |
+
Containerd ${}^{8}$ is the actual daemon process running on a host, that is interacted with by using the Docker CLI. It therefore acts as a proxy towards the actual enactment of containers via runc and storage related features.
|
| 206 |
+
|
| 207 |
+
The storage engine enables containerd to provide storage for containers. This includes the authentication at container image registries to download base images a container is created from. Moreover, it provides access to storage for state, typically called volumes. Volumes could be provided using a local overlayFS or devicemapper concept, or be consumed from an external provider by leveraging specific storage drivers. Additionally, overlayFS is typically used to merge layered data including existing images, modifications and user data. This aspect is analysed by Mizusawa et al., who find many performance benefits in that approach compared to others existing at that time [46].
|
| 208 |
+
|
| 209 |
+
Finally, runc${}^{9}$ is the component actually utilizing namespaces, cGroups and capabilities in order to create a running container.
|
| 210 |
+
|
| 211 |
+
Within the container domain, there are two important industry standards and specifications available: one being the (i) Container Runtime Interface (CRI), the other one being the (ii) Open Container Initiative (OCI) specifications. The (i) CRI defines an Application Programming Interface (API) towards the container engine; in the example above, this would be containerd. This enables container orchestrators like Kubernetes${}^{10}$ to transparently utilize different engines, as long as they are compliant with that API. The (ii) OCI, on the other hand, describes how container images are supposed to look in order to be accessed and executed independent of the actual runtime like runc.
|
| 212 |
+
|
| 213 |
+
#### 4.3.3 Classification Impact
|
| 214 |
+
|
| 215 |
+
To summarize, container-based virtualization fully depends on the degree Linux tools like namespaces, cGroups and capabilities are used. Moreover, storage is often handled outside the container perspective and is merely mounted into the respective namespace. This extends the taxonomy shown in fig. 5 as highlighted in fig. 7.
|
| 216 |
+
|
| 217 |
+

|
| 218 |
+
|
| 219 |
+
Figure 7: Container taxonomy
|
| 220 |
+
|
| 221 |
+
### 4.4 Sandbox-based
|
| 222 |
+
|
| 223 |
+
While most containerization technologies make use of the same Linux kernel fundamentals, there are some emerging technologies that pursue a different route. In order to better distinguish these from hypervisor and container-based virtualization, they will be called sandbox-based from now on. While this term is not yet established, it can be found among popular implementations of this approach.
|
| 224 |
+
|
| 225 |
+
#### 4.4.1 Concept
|
| 226 |
+
|
| 227 |
+
Sandboxes can be created by utilizing system call filtering provided by the kernel. Linux offers several mechanisms to do so. More background information on system call filtering, and thus sandbox creation, is presented in section 5.5.
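As a first taste of such filtering, the sketch below (ours) uses seccomp's strict mode, one of the kernel mechanisms in question; after the prctl call, only read, write, _exit and sigreturn remain permitted:

```c
#include <fcntl.h>
#include <linux/seccomp.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <unistd.h>

int main(void) {
    /* Enter strict seccomp mode: any system call other than read(2),
       write(2), _exit(2) and sigreturn(2) now kills the process. */
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) != 0) {
        perror("prctl");
        return 1;
    }

    const char msg[] = "write(2) still works inside the sandbox\n";
    write(STDOUT_FILENO, msg, sizeof(msg) - 1);

    open("/etc/hostname", O_RDONLY); /* filtered: delivers SIGKILL */
    return 0;                        /* never reached */
}
```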
|
| 228 |
+
|
| 229 |
+
These kinds of containers may still use all the principles highlighted in section 4.3, but are extended by the application of sandboxing methods. Wan et al. thoroughly investigate sandboxing possibilities for containers for the purpose of isolation. They implement a two-step process: they first profile and record the system calls a container executes, to limit those afterwards in a second step [66].
|
| 230 |
+
|
| 231 |
+
---
|
| 232 |
+
|
| 233 |
+
${}^{8}$ https://github.com/containerd/containerd
|
| 234 |
+
|
| 235 |
+
${}^{9}$ https://github.com/opencontainers/runc
|
| 236 |
+
|
| 237 |
+
${}^{10}$ https://kubernetes.io/
|
| 238 |
+
|
| 239 |
+
---
|
| 240 |
+
|
| 241 |
+
One representative technology of this class of container-based virtualization is Google's gVisor${}^{11}$. Their approach is to reimplement fundamental Linux functionality within user space to gain more control and thus improve isolation [70].
|
| 242 |
+
|
| 243 |
+
#### 4.4.2 Example Architecture
|
| 244 |
+
|
| 245 |
+
gVisor offers two operational modes. One is the ptrace mode discussed in this section. The other one utilizes the Kernel Virtual Machine (KVM) in order to process system calls; this approach is discussed in section 4.5. A simplified architectural image is presented in fig. 8, as given in its documentation [3]. As visible from that figure, there are two units between the application and the host: (i) Sentry and (ii) Gofer. These two and their relationship are briefly discussed in the following.
|
| 246 |
+
|
| 247 |
+

|
| 248 |
+
|
| 249 |
+
Figure 8: gVisor architecture
|
| 250 |
+
|
| 251 |
+
(i) Sentry itself implements Linux and is responsible for handling system calls. A container breaching security would only reach into Sentry and not into the host. It therefore exposes most of the Linux system calls, intercepts and reimplements them, in order to delegate them to the host.
|
| 252 |
+
|
| 253 |
+
(ii) Gofer is responsible for handling files outside of Sentry's own domain. Hence, it enables filesystem access for Sentry.
|
| 254 |
+
|
| 255 |
+
Due to the fact that many operations enacted by Sentry and Gofer are executed or proxied via user space, the performance overhead of such an approach is very high. Most operations take at least twice as long compared to traditional container-based virtualization approaches [70]. However, Young et al. also conclude that sandboxes significantly improve security and isolation. Wang et al. [67] come to a similar conclusion.
|
| 256 |
+
|
| 257 |
+
#### 4.4.3 Classification Impact
|
| 258 |
+
|
| 259 |
+
Sandbox-based virtualization is a powerful method to improve isolation, but comes with a performance penalty. The approaches it uses, most notably system call filtering, make it an important addition within the virtualization classification; it is thus added to the taxonomy. Hence, the taxonomy in fig. 7 is extended as presented in fig. 9.
|
| 260 |
+
|
| 261 |
+

|
| 262 |
+
|
| 263 |
+
Figure 9: Sandbox taxonomy
|
| 264 |
+
|
| 265 |
+
### 4.5 Emerging and Hybrid Technologies
|
| 266 |
+
|
| 267 |
+
Besides the previously mentioned hypervisor, container and sandbox-based approaches, further technologies have recently emerged. Some of them claim to combine beneficial approaches of existing technologies while minimizing their drawbacks. They often minimize choices in order to optimize and focus on details. On superficial observation, however, they are not easily placed among the previously introduced categories.
|
| 268 |
+
|
| 269 |
+
#### 4.5.1 Concept
|
| 270 |
+
|
| 271 |
+
On deeper investigation though, all these solutions make use of previously existing technologies and thus follow the same approaches. As previously noted, these have an impact on performance, security and isolation characteristics.
|
| 272 |
+
|
| 273 |
+
The combination of technologies enables vendors to make opinionated decisions on specific implementations, yielding benefits for certain scenarios. By combining, for example, hypervisor and container-based solutions, the decision for a very specific OS within the virtual machine is made possible. The kernel can be minimized to only enable necessities for container execution in order to reduce overhead, thus combining isolation capabilities of both approaches. Kata Containers${}^{12}$ is a popular implementation pursuing that concept. While isolation capabilities improve, performance is slightly degraded compared to traditional container-based virtualization using runc, for example [36]. The previously discussed gVisor also offers a so-called KVM mode, which follows a similar approach and is used as an alternative to ptrace.
|
| 274 |
+
|
| 275 |
+
---
|
| 276 |
+
|
| 277 |
+
${}^{11}$ https://gvisor.dev/
|
| 278 |
+
|
| 279 |
+
---
|
| 280 |
+
|
| 281 |
+
A slightly more sophisticated form of the combination of existing technologies are unikernel or "library operating system" based systems. With the rise of cloud computing and convenient tools within this ecosystem, they became a viable alternative to fully fledged Linux-based VMs [42, 51]. Conceptually, they compile an application down to machine-executable code able to run directly on hardware without a general-purpose OS involved. During that process, only mandatory functionality is included. The resulting image can then be booted by a machine, which usually is a virtual one. The adoption of virtual servers in this context is a key factor, since it significantly reduces the amount of hardware compatibility code necessary. However, as for the combination of virtualization approaches, this still relies on hypervisor-based virtualization and thus shares the same isolation capabilities. It does make a difference in the performance and security domain, as shown by Compastié et al. [17] with their approach towards Software-Defined Security (SDSec). IBM's implementation called Nabla${}^{13}$ is a well-known representative of this approach.
|
| 282 |
+
|
| 283 |
+
#### 4.5.2 Classification Impact
|
| 284 |
+
|
| 285 |
+
Even though hybrid approaches are not strictly a virtualization class of their own, they shall also be included within the taxonomy. What is most important, though, is the fact that any virtualization technology implementation may leverage any of the concepts highlighted within this taxonomy and described throughout this section. The resulting taxonomy is highlighted in fig. 10. Simultaneously, this figure also represents the final taxonomy and thus also includes hypervisor, container and sandbox-based virtualization.
|
| 286 |
+
|
| 287 |
+
### 4.6 Summary
|
| 288 |
+
|
| 289 |
+
This section proposes a taxonomy for virtualization technologies with respect to isolation capability. It therefore analyses existing approaches of prevalent technologies to categorize them as a first step. Those categories are (i) hypervisor, (ii) container and (iii) sandbox based ones. These are further subdivided into their enabling technologies and methods. Hence, the resulting taxonomy resembles a tree.
|
| 290 |
+
|
| 291 |
+
Within that tree, all leaf nodes are considered to be options, whereas every other node represents a dimension.
|
| 292 |
+
|
| 293 |
+
However, modern solutions have evolved in ways that utilize approaches of previously foreign domains. They do so in order to counter their own drawbacks or to optimize on different aspects. For this reason, a (iv) hybrid cross-section over all aspects, as shown in the final taxonomy of fig. 10, is necessary. In consequence, the following definition for virtualization is proposed.
|
| 294 |
+
|
| 295 |
+
Definition 2. A virtualization technology's isolation is defined by the degree of realization of option leaves within the virtualization taxonomy dimensions.
|
| 296 |
+
|
| 297 |
+
## 5 Virtualization Enablers
|
| 298 |
+
|
| 299 |
+
This section highlights the details for virtualization in relation to the virtualization classification of section 4. Hereby, we describe fundamental enabling technologies that are provided by the Linux kernel and leveraged by virtualization technologies.
|
| 300 |
+
|
| 301 |
+
### 5.1 Namespaces
|
| 302 |
+
|
| 303 |
+
Linux offers namespaces${}^{14}$ in order to isolate system-specific resources. It does so by wrapping them into an abstraction in order to present them to a process [11]. This enables processes to have completely different views of a system compared to the host system.
|
| 304 |
+
|
| 305 |
+
While this technology is an enabling one for containerization, and thus container-based virtualization, the two do not directly depend on each other: both concepts and technologies can exist without the respective other one.
|
| 306 |
+
|
| 307 |
+
All available namespaces at the time of writing are highlighted in fig. 11. They are constantly adapted and extended in order to meet new demands and solve new challenges like a proposed CPU namespace [55].
|
| 308 |
+
|
| 309 |
+
The following paragraphs will briefly describe all those namespaces. The more prominently used, and thus more important, ones will be discussed in a little more detail.
|
| 310 |
+
|
| 311 |
+
(i) cGroup namespaces${}^{15}$ enable the usage of virtualized cGroups. When applied, a process is able to define its own cGroups, while the host's cGroups are still active and protected. This allows for nesting of cGroups. For more information on cGroups in general, refer to section 5.2.
|
| 312 |
+
|
| 313 |
+
---
|
| 314 |
+
|
| 315 |
+
${}^{14}$ https://man7.org/linux/man-pages/man7/namespaces.7.html
|
| 316 |
+
|
| 317 |
+
${}^{15}$ https://man7.org/linux/man-pages/man7/cgroup_namespaces.7.html
|
| 318 |
+
|
| 319 |
+
${}^{12}$ https://katacontainers.io/
|
| 320 |
+
|
| 321 |
+
${}^{13}$ https://nabla-containers.github.io/
|
| 322 |
+
|
| 323 |
+
---
|
| 324 |
+
|
| 325 |
+

|
| 326 |
+
|
| 327 |
+
Figure 10: Virtualization Taxonomy
|
| 328 |
+
|
| 329 |
+

|
| 330 |
+
|
| 331 |
+
Figure 11: Linux Namespaces
|
| 332 |
+
|
| 333 |
+
(ii) IPC namespaces${}^{16}$ isolate Inter Process Communication (IPC) resources. These mostly refer to message queues and the usage of shared memory between processes. By applying these namespaces, processes are able to generate their own identifiers for them without inheriting those of their parent.
|
| 334 |
+
|
| 335 |
+
(iii) Network namespaces ${}^{17}$ isolate networking related resources for a process. This includes interfaces, protocol stacks, routing tables and more. In practice, virtual veth ${}^{18}$ network interfaces are created, which pair physical or other virtual interfaces to form a pipe-like tunnel. This enables the creation of a bridge between those interfaces and in consequence, between network namespaces in order to create arbitrary virtual network topologies. Together with the mount, PID and user namespaces as described in the following, they provide essential levers for container virtualization.
|
| 336 |
+
|
| 337 |
+
(iv) Mount namespaces${}^{19}$ isolate the list of mounts a process is able to see. Moreover, they allow the process to define its own mounts without interfering with other processes or the host. This important namespace allows presenting a full root filesystem tree to a container, including bind mounts for possible state as yet another layer.
|
| 338 |
+
|
| 339 |
+
(v) PID namespaces${}^{20}$ isolate process-related resources and abstractions. Processes in a PID namespace get their own PID starting at 1. Subsequently started processes invoked by that process will have this new PID 1 as parent and will be assigned another unique one within that namespace. Collisions with other PID namespaces cannot happen.
|
| 340 |
+
|
| 341 |
+
---
|
| 342 |
+
|
| 343 |
+
${}^{16}$ https://man7.org/linux/man-pages/man7/ipc_namespaces.7.html
|
| 344 |
+
|
| 345 |
+
${}^{17}$ https://man7.org/linux/man-pages/man7/network_namespaces.7.html
|
| 346 |
+
|
| 347 |
+
${}^{18}$ https://man7.org/linux/man-pages/man4/veth.4.html
|
| 348 |
+
|
| 349 |
+
${}^{19}$ https://man7.org/linux/man-pages/man7/mount_namespaces.7.html
|
| 350 |
+
|
| 351 |
+
${}^{20}$ https://man7.org/linux/man-pages/man7/pid_namespaces.7.html
|
| 352 |
+
|
| 353 |
+
---
|
| 354 |
+
|
| 355 |
+
(vi) Time namespaces${}^{21}$ isolate the settings for the system clocks. This very recent addition to the Linux kernel mainline enables setting a process-specific time, which influences derived values like uptime. Moreover, it can also be leveraged for checkpoint-restore methods for processes and container migration [44].
|
| 356 |
+
|
| 357 |
+
(vii) User namespaces${}^{22}$ isolate user-related aspects for a process. These include user and group IDs, home directory, and capabilities; the latter are described in section 5.3. This implies that a user can have different capabilities within a user namespace than outside. In the case of a container, coupled with other namespaces, this allows an unprivileged host user to install packages within namespaces that otherwise would require elevated privileges.
|
| 358 |
+
|
| 359 |
+
(viii) UTS namespaces${}^{23}$ isolate host and domain name. Processes within the same UNIX Time-Sharing (UTS) namespace are able to see and resolve these names among them. Container engines typically leverage that to identify themselves. Moreover, container orchestration engines might use these namespaces to set up a cluster-wide name resolution [43].
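To illustrate how namespaces are entered in practice, the following C sketch (ours; it needs root or a prior user namespace) unshares the PID and UTS namespaces before forking:

```c
#define _GNU_SOURCE
#include <sched.h>    /* unshare, CLONE_NEWPID, CLONE_NEWUTS */
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* Detach from the current PID and UTS namespaces; requires
       CAP_SYS_ADMIN (or an enclosing user namespace). */
    if (unshare(CLONE_NEWPID | CLONE_NEWUTS) != 0) {
        perror("unshare");
        return 1;
    }
    /* Only the *next* child enters the new PID namespace, as PID 1. */
    pid_t pid = fork();
    if (pid == 0) {
        sethostname("sandboxed", 9);               /* UTS-local only */
        printf("inside: pid=%d\n", (int)getpid()); /* prints pid=1 */
        return 0;
    }
    waitpid(pid, NULL, 0);
    printf("outside: the same child has host pid %d\n", (int)pid);
    return 0;
}
```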
|
| 360 |
+
|
| 361 |
+
As already hinted throughout the description of namespaces, combining them makes them especially powerful. Using them in conjunction with cGroups extends that even more. This important building block for containers is discussed in the following section 5.2.
|
| 362 |
+
|
| 363 |
+
### 5.2 cGroups
|
| 364 |
+
|
| 365 |
+
Control groups${}^{24}$ are a Linux feature that allows fine-grained control over different system resources [31]. More specifically, they enable limiting access to them. Typically, they are referred to as "cGroups". They are called "groups" because they can be applied to a group of processes which all share the same limits. Moreover, cGroups can be nested and are thus arranged in a hierarchical structure.
|
| 366 |
+
|
| 367 |
+
The cGroups project underwent a significant restructuring effort, resulting in the release of cGroups v2. This effort was first merged into the kernel with version 4.5 and is able to fully replace v1 since kernel version 5.6 [22]. This paper focuses on the usage of v2 and thus this is the version discussed in the following.
|
| 368 |
+
|
| 369 |
+
Resources are controlled by resource controllers, sometimes also called subsystems. Figure 12 presents all those controllers visually. Like namespaces, they are constantly extended and improved, as with the most recent addition of a "misc" controller that is not yet part of most distributions [24].
|
| 370 |
+
|
| 371 |
+

|
| 372 |
+
|
| 373 |
+
Figure 12: Linux Cgroups
|
| 374 |
+
|
| 375 |
+
The following paragraphs will briefly describe all those cGroup controllers. The more prominently used, and thus more important, ones will be discussed in a little more detail.
(i) cpu controllers set the amount of CPU cycles allowed. Apart from a raw value for cycles, aspects like weighted priorities and min/max utilization percentages can be set.

(ii) cpuset controllers set constraints on CPU and memory placement. Only the values specified are allowed for the affected processes. This is especially helpful for Non-Uniform Memory Access (NUMA) systems [32].

(iii) freezer controllers are able to effectively freeze and thaw process groups. Oh et al. [47] have shown that this can be useful in order to dynamically improve system responsiveness: they use freezer cGroups to freeze certain processes on demand in order to process user input.

(iv) hugetlb controllers limit the size of huge pages for the affected group. This can have an effect on memory performance but is considered a complex topic. Panwar et al. [48] elaborated on this and proposed a strategy to utilize huge pages inside and outside of virtualization. For a brief memory-specific background, refer to section 2.

---

${}^{21}$ https://man7.org/linux/man-pages/man7/time_namespaces.7.html

${}^{22}$ https://man7.org/linux/man-pages/man7/user_namespaces.7.html

${}^{23}$ https://man7.org/linux/man-pages/man7/uts_namespaces.7.html

${}^{24}$ https://man7.org/linux/man-pages/man7/cgroups.7.html

---
(v) io controllers enable the setting of both bandwidth-based and Input/Output Operations Per Second (IOPS)-based limits on block devices for process groups.

(vi) memory controllers set the amount of allocatable memory per process group. Moreover, it is possible to set hints for the Out Of Memory (OOM) killer. Without specific configuration, only processes within a cGroup are killed by it. A usage sketch covering the cpu and memory controllers follows after this list.

(vii) perf_event controllers allow the gathering of cGroup-specific perf events. These events are a means of kernel instrumentation and possibly contain sensitive information like CPU counters or specific kernel function calls including payload.

(viii) pids controllers are able to impose a limit on process generation for the affected process group. They can be configured with a maximum number of possible fork and clone operations.

(ix) rdma controllers regulate the access to Remote Direct Memory Access (RDMA) resources. This can be important for RDMA-based devices like InfiniBand NICs [40].

Similar to namespaces, cGroups offer powerful measures to control, limit and possibly isolate resources. Used in conjunction with the namespaces from the previous section 5.1, most enabling aspects for container virtualization are available. User-specific capabilities are the last fundamental building block for isolation and are thus briefly presented in the following section.
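The following sketch creates a cGroup v2 group, applies cpu and memory limits, and moves the calling process into it. The group name "demo" and the concrete limits are illustrative; the program assumes a v2 hierarchy at /sys/fs/cgroup, sufficient privileges (root or a delegated subtree), and that the cpu and memory controllers are enabled in the parent's cgroup.subtree_control.

```c
/* Minimal sketch: confine the calling process to 100 MiB of memory
 * and 50% of one CPU via the cGroups v2 filesystem interface. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

static int write_file(const char *path, const char *value) {
    FILE *f = fopen(path, "w");
    if (!f) { perror(path); return -1; }
    int rc = (fputs(value, f) == EOF) ? -1 : 0;
    fclose(f);
    return rc;
}

int main(void) {
    if (mkdir("/sys/fs/cgroup/demo", 0755) == -1) {
        perror("mkdir");
        return EXIT_FAILURE;
    }
    /* memory controller: hard limit of 100 MiB for the group. */
    write_file("/sys/fs/cgroup/demo/memory.max", "104857600");
    /* cpu controller: 50 ms of CPU time per 100 ms period (50%). */
    write_file("/sys/fs/cgroup/demo/cpu.max", "50000 100000");

    /* Move the current process into the group; the limits now apply
     * to it and to every child it spawns. */
    char pid[32];
    snprintf(pid, sizeof(pid), "%d", getpid());
    write_file("/sys/fs/cgroup/demo/cgroup.procs", pid);
    return EXIT_SUCCESS;
}
```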
### 5.3 Capabilities

Linux capabilities ${}^{25}$ are distinct units that allow the execution of very specific actions. These capabilities can be granted to a user or group.

At the time of writing, the list of capabilities contains at least 40 different ones. They range from simple file operations, over logging permissions, to complex administrative rights like kernel module loading. This list is certainly too extensive to discuss here in a useful way.

Generally speaking, these capabilities exist to improve security. Fine-grained control over minimal operations allows system administrators to protect resources and to forbid certain actions. Hallyn and Morgan [30] have shown that these are very effective. Moreover, there is a strong synergy with user namespaces as described in section 5.1; a minimal sketch of dropping a capability follows below.
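The following sketch permanently drops a capability from the calling process via the libcap library (link with -lcap); CAP_NET_RAW serves as an arbitrary example.

```c
/* Minimal sketch: drop CAP_NET_RAW from the calling process, so that
 * e.g. opening raw sockets is no longer possible for it.
 * Build with: cc drop.c -lcap */
#include <stdio.h>
#include <sys/capability.h>

int main(void) {
    cap_t caps = cap_get_proc();          /* current capability sets */
    cap_value_t drop[] = { CAP_NET_RAW };

    /* Clear the capability in the effective and permitted sets;
     * removal from the permitted set cannot be undone. */
    cap_set_flag(caps, CAP_EFFECTIVE, 1, drop, CAP_CLEAR);
    cap_set_flag(caps, CAP_PERMITTED, 1, drop, CAP_CLEAR);

    if (cap_set_proc(caps) == -1)
        perror("cap_set_proc");
    else
        puts("CAP_NET_RAW dropped for this process");
    cap_free(caps);
    return 0;
}
```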
### 5.4 Hypervisor-specific isolation

Isolation capabilities, and the levers a hypervisor uses to achieve them, depend highly on the design choices described in the previous section 4.2.1. These do not always align with the possibilities Linux offers, which may be due to arbitrary preference or to the fact that these possibilities had not been developed yet. Hence, the following presents some examples for the resources CPU, memory and I/O by highlighting how specific implementations solve isolation challenges.

CPU: The Xen ${}^{26}$ hypervisor represents an interesting example for CPU isolation, since it offers the possibility to choose among different CPU schedulers in order to control how this resource is shared. All approaches utilize this scheduler and therefore request shares. The scheduler then schedules time for a VM based on, for example, its deadline, runtime or a credit system. Cherkasova et al. [16] discuss those schedulers in depth. They conclude that the applied scheduler is highly dependent on the use case, but also state that the default settings are not usable beyond experiments.

KVM ${}^{27}$, on the other side, offers the possibility to utilize cGroups as described in section 5.2. This is possible due to the deep integration of KVM into the Linux kernel. Both approaches offer the possibility to dynamically adapt or change the CPU shares granted to a VM.
Memory: Silva et al. [59] state that there are principally two distinct methods for memory isolation: one being cGroups as shown in section 5.2, the other being static memory assignment. For the latter, the hypervisor requests memory from the host and completely reserves it for allocation to its managed VMs. Static allocation of memory is undesirable, since the risk of non-utilized memory is very high. Therefore, Waldspurger invented the technique of "ballooning" [64], developed for VMware ESX Server ${}^{28}$, a very popular enterprise VMM. Using this technique, they show that they can successfully reclaim memory from or extend memory for a VM without negatively affecting it. The terminology they use for these operations is "inflating" and "deflating".

I/O: According to Waldspurger and Rosenblum, two main approaches for I/O isolation can be pursued [63]: one being the (i) emulation of devices, the other (ii) para-virtualization. Both were previously discussed in section 4.2.1. Hence, limiting utilization can be provided by implementation details of the emulated device, or by leveraging cGroups.
---

${}^{25}$ https://man7.org/linux/man-pages/man7/capabilities.7.html

${}^{26}$ https://xenproject.org/

${}^{27}$ https://www.linux-kvm.org/

${}^{28}$ https://www.vmware.com/de/products/esxi-and-esx.html

---
### 5.5 Syscall Filtering

As described in section 2, system calls are used to create an interaction between user and kernel space. Linux offers the ability to intercept these system calls for debugging and manipulation purposes.

The latter allows them to be utilized for virtualization purposes, similar to the hypercalls mentioned in section 4.2.1. This technique enables the creation of so-called "sandboxes", a mechanism applied in various application domains from embedded systems to cloud computing [12]. According to Schrammel et al. [56], there are three distinct levers available to intercept system calls: (i) ptrace, (ii) Seccomp-BPF and (iii) Syscall User Dispatch (SUD).
(i) ptrace is a system call itself ${}^{29}$. It is able to examine another process's memory and registers and is therefore primarily used for breakpoint debugging and system call tracing. Since it can also filter system calls, it is applicable for implementing sandboxing; a minimal tracing sketch follows below.
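The following sketch traces the system calls of a child process, the mechanism that, for example, gVisor's ptrace platform builds on. It is x86-64-specific (the syscall number lives in orig_rax) and reports each call twice, at entry and at exit; the traced program /bin/ls is arbitrary.

```c
/* Minimal sketch: print every syscall number a child process issues. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();
    if (child == 0) {
        /* Child: ask to be traced, then replace itself with /bin/ls. */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl("/bin/ls", "ls", NULL);
        _exit(EXIT_FAILURE);
    }
    int status;
    waitpid(child, &status, 0);          /* stopped at the exec */
    while (!WIFEXITED(status)) {
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
        /* Resume until the next syscall entry or exit. */
        ptrace(PTRACE_SYSCALL, child, NULL, NULL);
        waitpid(child, &status, 0);
    }
    return EXIT_SUCCESS;
}
```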
(ii) Seccomp-BPF is a kernel feature that allows for system call filtering. It makes use of Berkeley Packet Filter (BPF) mechanisms. BPF, or its recent incarnation "Extended Berkeley Packet Filter (eBPF)", is a special VM running within the Linux kernel. This VM is able to execute code in kernel space that a user compiled in user space. This enables complex instrumentation and even runtime manipulation of kernel functionality. As a practical example, this technology is also used by modern browsers like the Chromium project [2]. A minimal filter is sketched below.
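A minimal, hedged sketch of such a filter, denying a single syscall on x86-64. Production filters additionally validate the architecture field of seccomp_data, which is omitted here for brevity.

```c
/* Minimal sketch: install a seccomp-BPF filter that kills the process
 * if it attempts mkdir(2); every other syscall is allowed. */
#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <linux/filter.h>
#include <linux/seccomp.h>

int main(void) {
    struct sock_filter filter[] = {
        /* Load the syscall number from the seccomp_data structure. */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                 offsetof(struct seccomp_data, nr)),
        /* If it equals mkdir, fall through to KILL, else skip to ALLOW. */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_mkdir, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };
    /* Required so an unprivileged process may install a filter. */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);

    puts("filter installed; calling mkdir now kills the process");
    mkdir("/tmp/blocked", 0755);   /* terminates the process here */
    puts("never reached");
    return 0;
}
```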
(iii) SUD is the most recent addition to the Linux kernel [4], originally introduced for Windows emulation. It enables the filtering of syscalls made from a specified memory region; filtered syscalls can subsequently be dispatched to user space.

## 6 Validation

In order to validate the classification proposed in section 4, we present a list of popular virtualization technologies and arrange them along this classification. Table 1 highlights those in a tabular view. This list does not claim to be complete in any way, but serves the purpose of giving an idea of the existing landscape with respect to isolation techniques.

This table clearly reflects the variety of virtualization technology implementations and the fact that many solutions already implement hybrid approaches. However, distinct silos can still be perceived: hypervisor- and container-based implementations tend to utilize isolation aspects from their own domain. There are exceptions, though.
KVM ${}^{30}$, for example, makes use of cGroups if configured accordingly, while gVisor ${}^{31}$ utilizes a type-I hypervisor and sandboxing concepts. XEN ${}^{32}$, on the other hand, acts as a traditional type-I hypervisor. Compared to KVM, it does not try to act as a general-purpose OS. In this list, VirtualBox ${}^{33}$ serves as an example of a widely adopted type-II hypervisor that is still capable of a wide range of virtualization techniques.

Docker ${}^{34}$, Podman ${}^{35}$ and Flatpak ${}^{36}$ are all representatives of the container virtualization domain. Flatpak is slightly special here, as it aims to package graphical end-user applications, in contrast to the former ones. They do, however, all utilize the same Linux functionalities (namespaces, cGroups and capabilities) to achieve isolation.

The sandbox domain is comparatively new in relation to hypervisor- and container-based virtualization. gVisor ${}^{37}$ in particular is a very interesting representative here. It offers the possibility to utilize KVM as a type-I hypervisor to achieve sandbox functionality, but also presents the option to use ptrace instead; both effectively perform system call filtering. While still being a research project and thus not widely adopted, bpfContain [26] utilizes modern BPF functionality to achieve this. Findlay et al. state that they are working on an integration into container runtime standards, which seems very promising.

This list could certainly be extended indefinitely, but it gives an idea of the currently prevalent virtualization landscape.
|
| 468 |
+
|
| 469 |
+
## 7 Related Work
|
| 470 |
+
|
| 471 |
+
Most publications arbitrary pick a list of popular or widely applied virtualization technologies in order to compare them. While they usually explain how the virtualization is enabled, these aspects typically come short [39, 49].
|
| 472 |
+
|
| 473 |
+
A similar situation applies to releases during and after the advent of container-based virtualization. Classifications of prevalent typically stop at a broader, more superficial point of view. The reason therefore is that they usually focus on something completely different instead of a mere classification of technologies [20, 52, 54, 60].
|
| 474 |
+
|
| 475 |
+
In contrast, Anjali et al. aim to classify virtualization technologies according to a scale based on "Location of functionality" [8]. Hereby they assume higher isolation, the less functionality is actually executed on the host kernel, compared to a guest Kernel. The scale itself ranges from low isolation like native Linux processes over gVisor hybrid approaches towards full KVM virtualization. While this claim seems intuitive, they do not measure performance degradation impact by competing tenants, but rather performance overhead imposed by the technologies. Each of those technologies are highlighted by their own approach to achieve isolation, analysing the amount and call pattern of system calls. Combined with the results of this paper, their assumptions could be experimentally determined.
---

${}^{29}$ https://man7.org/linux/man-pages/man2/ptrace.2.html

${}^{30}$ https://www.linux-kvm.org/

${}^{31}$ https://gvisor.dev/

${}^{32}$ https://xenproject.org/

${}^{33}$ https://www.virtualbox.org/

${}^{34}$ https://www.docker.com/

${}^{35}$ https://podman.io/

${}^{36}$ https://flatpak.org/

${}^{37}$ https://gvisor.dev/

---
<table><tr><td rowspan="2">Name</td><td rowspan="2">Version</td><td rowspan="2">Comment</td><td colspan="5">Hypervisor</td><td colspan="3">Container</td><td colspan="3">Sandbox</td></tr><tr><td>I</td><td>II</td><td>Full</td><td>Para</td><td>HWA</td><td>Namespaces</td><td>Cgroups</td><td>Capabilities</td><td>ptrace</td><td>BPF</td><td>SUD</td></tr><tr><td>KVM</td><td>2.3</td><td>with Cgroups</td><td>x</td><td/><td>x</td><td>x</td><td>x</td><td/><td>x</td><td/><td/><td/><td/></tr><tr><td>XEN</td><td>4.15</td><td/><td>x</td><td/><td>x</td><td>x</td><td>x</td><td/><td/><td/><td/><td/><td/></tr><tr><td>VirtualBox</td><td>6.1</td><td/><td/><td>x</td><td>x</td><td>x</td><td>x</td><td/><td/><td/><td/><td/><td/></tr><tr><td>Docker</td><td>20.10</td><td/><td/><td/><td/><td/><td/><td>x</td><td>x</td><td>x</td><td/><td/><td/></tr><tr><td>Podman</td><td>4.1</td><td/><td/><td/><td/><td/><td/><td>x</td><td>x</td><td>x</td><td/><td/><td/></tr><tr><td>Flatpak</td><td>1.14</td><td/><td/><td/><td/><td/><td/><td>x</td><td>x</td><td>x</td><td/><td/><td/></tr><tr><td>gVisor</td><td>2022</td><td>with KVM</td><td>x</td><td/><td/><td>x</td><td>x</td><td>x</td><td>x</td><td>x</td><td/><td/><td/></tr><tr><td>gVisor</td><td>2022</td><td>with ptrace</td><td/><td/><td/><td/><td/><td>x</td><td>x</td><td>x</td><td>x</td><td/><td/></tr><tr><td>bpfContain [26]</td><td>2021</td><td/><td/><td/><td/><td/><td/><td>x</td><td>x</td><td>x</td><td/><td>x</td><td/></tr></table>

Table 1: Virtualization technology classification of popular implementations

While this paper pursues a classification of virtualization technologies, the measurement of performance within virtualization technologies is tightly related. Various authors perform comparative studies regarding the performance degradation of virtualization technologies [9, 58]. Most find that containers are able to deliver almost bare-metal-like performance, but also show promising results for hybrid solutions. Isolation, on the other hand, seems better for hypervisor-based virtualization. They do, however, imply that there is a relation between the class of a virtualization technology and its performance. We, in contrast, classify virtualization technologies along their mechanisms for isolation, whereas they classify them along performance.
## 8 Conclusion

This paper aimed to craft a virtualization classification. This was done by dissecting established virtualization technologies and by studying scientific articles published in the virtualization domain. This process led to a taxonomy that presents every substantial building block that enables isolation. On the most superficial level, this taxonomy divides technologies into the three categories (i) hypervisor-, (ii) container- and (iii) sandbox-based. Since the enabling concepts applied within these categories are not limited to their respective category, (iv) hybrid-based extends this list by one. The final resulting taxonomy is presented in fig. 10.

Besides this summary, reflections on the resulting classification are discussed in section 8.1. Moreover, thoughts regarding possible future work are presented in section 8.2. That section takes on ideas that arose during the work on this paper.

### 8.1 Discussion

What is yet to be shown, though, is whether this classification holds for any thinkable manifestation of virtualization technology. While this paper carefully crafted a classification taxonomy, the virtualization landscape is ever-changing, and minor adaptations might be necessary in order to assess any past or upcoming technology. Table 1 briefly shows a small proportion of those manifestations, including upcoming ones.

Moreover, the classification performed here does not create an ordinal scale. An ordering based on isolation capability, startup time or performance overhead can only be performed based on measurements.

A central limitation regarding virtualization technologies certainly is the focus on the Linux OS. Other OSs also offer virtualization technologies, including Microsoft's solutions like Hyper-V ${}^{38}$ or closed-source hypervisors like VMware's ESX ${}^{39}$. Moreover, other UNIX-based OSs offer solutions for container-based virtualization, like FreeBSD's Jails [1]. The methodology to measure those systems does not change; the profiling technology, however, needs to.
### 8.2 Future Work

Since all the technologies presented in this paper are very Linux-focused, an adaptation to different OSs might be interesting. Especially technologies only applicable to Microsoft's OS Windows Server ${}^{40}$ could yield additional insights and possibly even a new class in the taxonomy.

As this taxonomy shows, a broad range of virtualization technology manifestations is possible. Considering different versions and configurations of those results in even more actually implemented solutions. This aspect is an essential reason not to try to analyse every technology regarding its isolation capabilities, but rather to craft a method to sensibly measure it on demand. Such an approach enables comparing isolation for a specific use case. This is not limited to isolation, though. As mentioned in section 8.1, other ordinal scales like performance impact could be of interest. Even a "multi-criteria decision making" approach could be applied, as pursued by Domaschka et al. [21].
---

${}^{38}$ https://docs.microsoft.com/virtualization

${}^{39}$ https://www.vmware.com/de/products/esxi-and-esx.html

${}^{40}$ https://www.microsoft.com/en-gb/windows-server

---
As mentioned in section 7, Anjali et al. presented a scale for virtualization [8]. Combined with the findings of this paper and a sensible measurement methodology, both could be verified against their capability to isolate. Further, these could support the challenge of Williams et al., who question virtualization for cloud computing in general [69].
## References

[1] Chapter 15. Jails. URL https://docs.freebsd.org/en/books/handbook/jails/.

[2] Chromium Docs - Linux Sandboxing. URL https://chromium.googlesource.com/chromium/src/+/master/docs/linux/sandboxing.md.

[3] gVisor Security Basics - Part 1. URL https://gvisor.dev/blog/2019/11/18/gvisor-security-basics-part-1/.

[4] Syscall User Dispatch - The Linux Kernel documentation. URL https://www.kernel.org/doc/html/latest/admin-guide/syscall-user-dispatch.html.

[5] S. Abraham, A. K. Paul, R. I. S. Khan, and A. R. Butt. On the Use of Containers in High Performance Computing Environments. In 2020 IEEE 13th International Conference on Cloud Computing (CLOUD), pages 284-293, Beijing, China, Oct. 2020. IEEE. ISBN 978-1-72818-780-8. doi: 10.1109/CLOUD49709.2020.00048. URL https://ieeexplore.ieee.org/document/9284294/.

[6] K. Adams and O. Agesen. A comparison of software and hardware techniques for x86 virtualization. ACM SIGPLAN Notices, 41(11):2-13, Oct. 2006. ISSN 0362-1340. doi: 10.1145/1168918.1168860. URL https://doi.org/10.1145/1168918.1168860.

[7] R. Y. Ameen and A. Y. Hamo. Survey of Server Virtualization. Apr. 2013. URL http://arxiv.org/abs/1304.3557. arXiv: 1304.3557.

[8] Anjali, T. Caraza-Harter, and M. M. Swift. Blending containers and virtual machines: a study of firecracker and gVisor. In Proceedings of the 16th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, VEE '20, pages 101-113, New York, NY, USA, Mar. 2020. Association for Computing Machinery. ISBN 978-1-4503-7554-2. doi: 10.1145/3381052.3381315. URL https://doi.org/10.1145/3381052.3381315.

[9] N. G. Bachiega, P. S. L. Souza, S. M. Bruschi, and S. d. R. S. de Souza. Container-Based Performance Evaluation: A Survey and Challenges. In 2018 IEEE International Conference on Cloud Engineering (IC2E), pages 398-403, Apr. 2018. doi: 10/gpfjh3.

[10] B. Bermejo and C. Juiz. A general method for evaluating the overhead when consolidating servers: performance degradation in virtual machines and containers. The Journal of Supercomputing, Feb. 2022. ISSN 1573-0484. doi: 10.1007/s11227-022-04318-5. URL https://doi.org/10.1007/s11227-022-04318-5.

[11] E. W. Biederman and L. Networx. Multiple instances of the global Linux namespaces. In Proceedings of the Linux Symposium, volume 1, pages 101-112. Citeseer, 2006.

[12] I. Borate and R. Chavan. Sandboxing in Linux: From Smartphone to Cloud. International Journal of Computer Applications, 148:1-8, Aug. 2016. doi: 10.5120/ijca2016911256.

[13] D. Carver. Advanced consolidation for dynamic containers. PhD thesis, 2019.

[14] L. Chen, S. Patel, H. Shen, and Z. Zhou. Profiling and Understanding Virtualization Overhead in Cloud. In 2015 44th International Conference on Parallel Processing, pages 31-40, Sept. 2015. doi: 10/gpd5fc. ISSN: 0190-3918.

[15] W. Chen, H. Lu, L. Shen, Z. Wang, N. Xiao, and D. Chen. A Novel Hardware Assisted Full Virtualization Technique. In 2008 The 9th International Conference for Young Computer Scientists, pages 1292-1297, Nov. 2008. doi: 10/dwtg4k.

[16] L. Cherkasova, D. Gupta, A. Vahdat, and L. Cherkasova. When Virtual is Harder than Real: Resource Allocation Challenges in Virtual Machine Based IT Environments. Hewlett Packard Laboratories, Tech. Rep. HPL-2007-25, 2007.

[17] M. Compastié, R. Badonnel, O. Festor, R. He, and M. Kassi-Lahlou. Unikernel-based approach for software-defined security in cloud infrastructures. In NOMS 2018 - 2018 IEEE/IFIP Network Operations and Management Symposium, pages 1-7, Apr. 2018. doi: 10.1109/NOMS.2018.8406155. ISSN: 2374-9709.

[18] F. J. Corbató, M. Merwin-Daggett, and R. C. Daley. An experimental time-sharing system. In Proceedings of the May 1-3, 1962, Spring Joint Computer Conference, pages 335-344.

[19] R. J. Creasy. The Origin of the VM/370 Time-Sharing System. IBM Journal of Research and Development, 25(5):483-490, Sept. 1981. ISSN 0018-8646. doi: 10.1147/rd.255.0483.

[20] J. Daniels. Server virtualization architecture and implementation. XRDS: Crossroads, The ACM Magazine for Students, 16(1):8-12, Sept. 2009. ISSN 1528-4972, 1528-4980. doi: 10/bvpxrx. URL https://dl.acm.org/doi/10.1145/1618588.1618592.

[21] J. Domaschka, S. Volpert, and D. Seybold. Hathi: An MCDM-based Approach to Capacity Planning for Cloud-hosted DBMS. In 2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC), pages 143-154. doi: 10.1109/UCC48980.2020.00033.

[22] C. Down. 5 years of cgroup v2: The future of Linux resource control. 2021.

[23] R. Dua, A. R. Raja, and D. Kakadia. Virtualization vs Containerization to Support PaaS. In 2014 IEEE International Conference on Cloud Engineering, pages 610-614, Boston, MA, USA, Mar. 2014. IEEE. ISBN 978-1-4799-3766-0. doi: 10/ggbwfz. URL http://ieeexplore.ieee.org/document/6903537/.

[24] J. Edge. The misc control group [LWN.net], 2021. URL https://lwn.net/Articles/856438/.

[25] H. Fayyad-Kazan, L. Perneel, and M. Timmerman. Full and Para-Virtualization with Xen: A Performance Comparison. 2013.

[26] W. Findlay, D. Barrera, and A. Somayaji. BPFContain: Fixing the Soft Underbelly of Container Security.

[27] M. Fowler and J. Lewis. Microservices. URL https://martinfowler.com/articles/microservices.html.

[28] R. P. Goldberg. Architectural Principles for Virtual Computer Systems. Technical report, Harvard University, Division of Engineering and Applied Physics, Cambridge, MA, Feb. 1973. URL https://apps.dtic.mil/sti/citations/AD0772809.

[29] N. Gordon and J. R. Lange. Lifting and Dropping VMs to Dynamically Transition Between Time- and Space-sharing for Large-Scale HPC Systems. In Proceedings of the 31st International Symposium on High-Performance Parallel and Distributed Computing, pages 30-42. ACM. ISBN 978-1-4503-9199-3. doi: 10.1145/3502181.3531471.

[30] S. E. Hallyn and A. G. Morgan. Linux capabilities: making them work. 2008.

[31] T. Heo, J. Weiner, V. Davydov, L. Thorvalds, P. Parav, T. Klauser, S. Hallyn, and K. Khlebnikov. Control group v2, 2015. URL https://www.kernel.org/doc/Documentation/admin-guide/cgroup-v2.rst.

[32] C. Hollowell, C. Caramarcu, W. Strecker-Kellogg, A. Wong, and A. Zaytsev. The Effect of NUMA Tunings on CPU Performance. Journal of Physics: Conference Series, 664(9):092010, Dec. 2015. ISSN 1742-6596. doi: 10.1088/1742-6596/664/9/092010. URL https://doi.org/10.1088/1742-6596/664/9/092010.

[33] J. Hwang, S. Zeng, F. Y. Wu, and T. Wood. A component-based performance comparison of four hypervisors. In 2013 IFIP/IEEE International Symposium on Integrated Network Management (IM 2013), pages 269-276, May 2013. ISSN: 1573-0077.

[34] G. Khanna, K. Beaty, G. Kar, and A. Kochut. Application Performance Management in Virtualized Server Environments. In 2006 IEEE/IFIP Network Operations and Management Symposium NOMS 2006, pages 373-381, Apr. 2006. doi: 10.1109/NOMS.2006.1687567. ISSN: 2374-9709.

[35] A. Kovacs. Comparison of different Linux containers. In 2017 40th International Conference on Telecommunications and Signal Processing (TSP), pages 47-51, Barcelona, Spain, July 2017. IEEE. ISBN 978-1-5090-3982-1. doi: 10.1109/TSP.2017.8075934. URL http://ieeexplore.ieee.org/document/8075934/.

[36] R. Kumar and B. Thangaraju. Performance Analysis Between RunC and Kata Container Runtime. In 2020 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), pages 1-4, July 2020. doi: 10.1109/CONECCT50063.2020.9198653.

[37] G. M. Kurtzer, V. Sochat, and M. W. Bauer. Singularity: Scientific containers for mobility of compute. PLOS ONE, 12(5):e0177459. ISSN 1932-6203. doi: 10.1371/journal.pone.0177459.

[38] Y. C. Lee and A. Y. Zomaya. Energy efficient utilization of resources in cloud computing systems. The Journal of Supercomputing, 60(2):268-280. ISSN 1573-0484. doi: 10.1007/s11227-010-0421-3.

[39] G. Li, K. Takahashi, K. Ichikawa, H. Iida, P. Thiengburanathum, and P. Phannachitta. Comparative Performance Study of Lightweight Hypervisors Used in Container Environment. In Proceedings of the 11th International Conference on Cloud Computing and Services Science, pages 215-223, Online Streaming, 2021. SCITEPRESS - Science and Technology Publications. ISBN 978-989-758-510-4. doi: 10.5220/0010440502150223. URL https://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0010440502150223.

[40] J. Liu, J. Wu, and D. K. Panda. High performance RDMA-based MPI implementation over InfiniBand. International Journal of Parallel Programming, 32(3):167-198, 2004.

[41] T. Lynn, P. T. Endo, A. M. N. C. Ribeiro, G. B. N. Barbosa, and P. Rosati. The Internet of Things: Definitions, Key Concepts, and Reference Architectures. In T. Lynn, J. G. Mooney, B. Lee, and P. T. Endo, editors, The Cloud-to-Thing Continuum: Opportunities and Challenges in Cloud, Fog and Edge Computing, Palgrave Studies in Digital Business & Enabling Technologies, pages 1-22. Springer International Publishing. ISBN 978-3-030-41110-7. doi: 10.1007/978-3-030-41110-7_1.

[42] A. Madhavapeddy, R. Mortier, C. Rotsos, D. Scott, B. Singh, T. Gazagnaire, S. Smith, S. Hand, and J. Crowcroft. Unikernels: library operating systems for the cloud. 2013. doi: 10.1145/2490301.2451167.

[43] V. Marmol, R. Jnagal, and T. Hockin. Networking in Containers and Container Clusters. 2015.

[44] X. Merino and C. Otero. The cost of time virtualization in Linux containers. Technical report, 2022.

[45] N. Mizusawa, K. Nakazima, and S. Yamaguchi. Performance Evaluation of File Operations on OverlayFS. In 2017 Fifth International Symposium on Computing and Networking (CANDAR), pages 597-599, Nov. 2017. doi: 10.1109/CANDAR.2017.62. ISSN: 2379-1896.

[46] N. Mizusawa, Y. Seki, J. Tao, and S. Yamaguchi. A Study on I/O Performance in Highly Consolidated Container-Based Virtualized Environment on OverlayFS with Optimized Synchronization. In 2020 14th International Conference on Ubiquitous Information Management and Communication (IMCOM), pages 1-4, Taichung, Taiwan, Jan. 2020. IEEE. ISBN 978-1-72815-453-4. doi: 10.1109/IMCOM48794.2020.9001733. URL https://ieeexplore.ieee.org/document/9001733/.

[47] S. Oh, C. Hahm, B. Seo, T. Lee, and J. Lee. On the improvements of fast user interactivity in consumer electronic devices using Linux. In 2017 IEEE 7th International Conference on Consumer Electronics - Berlin (ICCE-Berlin), pages 267-270, Berlin, Sept. 2017. IEEE. ISBN 978-1-5090-4014-8. doi: 10.1109/ICCE-Berlin.2017.8210648. URL http://ieeexplore.ieee.org/document/8210648/.

[48] A. Panwar, A. Prasad, and K. Gopinath. Making Huge Pages Actually Useful. In Proceedings of the Twenty-Third International Conference on Architectural Support for Programming Languages and Operating Systems, pages 679-692, Williamsburg, VA, USA, Mar. 2018. ACM. ISBN 978-1-4503-4911-6. doi: 10.1145/3173162.3173203. URL https://dl.acm.org/doi/10.1145/3173162.3173203.

[49] M. Plauth, L. Feinbube, and A. Polze. A Performance Survey of Lightweight Virtualization Techniques. In F. De Paoli, S. Schulte, and E. Broch Johnsen, editors, Service-Oriented and Cloud Computing, Lecture Notes in Computer Science, pages 34-48, Cham, 2017. Springer International Publishing. ISBN 978-3-319-67262-5. doi: 10/ggfktr.

[50] A. Raza, I. Matta, N. Akhtar, V. Kalavri, and V. Isahagian. SoK: Function-As-A-Service: From An Application Developer's Perspective. 1(1). ISSN 2770-5501. doi: 10.5070/SR31154815.

[51] A. Raza, P. Sohal, J. Cadden, J. Appavoo, U. Drepper, R. Jones, O. Krieger, R. Mancuso, and L. Woodman. Unikernels: The Next Stage of Linux's Dominance. In Proceedings of the Workshop on Hot Topics in Operating Systems, pages 7-13, Bertinoro, Italy, May 2019. ACM. ISBN 978-1-4503-6727-1. doi: 10.1145/3317550.3321445. URL https://dl.acm.org/doi/10.1145/3317550.3321445.

[52] F. Rodríguez-Haro, F. Freitag, L. Navarro, E. Hernánchez-sánchez, N. Farías-Mendoza, J. A. Guerrero-Ibáñez, and A. González-Potes. A summary of virtualization techniques. Procedia Technology, 3:267-272, 2012. ISSN 22120173. doi: 10/gpddtm. URL https://linkinghub.elsevier.com/retrieve/pii/S2212017312002587.

[53] C. Ruiz, E. Jeanvoine, and L. Nussbaum. Performance Evaluation of Containers for HPC. In S. Hunold, A. Costan, D. Giménez, A. Iosup, L. Ricci, M. E. Gómez Requena, V. Scarano, A. L. Varbanescu, S. L. Scott, S. Lankes, J. Weidendorfer, and M. Alexander, editors, Euro-Par 2015: Parallel Processing Workshops, Lecture Notes in Computer Science, pages 813-824. Springer International Publishing. ISBN 978-3-319-27308-2. doi: 10.1007/978-3-319-27308-2_65.

[54] J. Sahoo, S. Mohapatra, and R. Lath. Virtualization: A Survey on Concepts, Taxonomy and Associated Security Issues. In 2010 Second International Conference on Computer and Network Technology, pages 222-226, Apr. 2010. doi: 10/d4zx2p.

[55] P. Sampat. kernel: Introduce CPU Namespace [LWN.net], Oct. 2021. URL https://lwn.net/Articles/872507/.

[56] D. Schrammel, S. Weiser, S. Mangard, and R. Sadek. Jenny: Securing Syscalls for PKU-based Memory Isolation Systems. 2022.

[57] V. Seshagiri, D. Huye, L. Liu, A. Wildani, and R. R. Sambasivan. [SoK] Identifying Mismatches Between Microservice Testbeds and Industrial Perceptions of Microservices. 2(1). ISSN 2770-5501. doi: 10.5070/SR32157839.

[58] P. Sharma, L. Chaufournier, P. Shenoy, and Y. C. Tay. Containers and Virtual Machines at Scale: A Comparative Study. In Proceedings of the 17th International Middleware Conference, pages 1-13, Trento, Italy, Nov. 2016. ACM. ISBN 978-1-4503-4300-8. doi: 10.1145/2988336.2988337. URL https://dl.acm.org/doi/10.1145/2988336.2988337.

[59] M. Silva, K. D. Ryu, and D. Da Silva. VM Performance Isolation to Support QoS in Cloud. In 2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum, pages 1144-1151, May 2012. doi: 10.1109/IPDPSW.2012.140.

[60] S. Soltesz, H. Pötzl, M. E. Fiuczynski, A. Bavier, and L. Peterson. Container-based operating system virtualization: a scalable, high-performance alternative to hypervisors. ACM SIGOPS Operating Systems Review, 41(3):275-287, Mar. 2007. ISSN 0163-5980. doi: 10/cr62t6. URL https://doi.org/10.1145/1272998.1273025.

[61] L. Tomás and J. Tordsson. Improving cloud infrastructure utilization through overbooking. In Proceedings of the 2013 ACM Cloud and Autonomic Computing Conference, CAC '13, pages 1-10. Association for Computing Machinery. ISBN 978-1-4503-2172-3. doi: 10.1145/2494621.2494627.

[62] VMware, Inc. Understanding Full Virtualization, Paravirtualization, and Hardware Assist. 2007.

[63] C. Waldspurger and M. Rosenblum. I/O virtualization. Communications of the ACM, 55(1):66-73, Jan. 2012. ISSN 0001-0782, 1557-7317. doi: 10.1145/2063176.2063194. URL https://dl.acm.org/doi/10.1145/2063176.2063194.

[64] C. A. Waldspurger. Memory resource management in VMware ESX server. ACM SIGOPS Operating Systems Review, 36(SI):181-194, Dec. 2002. ISSN 0163-5980. doi: 10.1145/844128.844146. URL https://dl.acm.org/doi/10.1145/844128.844146.

[65] J. P. Walters, V. Chaudhary, M. Cha, S. Guercio, and S. Gallo. A Comparison of Virtualization Technologies for HPC. In 22nd International Conference on Advanced Information Networking and Applications (AINA 2008), pages 861-868. IEEE. ISBN 978-0-7695-3095-6. doi: 10.1109/AINA.2008.45.

[66] Z. Wan, D. Lo, X. Xia, and L. Cai. Practical and Effective Sandboxing for Linux Containers. 2019. doi: 10.1007/s10664-019-09737-2.

[67] X. Wang, J. Du, and H. Liu. Performance and isolation analysis of RunC, gVisor and Kata Containers runtimes. Cluster Computing, Jan. 2022. ISSN 1573-7543. doi: 10.1007/s10586-021-03517-8. URL https://doi.org/10.1007/s10586-021-03517-8.

[68] S. A. Weil, S. A. Brandt, E. L. Miller, D. D. Long, and C. Maltzahn. Ceph: A scalable, high-performance distributed file system. In Proceedings of the 7th Symposium on Operating Systems Design and Implementation, pages 307-320, 2006.

[69] D. Williams, R. Koller, and B. Lum. Say Goodbye to Virtualization for a Safer Cloud.

[70] E. G. Young, P. Zhu, T. Caraza-Harter, A. C. Arpaci-Dusseau, and R. H. Arpaci-Dusseau. The True Cost of Containing: A gVisor Case Study. 2019.
papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/-0sywUv8ryL/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
@@ -0,0 +1,525 @@
§ SOK: VIRTUALIZATION CLASSIFICATION ON ISOLATION CAPABILITIES

Anonymous authors

Paper under double-blind review

§ ABSTRACT
Within the Linux ecosystem, hypervisor- and container-based virtualization are the two most prevalent and well-known server virtualization approaches. As is often the case, the choice is much more complex than a binary decision between those distinct approaches. Recently emerging technologies, concepts and approaches have greatly diversified the "server virtualization landscape". For example, the enabling concepts of container-based virtualization are ever-changing and improve with every upcoming kernel release. Moreover, novel sandbox-based approaches leverage traditional and recent Operating System (OS) functionality to intercept system calls for their isolation needs. Hybrid systems utilize classic hypervisors in order to run specific purpose-built unikernels that in turn provide container-based virtualization within themselves.

In this work, we present an approach to classify virtualization aspects by their isolation capability. For this purpose, we decompose them into their respective enabling components and describe them in detail. Finally, we present a multi-level classification of server virtualization. This classification aims to enable a quick assessment of virtualization technologies and their induced implications.
§ 1 INTRODUCTION

The isolation capabilities of virtualization technologies impose a challenge for researchers, businesses and service providers alike. The isolation among processes, containers, Virtual Machines (VMs) or other containing units is significant for a number of reasons. (i) Researchers aim for isolated experiments without interference from unintentional foreign noise caused by other tenants. (ii) Businesses strive for the best possible infrastructure division while maintaining Service Level Agreements and maximizing profit. (iii) Service providers want to consolidate their infrastructure to keep the total cost of ownership as low as possible. Naturally, poor isolation would negatively impact all the use cases above. These demands towards isolation are enabled by virtualization technologies.

Since its emergence in the 1960s, virtualization has been ever-changing. Starting from experiments with time-sharing systems on mainframes [18], it has evolved into a broad landscape of technologies. These technologies are an integral part of the business models of many major organizations. Today, the application domains of virtualization are vast and the incentives for their adoption are manifold.

This is particularly true for cloud computing and the direction it is progressing in. Areas of special research interest in recent years include the Internet of Things domain, fog computing, and edge computing. An encompassing term for these fields is the "Cloud-to-Thing continuum" [41]. Tenants in these settings typically compete for resources for a variety of reasons, like overbooking or arbitrary co-location [61]. Other emerging cloud computing models like Function-as-a-Service offerings also leverage virtualization to a great extent [50]. As Raza et al. further describe, they have complex demands for resource isolation, but also non-functional requirements like a fast cold start and low performance overhead. It is an essential requirement for virtualization software to be able to isolate them sufficiently.

Cloud computing and related domains are not the only fields where resource contention among tenants happens. In fact, distinct tenants on an infrastructure are not necessarily distinct persons or customers. A simple but very common use case is the demand to subdivide existing server hardware to improve its utilization [34]. For example, a company might operate a server with a database software that it is not able to fully utilize. This could be due to any reason, like workload specifics, or be imposed by the database software architecture itself. These underutilized resources could be used to operate another database software for another project, a scale-out, or something completely different as a result of server consolidation [10, 13]. Incentives for this include better energy efficiency [38] and a reduced total cost of ownership [34]. The trend towards the decomposition of monolithic applications, and thus the enabling of distribution as well as consolidation of application components, has further diversified the virtualization landscape. The microservices pattern as described by Fowler and Lewis [27] is certainly widely applied in industry and research [57]. What is important, though, is sufficient isolation among those applications, so that they do not negatively impact each other.

Besides the business-oriented use cases, High Performance Computing (HPC) data centres, and in consequence researchers utilizing them, greatly benefit from the possibilities of virtualization. Every progress made in virtualization techniques is evaluated and frequently applied within these centres [29, 53, 65]. While they usually conclude that native, non-virtualized execution of experiments yields higher performance, this gap is becoming smaller. In some cases the non-performance-related aspects and the convenience of virtualization can outweigh the raw performance. Projects like Singularity ${}^{1}$, for example, aim to provide reproducible environments for HPC experiments built upon virtualization features of the Linux kernel [37].
Even though all these application domains are highly relevant and represent a multitude of research areas, publications utilizing virtualization technologies often neglect the details of their respective implementations [39, 49]. Even within a seemingly narrow category like container-based virtualization, implementation details make a huge difference regarding aspects like performance overhead and degree of isolation.

This paper follows a systematic approach to analysing virtualization technologies. We review existing technologies and deconstruct them into their isolation-enabling technologies. Along this perspective, we aim to provide a multi-level classification of virtualization technologies. This classification enables an elaborated decision on which technology to choose and what to expect.

To provide a holistic view on the enabling aspects of virtualization technologies, we make the following contributions to achieve a classification of those, based on their isolation capabilities:

* Virtualization Technology Categorization: We categorize virtualization technologies into three distinct categories: hypervisor-based, container-based and sandbox-based.

* Elaboration on Virtualization Enablers: For each virtualization category, we highlight its virtualization-enabling aspects. These are integrated into the classification as subsidiaries.

* Presentation of a Dynamic Taxonomy: Based on the categories and virtualization enablers, we present a multi-level taxonomy. We further introduce a cross-section hybrid-based approach that combines aspects of the previously established categories and thus integrates possible future developments.

The remainder of the paper is structured as follows: section 2 presents important background knowledge frequently referred to in upcoming sections. This includes Linux fundamentals that describe essential levers for virtualization. Section 3 then presents the methodology by which the actual virtualization technology classification is pursued. This is followed by the implementation of said method in section 4. Based on the resulting classification, a brief overview of existing and widely adopted virtualization technologies is given in section 6. Within this section, said technologies are aligned to that classification, followed by a short discussion. Afterwards, a review of related work is conducted in section 7. Finally, a conclusion is drawn in section 8, including a brief general discussion as well as some thoughts on possible future work.
§ 2 BACKGROUND

This section briefly presents some Linux OS-specific fundamentals that tightly interact with virtualization concepts. We highlight how the kernel interaction happens and how processes and memory are managed. Moreover, a short description of how the I/O devices disk and network are interfaced follows. All these resources are leveraged by virtualization approaches as described in the upcoming sections.

Linux kernels are monolithic kernels. They manage Central Processing Unit (CPU) scheduling, memory, file systems, network protocols and system devices. Kernels are typically depicted as layered ring graphs, as shown in fig. 1. Notable here is that applications are able to either directly execute system calls or use an indirection via system libraries like libc ${}^{2}$.

System calls act as levers for applications to transition from user to kernel space. Further, the kernel provides an interface to the hardware, which in turn is interfaced via system calls again.

Figure 1: Linux Kernel

Based on this model, a distinction is made between (i) kernel mode and (ii) user mode. These special CPU modes provide distinct privileges to the executed code. Executions within the (i) kernel mode are granted full access to devices and other privileged instructions, whereas user programs run in (ii) user mode. Execution in user mode is unprivileged and needs to request privileges via system calls. Switching between user and kernel mode is called a "mode switch". Examples of system calls include opening a file with open, mapping a file to memory with mmap or creating a new process with fork; a minimal sketch follows below.
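As a small illustration of this mode switch, the following sketch requests the same kernel service twice: once through the libc wrapper and once as a raw system call. It assumes a Linux system with glibc.

```c
/* Minimal sketch: getpid via the libc wrapper and via syscall(2);
 * both routes end in a user-to-kernel mode switch. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* libc wrapper: eventually issues the syscall instruction. */
    pid_t via_libc = getpid();
    /* Direct route: trap into kernel mode without the libc indirection. */
    pid_t via_syscall = (pid_t)syscall(SYS_getpid);
    printf("libc: %d, raw syscall: %d\n", via_libc, via_syscall);
    return 0;
}
```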
${}^{1}$ https://sylabs.io/singularity/

${}^{2}$ https://man7.org/linux/man-pages/man7/libc.7.html

Processes are the vessels for program code execution. Among other responsibilities, they manage address space, stacks and registers. Depending on the physical CPU attributes, processes can be executed in parallel, which is typically called "multitasking". They are identified by a unique Process Identifier (PID).

Processes can spawn other processes and threads. In Linux, all of these are represented by the task data structure. All tasks on a Linux system together form a tree structure with the root PID being 1.

Thus, all tasks are created by other tasks using the system calls fork(2) ${}^{3}$ or clone(2) ${}^{4}$. Internally, fork actually wraps clone with some privilege-specific flags. After the creation of a new task with its own PID, a system call like execve(2) ${}^{5}$ is used to load a new program into it. This task creation flow is visualized in fig. 2 and sketched in code below. For the remainder of this paper, the term process will be used to refer to a running Linux task with a PID.
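A minimal sketch of this flow, assuming /bin/echo exists on the system:

```c
/* Minimal sketch of the task creation flow from fig. 2: fork(2)
 * creates a copy of the calling task, execve-style calls then
 * replace its program image. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t child = fork();          /* internally a clone() variant */
    if (child == -1) {
        perror("fork");
        return EXIT_FAILURE;
    }
    if (child == 0) {
        /* Child: same program image until execv replaces it. */
        char *argv[] = { "echo", "hello from the new task", NULL };
        execv("/bin/echo", argv);
        perror("execv");           /* only reached on failure */
        _exit(EXIT_FAILURE);
    }
    /* Parent: wait for the child task (identified by its PID). */
    int status;
    waitpid(child, &status, 0);
    printf("child %d exited with status %d\n", child, WEXITSTATUS(status));
    return EXIT_SUCCESS;
}
```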
Figure 2: Task creation flow

Memory acts as a storage for kernel and application instructions. Alongside them resides their respective workload data. More specifically, the "main memory" describes the actual physical memory of the system, commonly implemented as DRAM. It is segmented into "pages" that typically represent 4 or 8 Kbytes, even though there are exceptions for "huge pages" if the CPU supports them.

Virtual memory, on the contrary, is an abstraction of the main memory and is presented as non-contended, almost infinite memory to processes. It is only mapped to physical memory on demand by the Memory Management Unit (MMU). Thus, virtual memory can be in four different states: (i) unallocated, (ii) allocated but not mapped yet, (iii) allocated and mapped and (iv) allocated and mapped to a physical swap device.

Actually allocated and mapped memory is called "resident memory". The Resident Set Size (RSS) describes the total size of resident memory for a given process. This amount is of specific interest for isolation, since it is the actually contended memory resource.

The system call mmap(2) ${}^{6}$ is usually leveraged to allocate virtual memory. It is up to Linux to decide when to map that allocated memory to the physical address space; the sketch below illustrates this behaviour.
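A minimal sketch of allocating anonymous memory with mmap; the 64 MiB size is arbitrary:

```c
/* Minimal sketch: mmap(2) allocates anonymous virtual memory; physical
 * pages are only mapped on first touch, which is when they start to
 * count towards the resident set size (RSS). */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    size_t len = 64 * 1024 * 1024;  /* 64 MiB of virtual address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* Allocated but unmapped: RSS has barely grown at this point. */
    memset(p, 0xa5, len);  /* touching the pages makes them resident */
    printf("first byte: 0x%02x\n", (unsigned char)p[0]);
    munmap(p, len);
    return 0;
}
```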
Disk, or more precisely disk I/O, since disks are attached to the I/O bus, represents the access to physical storage devices. The CPU is able to directly communicate with them via this bus. Within a computing system, they are typically represented as storage devices with an automatically generated name, following a system-specific scheme. Modern disks have a capacity in the GByte or TByte range and can be accessed by the kernel and applications.

I/O operations follow a standardized protocol and mostly consist of read and write commands. An I/O operation targets a sector, which represents a small amount of storage on the physical device of typically 4 Kbytes. On top of a single disk or multiple disks, file systems can be created. They enable easy file-based, often tree-like access to the disks.

Like disks, network devices are also attached to the I/O bus. Again, the CPU is able to directly communicate with them via this bus. The devices are usually referred to as Network Interface Cards (NICs). Within a computing system, they are typically represented as so-called interfaces or links with a name generated by a system-specific scheme. The card itself, or the network controller, is defined by its transmission properties, or more specifically by its maximum possible throughput. Typical throughputs of models at the time of writing are 1 Gbit/s to 100 Gbit/s. Apart from that, NICs have one or more ports to connect to other NICs or a switching/routing device. Interconnections feature multiple connector interfaces like RJ-45 or SFP variations, as well as a transmission medium like copper or fibre.

Upon the intent of sending something to another link, the payload is split into packets of a previously agreed-on size. In TCP/IP this is the "Maximum Transmission Unit (MTU)". These packets are further subdivided into nested frames depending on the applied network stack. For TCP/IP, this could be an "Ethernet frame". These nested frames are then subsequently sent to a receiving NIC.

§ 3 METHODOLOGY
In order to craft a representative and complete virtualization classification, a structured approach is necessary. Therefore, the method described here lays out the steps that need to be taken. Foremost, a disambiguation of terms within the virtualization domain is important.

${}^{3}$ https://man7.org/linux/man-pages/man2/fork.2.html

${}^{4}$ https://man7.org/linux/man-pages/man2/clone.2.html

${}^{5}$ https://man7.org/linux/man-pages/man2/execve.2.html

${}^{6}$ https://man7.org/linux/man-pages/man2/mmap.2.html

The term virtualization itself is rather broad, and there is no general agreement on it across the domains in which it is applied. Many aspects and resource types of computer systems can be virtualized. This ranges from the virtualization of full servers, over specific resources, to certain aspects of applications. Within this paper, the focus lies clearly on the virtualization of servers, or "server virtualization". While other aspects may be part of it, only technologies and approaches towards this goal will be considered. Here, server virtualization is defined as Ameen and Hamo [7] put it:
|
| 96 |
+
|
| 97 |
+
Definition 1 (Server virtualization). Server virtualization is the ability to run many operating systems, with isolation and independence, on top of another operating system.
Based on this constraint, a comprehensive literature review is performed to lay out a possible server virtualization classification. This process starts with a very broad categorization and tries to narrow technologies down until sufficient distinction among them can be achieved. This criterion is met once the enabling technologies are identified.
The enabling technologies are investigated in detail in order to understand how they create isolation and what the implications are. These identified fundamental technologies act as specific background and are presented as such in section 2.
To begin with, the generally agreed-upon coarse classification from related literature will be used as a baseline. It distinguishes two virtualization categories [20, 52, 54, 60], namely (i) hypervisor-based and (ii) container-based. These two categories and newly determined ones are further described throughout the remainder of section 4.
§ 4 VIRTUALIZATION TECHNOLOGY CLASSIFICATION
This section investigates possibilities to classify virtualization approaches. To begin with, it provides an overview that presents a quick glance at the resulting classification in section 4.1. Along this broad classification, each class is further analysed and investigated, including its virtualization-enabling components.
§ 4.1 OVERVIEW
The anticipated classification is visualized in fig. 3. This classification acts as an overview and is derived from a broad literature review as described in the following sections. More precisely, the process of incrementally composing this figure is described by stepping through its classes. Going from left to right, these are the three virtualization classes hypervisor, container and sandbox. Moreover, a fourth one named hybrid is part of this figure to indicate that there are virtualization technology implementations that share characteristics of all classes.
§ 4.2 HYPERVISOR-BASED
Like virtualization in general, hypervisor or Virtual Machine Monitor (VMM) systems have been around since the 1960s. During that time, IBM had a huge impact on their development [19]. VMMs create an abstraction layer between the hardware and nested OSs running on the same hardware. Resources of the host like CPU, memory, disk and network can be individually and dynamically attached to them. These OSs run with virtualized hardware and therefore instantiate "VMs". This term for hypervisor-based virtual servers is used from now on.
The following sections will briefly elaborate on the various types of hypervisors in order to distinguish them. Further, a short discussion about how they achieve isolation follows. Within a short closing discussion, an initial iteration of the virtualization taxonomy is formed.
§ 4.2.1 ARCHITECTURE TYPES
Goldberg, one of the most prominent researchers in the virtualization domain, subdivided hypervisor-based virtualization into two categories: Type-1 and Type-2 [28]. The main distinction between them is whether the hypervisor runs directly on the hardware or on top of another OS. Figure 4 illustrates that difference.
§ 4.2.2 HARDWARE ABSTRACTION LEVELS
While these two distinctions categorize hypervisors, further significant properties can be found. Hwang et al. [33] describe some of them by highlighting three approaches to how the actual virtualization layer can be provided, namely (i) Full Virtualization, (ii) Paravirtualization and (iii) Hardware-Assisted (HWA) Virtualization. These will be briefly discussed in the following.
(i) Full virtualization aims to run any OS and kernel, independent of the underlying physical system. No modifications to the guest system are necessary. With this approach, the host's and the guest's kernel and even their processor architecture can differ. This goal is achieved by binary translation and emulation, depending on the implementation [6, 7]. Hereby, every device presented to the guest system is fully virtualized and created by the hypervisor. This, for example, includes CPU, mainboard, memory and NIC. If applicable, the translation between the virtual devices within the guest system and the actual physical devices on the host system is done by the combination of guest and host drivers, managed by the hypervisor.
Figure 3: Virtualization Classification Overview
Figure 4: Hypervisor architectures
(ii) Paravirtualization aims to minimize the overhead that the virtualization of hardware brings [14]. It does so by providing and leveraging a special abstraction layer. This layer can be utilized by the VM to run privileged system calls on the hardware rather than in its own virtualized domain; such calls are also called "hypercalls". However, to achieve this, the guest OS has to be adapted to and aware of these hypercalls. Depending on the implementation and configuration, the performance benefit can be significant [25].
(iii) Hardware-assisted virtualization is another way to reduce the performance impact of full virtualization. This technique emerged with the development of processor features dedicated to virtualization [15]. These features allow the trapping of certain calls without the need for binary translation or paravirtualization. While both full and paravirtualization can benefit from hardware-assisted virtualization, it can still be seen as a distinct category, since vendors decide whether to use these features or to implement the functionality themselves within their hypervisor [62].
None of these approaches are mutually exclusive, and specific implementations might apply different combinations or degrees of adaptation. However, these choices have a significant impact on isolation characteristics, as mentioned in section 5.4.
§ 4.2.3 CLASSIFICATION IMPACT
To summarize, the following taxonomy for hypervisor-based systems is crafted. However, while implementations that represent instances within this taxonomy share common isolation characteristics, specific implementation details impact the factual isolation. Figure 5 illustrates this taxonomy in a small tree-like structure. Since hypervisor types and the means to provision virtualization are not mutually exclusive, every possible combination has to be accounted for.
Figure 5: Hypervisor taxonomy
§ 4.3 CONTAINER-BASED
Containers, or more specifically within the context of this paper "Linux containers", are isolated processes on a Linux system that have their own view of most system resources. In contrast to VMs, they do not utilize a hypervisor but consequently share the host kernel. However, since they are able to provide virtual servers including resource isolation, they are included within the virtualization taxonomy.
This section outlines distinct characteristics of container-based virtualization based on the technologies applied. First, isolation targets and their relation to the technologies highlighted in section 2 are presented. Next, the architecture of a typical container engine is discussed. Finally, an extension of the previously mentioned taxonomy from section 4.2.1 is proposed.
§ 4.3.1 ISOLATION TARGETS
Compared to hypervisor-based virtualization, container-based virtualization is fundamentally different. It does not allow for full virtualization features like the usage of an arbitrary guest kernel or a different CPU architecture. It also does not make any use of paravirtualization, since no device or component is emulated. Furthermore, hardware-assisted virtualization is neither possible nor necessary. Container-based virtualization solely makes use of the features of the host OS. However, the goals and use-cases of both approaches overlap to a certain degree, namely the provisioning of virtual servers [60]. Container-based virtual servers are called "containers" from now on.
There are no virtual devices presented to the virtual server as in hypervisor-based virtualization, since no emulation or binary translation is taking place. Instead, Linux kernel features are used to limit access to, the view of, and the utilization of the resources provided by and shared with the host. Dua et al. [23] present an overview of the resource aspects that need to be handled by the kernel on an abstract level. More in-depth information can be found in section 2. Specifically, these aspects are (i) process, (ii) resource, (iii) network, (iv) filesystem, (v) storage, (vi) device and (vii) capabilities.
These aspects are briefly described in the following:
(i) Process isolation creates a limited view of the process tree from the perspective of the container. All processes within the container are branched off a new process with PID 1. This process and its underlying tree are also visible from the host, but with different PIDs depending on the previous process state. This aspect is realized by using namespaces, more specifically PID namespaces, as described in section 5.1.
(ii) Resource limitation affects all typically used resources of a server. This includes CPU shares, memory, disk I/O and network I/O. Access to those can be limited and isolated dependent on the applied virtualization technique. This aspect is realized by using cGroups as described in section 5.2.
(iii) Network interface isolation is separate from the actual possible utilization of a device. The container needs its own personal network stack and only sees configuration affecting it directly. This aspect is realized by using namespaces, more specifically network namespaces, as described in section 5.1.
(iv) Filesystem tree isolation provides containers with their own root filesystem so as not to interfere with the host. Files, installed packages and configurations of the host are invisible to the container client, unless explicitly configured otherwise. This enables the installation of packages and the changing of configurations without interference. This aspect is realized by using mount namespaces as described in section 5.1.
(v) Storage isolation gives containers their own storage area for any kind of state. This could be a mounted filesystem externally managed by the host. Apart from simple bind mounts, container engines frequently leverage more sophisticated storage engines to provide containers with filesystems. These range from overlay filesystems, promising maintainability benefits while suffering from performance issues [45], to clustered ones like Ceph [68], where isolation is handled completely outside the system. In simple cases this aspect is realized by using mount namespaces as described in section 5.1. More sophisticated approaches are directly offered by the container engine.
(vi) Device isolation makes containers aware of specific devices on the host system. Specific devices like Intelligent Platform Management Interface (IPMI) controllers, Graphics Processing Units (GPUs) or disks can be made available to the container. This aspect is realized by using namespaces, more specifically mount namespaces, as described in section 5.1.
(vii) Capabilities describe which kinds of operations the processes within the container are allowed to execute. These include operations like mounting a filesystem or binding to a network device. This aspect is realized by Linux capabilities as described in section 5.3.
§ 4.3.2 EXAMPLE ARCHITECTURE
Linux offers many levers to enact the actual isolation of all the aspects described above. Namespaces provide the necessary isolation mechanisms, cGroups regulate limits on resource utilization and Capabilities grant required permissions.
Figure 6 shows a superficial and slightly simplified container engine architecture using the example of Docker ${}^{7}$. This architecture, however, can easily be adapted to other classic container engines that also make use of the three mechanisms mentioned above [5, 35]. The following will briefly discuss the elements of the Docker architecture.
${}^{7}$ https://www.docker.com/
Figure 6: Docker architecture
The Docker engine itself is merely a Command Line Interface (CLI). Its primary purpose is user interaction and the convenience of bringing all container-related features together. It therefore sensibly abstracts them to achieve appropriate usability.
Containerd ${}^{8}$ is the actual daemon process running on a host, which the Docker CLI interacts with. It acts as a proxy towards the actual enactment of containers via runc and towards storage-related features.
The storage engine enables containerd to provide storage for containers. This includes authentication against container image registries to download the base images a container is created from. Moreover, it provides access to storage for state, typically called volumes. Volumes could be provided using a local overlayFS or devicemapper concept, or be consumed from an external provider by leveraging specific storage drivers. Additionally, overlayFS is typically used to merge layered data including existing images, modifications and user data. This aspect is analysed by Mizusawa et al., who find many performance benefits in that approach compared to others existing at that time [46].
Finally, runc ${}^{9}$ is the component actually utilizing namespaces, cGroups and capabilities in order to create a running container.
Within the container domain, there are two important industry standards and specifications available: (i) the Container Runtime Interface (CRI) and (ii) the Open Container Initiative (OCI) specifications. The (i) CRI defines an Application Programming Interface (API) towards the container engine; in the example above, this would be containerd. This enables container orchestrators like Kubernetes ${}^{10}$ to transparently utilize different engines, as long as they are compliant with that API. The (ii) OCI, on the other hand, describes how container images are supposed to look in order to be accessed and executed independently of the actual runtime, such as runc.
§ 4.3.3 CLASSIFICATION IMPACT
To summarize, container-based virtualization fully depends on the degree to which Linux features like namespaces, cGroups and capabilities are used. Moreover, storage is often handled outside the container perspective and is merely mounted into the respective namespace. This extends the taxonomy shown in fig. 5, as highlighted in fig. 7.
Figure 7: Container taxonomy
§ 4.4 SANDBOX-BASED
While most containerization technologies make use of the same Linux kernel fundamentals, there are some emerging technologies that pursue a different route. In order to better distinguish these from hypervisor- and container-based virtualization, they will be called sandbox-based from now on. This term is not yet established, but can be found among popular implementations of this approach.
§ 4.4.1 CONCEPT
Sandboxes can be created by utilizing system call filtering provided by the kernel, and Linux offers several mechanisms to do so. More background information on system call filtering, and thus sandbox creation, is presented in section 5.5.
These kinds of containers may still use all the principles highlighted in section 4.3, but are extended by the application of sandboxing methods. Wan et al. thoroughly investigate sandboxing possibilities for containers for the purpose of isolation. They implement a two-step process: they first profile and record the system calls a container executes, and then restrict the container to those calls in a second step [66].
${}^{8}$ https://github.com/containerd/containerd
${}^{9}$ https://github.com/opencontainers/runc
${}^{10}$ https://kubernetes.io/
One representative technology of this class of container-based virtualization is Google's gVisor ${}^{11}$. Its approach is to reimplement fundamental Linux functionality within user space to gain more control and thus improve isolation [70].
§ 4.4.2 EXAMPLE ARCHITECTURE
gVisor offers two operational modes. One is the ptrace mode discussed in this section; the other utilizes the Kernel Virtual Machine (KVM) in order to process system calls and is discussed in section 4.5. A simplified architectural image is presented in fig. 8, as seen in the project's documentation [3]. As visible from that figure, there are two units between the application and the host: (i) Sentry and (ii) Gofer. These two components and their relationship are briefly discussed in the following.
Figure 8: gVisor architecture
(i) Sentry implements the Linux system call interface and is responsible for handling system calls. A container breaching security would only reach into Sentry and not into the host. Sentry therefore exposes most of the Linux system calls, intercepting and reimplementing them in order to delegate them to the host.
(ii) Gofer is responsible for handling files outside of Sentry's own domain. Hence, it enables filesystem access for Sentry.
Because many operations enacted by Sentry and Gofer are executed or proxied via user space, the performance overhead of such an approach is very high. Most operations take at least twice as long compared to traditional container-based virtualization approaches [70]. However, Young et al. also conclude that sandboxes significantly improve security and isolation. Wang et al. [67] come to a similar conclusion.
§ 4.4.3 CLASSIFICATION IMPACT
Sandbox-based virtualization is a powerful method to improve isolation, but comes with a performance penalty. The approaches it uses, most specifically system call filtering, make it an important addition to the virtualization classification; it is thus added to the taxonomy. Hence, the taxonomy in fig. 7 is extended as presented in fig. 9.
Figure 9: Sandbox taxonomy
§ 4.5 EMERGING AND HYBRID TECHNOLOGIES
Besides the previously mentioned hypervisor-, container- and sandbox-based approaches, further technologies have recently emerged. Some of them claim to combine beneficial approaches of existing technologies while minimizing their drawbacks. They often narrow down choices in order to optimize and focus on details. On superficial observation, however, they are not easily placed among the previously introduced categories.
§ 4.5.1 CONCEPT
On deeper investigation, though, all these solutions make use of previously existing technologies and thus follow the same approaches. As previously noted, these choices have an impact on performance, security and isolation characteristics.
The combination of technologies enables vendors to make opinionated decisions on specific implementations, yielding benefits for certain scenarios. Combining, for example, hypervisor- and container-based solutions makes it possible to choose a very specific OS within the virtual machine. Its kernel can be minimized to provide only the necessities for container execution in order to reduce overhead, thus combining the isolation capabilities of both approaches. Kata Containers ${}^{12}$ is a popular implementation pursuing that concept. While isolation capabilities improve, performance is slightly degraded compared to traditional container-based virtualization using runc [36]. The previously discussed gVisor also offers a so-called KVM mode, which follows a similar approach and is used as an alternative to ptrace.
${}^{11}$ https://gvisor.dev/
A slightly more sophisticated form of combining existing technologies are unikernel or "library operating system" based systems. With the rise of cloud computing and convenient tools within its ecosystem, they became a viable alternative to fully fledged Linux-based VMs [42, 51]. Conceptually, they compile an application down to machine-executable code that is able to run directly on hardware without a general-purpose OS involved. During that process, only mandatory functionality is included. The resulting image can then be booted by a machine, which usually is a virtual one. The adoption of virtual servers in this context is a key factor, since it significantly reduces the amount of hardware compatibility code necessary. However, as with the combination of virtualization approaches, this still relies on hypervisor-based virtualization and thus shares the same isolation capabilities. It does make a difference in the performance and security domain, as shown by Compastié et al. [17] with their approach towards Software-Defined Security (SDSec). IBM's implementation, called Nabla ${}^{13}$, is a well-known representative of this approach.
§ 4.5.2 CLASSIFICATION IMPACT
Even though hybrid approaches are not strictly a virtualization class of their own, they shall also be included within the taxonomy. Most importantly, any virtualization technology implementation may leverage any of the concepts highlighted within this taxonomy and described throughout this section. The resulting taxonomy is shown in fig. 10. This figure also represents the final taxonomy and thus includes hypervisor-, container- and sandbox-based virtualization.
§ 4.6 SUMMARY
This section proposes a taxonomy for virtualization technologies with respect to isolation capability. As a first step, it analyses existing approaches of prevalent technologies in order to categorize them. Those categories are (i) hypervisor-, (ii) container- and (iii) sandbox-based ones. These are further subdivided into their enabling technologies and methods. Hence, the resulting taxonomy resembles a tree.
Within that tree, all leaf nodes are considered to be options, whereas every other node represents a dimension.
However, modern solutions have evolved in ways that utilize approaches from previously foreign domains. They do so in order to counter their own drawbacks or to optimize for different aspects. For this reason, a (iv) hybrid cross-section over all aspects, as shown in the final taxonomy of fig. 10, is necessary. In consequence, the following definition for virtualization is proposed.
Definition 2. A virtualization technology's isolation is defined by the degree of realization of option leaves within the virtualization taxonomy dimensions.
§ 5 VIRTUALIZATION ENABLERS
This section highlights the details of virtualization enablers in relation to the virtualization classification of section 4. Hereby, we describe fundamental enabling technologies that are provided by the Linux kernel and leveraged by virtualization technologies.
§ 5.1 NAMESPACES
Linux offers namespaces ${}^{14}$ in order to isolate system-specific resources. It does so by wrapping them into an abstraction that is presented to a process [11]. This enables processes to have completely different views of the system compared to the host.
While this technology is an enabler for containerization and thus container-based virtualization, the two do not directly depend on each other. Both concepts and technologies can exist without the respective other one.
All available namespaces at the time of writing are highlighted in fig. 11. They are constantly adapted and extended in order to meet new demands and solve new challenges like a proposed CPU namespace [55].
The following paragraphs will briefly describe all those namespaces. The more prominently used and thus more important ones will be discussed in a little more detail.
(i) cGroup namespaces ${}^{15}$ enable the usage of virtualized cGroups. When applied, a process is able to define its own cGroups, while the host's cGroups are still active and protected. This allows for the nesting of cGroups. For more information on cGroups in general, refer to section 5.2.
${}^{14}$ https://man7.org/linux/man-pages/man7/namespaces.7.html
${}^{15}$ https://man7.org/linux/man-pages/man7/cgroup_namespaces.7.html
${}^{12}$ https://katacontainers.io/
${}^{13}$ https://nabla-containers.github.io/
Figure 10: Virtualization Taxonomy
Figure 11: Linux Namespaces
(ii) IPC namespaces ${}^{16}$ isolate Inter-Process Communication (IPC) resources. These mostly refer to message queues and the usage of shared memory between processes. By applying these namespaces, processes are able to generate their own identifiers for them without inheriting those of their parents.
(iii) Network namespaces ${}^{17}$ isolate networking-related resources for a process. This includes interfaces, protocol stacks, routing tables and more. In practice, virtual veth ${}^{18}$ network interfaces are created, which pair physical or other virtual interfaces to form a pipe-like tunnel. This enables the creation of a bridge between those interfaces and, in consequence, between network namespaces, in order to create arbitrary virtual network topologies. Together with the mount, PID and user namespaces described in the following, they provide essential levers for container virtualization.
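As a minimal illustration of this isolation, the following C sketch (illustrative only, not part of any container engine; it requires CAP_SYS_ADMIN) enters a fresh network namespace and lists the interfaces visible afterwards, which is typically only the loopback device:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <net/if.h>

/* After unshare(CLONE_NEWNET), this process has its own network stack;
 * veth pairs would be needed to reconnect it to the outside world. */
int main(void) {
    if (unshare(CLONE_NEWNET) != 0) {
        perror("unshare(CLONE_NEWNET)");  /* needs CAP_SYS_ADMIN */
        return 1;
    }
    struct if_nameindex *ifs = if_nameindex();
    for (struct if_nameindex *i = ifs; i && i->if_index != 0; i++)
        printf("interface: %s\n", i->if_name);   /* typically only "lo" */
    if_freenameindex(ifs);
    return 0;
}
```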
(iv) Mount namespaces ${}^{19}$ isolate the list of mounts a process is able to see. Moreover, they allow the process to define its own mounts without interfering with other processes or the host. This important namespace makes it possible to present a full root filesystem tree to a container, including bind mounts for possible state as yet another layer.
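A minimal C sketch of this mechanism (illustrative only; it requires CAP_SYS_ADMIN, and the mount point /mnt and tmpfs size are arbitrary example values) could look as follows; the mounted tmpfs is visible only to this process and its children:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>

int main(void) {
    if (unshare(CLONE_NEWNS) != 0) { perror("unshare"); return 1; }
    /* Stop mount events from propagating back to the host's namespace. */
    if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
        perror("mount MS_PRIVATE"); return 1;
    }
    /* Private tmpfs: invisible outside this mount namespace. */
    if (mount("tmpfs", "/mnt", "tmpfs", 0, "size=16m") != 0) {
        perror("mount tmpfs"); return 1;
    }
    puts("private tmpfs mounted on /mnt in this namespace only");
    return 0;
}
```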
(v) PID namespaces ${}^{20}$ isolate process-related resources and abstractions. Processes in a PID namespace get their own PID, starting at 1. Processes subsequently invoked by that process will have this new PID 1 as parent and will be assigned another unique PID within that namespace. Collisions with other PID namespaces cannot happen.
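The following minimal C sketch (illustrative only, requires CAP_SYS_ADMIN) demonstrates this behaviour: the first process cloned into a new PID namespace observes itself as PID 1, while the host sees an ordinary, unrelated PID:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static char child_stack[1024 * 1024];   /* stack for the cloned child */

static int child(void *arg) {
    (void)arg;
    printf("inside the namespace: getpid() = %d\n", getpid()); /* prints 1 */
    return 0;
}

int main(void) {
    /* clone() takes the top of the stack, since it grows downwards. */
    pid_t pid = clone(child, child_stack + sizeof(child_stack),
                      CLONE_NEWPID | SIGCHLD, NULL);
    if (pid == -1) { perror("clone"); return 1; }
    printf("on the host: child has PID %d\n", pid);
    waitpid(pid, NULL, 0);
    return 0;
}
```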
${}^{16}$ https://man7.org/linux/man-pages/man7/ipc_namespaces.7.html
${}^{17}$ https://man7.org/linux/man-pages/man7/network_namespaces.7.html
${}^{18}$ https://man7.org/linux/man-pages/man4/veth.4.html
${}^{19}$ https://man7.org/linux/man-pages/man7/mount_namespaces.7.html
${}^{20}$ https://man7.org/linux/man-pages/man7/pid_namespaces.7.html
(vi) Time namespaces ${}^{21}$ isolate the settings for the system clocks. This very recent addition to the Linux kernel mainline enables setting a process-specific time, which influences derived values like uptime. Moreover, it can be leveraged for checkpoint-restore methods for processes and for container migration [44].
(vii) User namespaces ${}^{22}$ isolate user-related aspects for a process. These include user and group IDs, the home directory, and capabilities; the latter are described in section 5.3. This implies that a user can have different capabilities within a user namespace than outside of it. In the case of a container, coupled with other namespaces, this allows an unprivileged host user to install packages within namespaces that would otherwise require elevated privileges.
(viii) UTS namespaces ${}^{23}$ isolate the hostname and domain name. Processes within the same UNIX Time-Sharing (UTS) namespace are able to see and resolve these names among themselves. Container engines typically leverage that to identify themselves. Moreover, container orchestration engines might use these namespaces to set up cluster-wide name resolution [43].
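A minimal C sketch of UTS isolation (illustrative only, requires CAP_SYS_ADMIN; the hostname "container-demo" is an arbitrary example value) could look like this:

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Hostname changes after unshare(CLONE_NEWUTS) affect only this UTS
 * namespace; the host keeps its original name. */
int main(void) {
    char name[64];
    if (unshare(CLONE_NEWUTS) != 0) { perror("unshare"); return 1; }
    if (sethostname("container-demo", 14) != 0) {
        perror("sethostname"); return 1;
    }
    gethostname(name, sizeof(name));
    printf("hostname in this namespace: %s\n", name);
    return 0;
}
```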
As already hinted throughout the description of namespaces, combining them makes them especially powerful. Using them in conjunction with cGroups extends that even more. This important building block for containers is discussed in the following section 5.2.
§ 5.2 CGROUPS
Control groups ${}^{24}$ are a Linux feature that allows fine-grained control over different system resources [31]. More specifically, they enable limiting access to them. Typically, they are referred to as "cGroups". They are called "groups" because they can be applied to a group of processes which all share the same limits. Moreover, cGroups can be nested and are thus arranged in a hierarchical structure.
The cGroups project underwent a significant restructuring effort, resulting in the release of cGroups v2. This effort was first merged into the kernel with version 4.5 and has been able to fully replace v1 since kernel version 5.6 [22]. This paper focuses on the usage of v2, and thus this is the version discussed in the following.
Resources are controlled by resource controllers, sometimes also called subsystems. Figure 12 presents all those controllers visually. Like namespaces, they are constantly extended and improved, as exemplified by the recent addition of a "misc" controller that is not yet part of most distributions [24].
Figure 12: Linux Cgroups
The following paragraphs will briefly describe all those cGroup controllers. The more prominently used and thus more important ones will be discussed in a little more detail.
(i) cpu controllers set the amount of CPU time a process group is allowed to consume. Apart from a raw quota value, aspects like weighted priorities and min/max utilization percentages can be set.
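To illustrate the cpu controller's interface, the following minimal C sketch (illustrative only; it assumes root privileges, cGroups v2 mounted at /sys/fs/cgroup, and an arbitrary group name "demo") caps a group to half a CPU and moves the calling process into it:

```c
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    /* Create a child cGroup; the name "demo" is arbitrary. */
    mkdir("/sys/fs/cgroup/demo", 0755);

    /* cpu.max takes "<quota> <period>" in microseconds:
     * at most 50 ms of CPU time per 100 ms period, i.e., half a CPU. */
    FILE *f = fopen("/sys/fs/cgroup/demo/cpu.max", "w");
    if (!f) { perror("cpu.max"); return 1; }
    fprintf(f, "50000 100000\n");
    fclose(f);

    /* Moving our PID into the group makes the limit apply to us. */
    f = fopen("/sys/fs/cgroup/demo/cgroup.procs", "w");
    if (!f) { perror("cgroup.procs"); return 1; }
    fprintf(f, "%d\n", getpid());
    fclose(f);
    return 0;
}
```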
(ii) cpuset controllers set constraints on CPU and memory placement. Only values specified are allowed for affected processes. This is especially helpful for Non-Uniform Memory Access (NUMA) systems [32].
(iii) freezer controllers are able to effectively freeze and thaw process groups. Oh et al. [47] have shown that this can be useful in order to dynamically improve system responsiveness. They use freezer cGroups to freeze certain processes on demand in order to process user input.
(iv) hugetlb controllers limit the size of huge pages for the affected group. This can have an effect on memory performance, but is considered a complex topic. Panwar et al.
${}^{21}$ https://man7.org/linux/man-pages/man7/time_namespaces.7.html
${}^{22}$ https://man7.org/linux/man-pages/man7/user_namespaces.7.html
${}^{23}$ https://man7.org/linux/man-pages/man7/uts_namespaces.7.html
${}^{24}$ https://man7.org/linux/man-pages/man7/cgroups.7.html
[48] elaborate on that and propose a strategy to utilize huge pages inside and outside of virtualization. For brief memory-specific background, refer to section 2.
(v) io controllers enable the setting of both bandwidth-based and Input/Output Operations Per Second (IOPS)-based limits on block devices for process groups.
(vi) memory controllers set the amount of allocatable memory per process group. Moreover, it is possible to set hints for the Out Of Memory (OOM) killer. Without specific configuration, only processes within a cGroup are killed by it.
(vii) perf_event controllers allow the gathering of cGroup-specific perf events. These events are a means of kernel instrumentation and possibly contain sensitive information like CPU counters or specific kernel function calls including payloads.
(viii) pids controllers are able to impose a limit on process generation for the affected process group. They can be configured with a maximum number of possible fork and clone operations.
(ix) rdma controllers regulate the access to Remote Direct Memory Access (RDMA) resources. This can be important for RDMA-based devices like InfiniBand NICs [40].
Similar to namespaces, cGroups offer powerful measures to control, limit and possibly isolate resources. Used in conjunction with the namespaces from the previous section 5.1, most enabling aspects for container virtualization are available. User-specific capabilities are the last fundamental building block for isolation and are thus briefly presented in the following section.
§ 5.3 CAPABILITIES
Linux capabilities ${}^{25}$ are distinct units that allow the execution of very specific actions. These capabilities can be granted to a user or group.
At the time of writing, the list of capabilities contains at least 80 different ones. They range from simple file operations over logging permissions to complex administrative rights like kernel module loading. Certainly, this list is too extensive to discuss here in a useful way.
Generally speaking, these capabilities exist to improve security. Fine-grained control over minimal operations allows system administrators to protect resources and to forbid certain actions. Hallyn and Morgan [30] have shown that these are very effective. Moreover, there is a strong synergy with user namespaces as described in section 5.1.
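As a minimal illustration of this fine-grained control, the following C sketch (illustrative only; it uses the libcap library and must be linked with -lcap) clears a single capability, CAP_NET_RAW, from the process's effective set:

```c
#include <stdio.h>
#include <sys/capability.h>   /* libcap: compile with -lcap */

/* Drop CAP_NET_RAW from the effective set, so this process can no
 * longer open raw sockets, even if it otherwise runs privileged. */
int main(void) {
    cap_t caps = cap_get_proc();
    if (!caps) { perror("cap_get_proc"); return 1; }

    cap_value_t drop[] = { CAP_NET_RAW };
    cap_set_flag(caps, CAP_EFFECTIVE, 1, drop, CAP_CLEAR);
    if (cap_set_proc(caps) != 0) { perror("cap_set_proc"); return 1; }

    cap_free(caps);
    puts("CAP_NET_RAW cleared from the effective set");
    return 0;
}
```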
§ 5.4 HYPERVISOR SPECIFIC ISOLATION
Isolation capabilities, and the levers a hypervisor uses to achieve them, highly depend on the choices described in section 4.2.1 above. These do not always align with the possibilities Linux offers, which may be due to arbitrary preference or to the fact that these possibilities had not been developed yet. Hence, the following presents some examples for the resources CPU, memory and I/O by highlighting how specific implementations solve isolation challenges.
CPU: The Xen ${}^{26}$ hypervisor represents an interesting example for CPU isolation, since it offers the possibility to choose between different CPU schedulers in order to control how this resource is shared. All approaches utilize this scheduler and request shares from it. The scheduler will then schedule time for a VM based, for example, on its deadline, runtime or a credit system. Cherkasova et al. [16] discuss those schedulers in depth. They conclude that the choice of scheduler is highly dependent on the use-case, but also state that the default settings are not usable beyond experiments.
KVM ${}^{27}$, on the other hand, offers the possibility to utilize cGroups as described in section 5.2. This is possible due to the deep integration of KVM into the Linux kernel. Both approaches make it possible to dynamically adapt or change the CPU shares granted to a VM.
Memory: Silva et al. [59] state that there are principally two distinct methods for memory isolation: one is cGroups as shown in section 5.2, the other is static memory assignment, where the hypervisor requests memory from the host and reserves it exclusively for allocation to its managed VMs. Static allocation of memory is undesirable, since the risk of unutilized memory is very high. Therefore, Waldspurger invented the "ballooning" technique [64], developed for VMware ESX Server ${}^{28}$, a very popular enterprise VMM. Using this technique, they show that memory can successfully be reclaimed from or extended for a VM without negatively affecting it; the terms used for these respective operations are "inflating" and "deflating".
I/O: According to Waldspurger and Rosenblum, two main approaches for I/O isolation can be pursued [63]: one being the (i) emulation of devices, the other one (ii) paravirtualization. Both were previously discussed in section 4.2.1. Hence, limiting utilization can be provided by implementation details of the emulated device, or by leveraging cGroups.
${}^{26}$ https://xenproject.org/
${}^{27}$ https://www.linux-kvm.org/
${}^{28}$ https://www.vmware.com/de/products/esxi-and-esx.html
${}^{25}$ https://man7.org/linux/man-pages/man7/capabilities.7.html
§ 5.5 SYSCALL FILTERING
As described in section 2, system calls are used to create an interaction between user and kernel space. Linux offers the ability to intercept these system calls for debugging and manipulation purposes.
The latter allows them to be utilized for virtualization purposes similar to hypercalls, as mentioned in section 4.2.1. This technique enables the creation of so-called "sandboxes", a mechanism applied in various application domains from embedded systems to cloud computing [12]. According to Schrammel et al., there are three distinct levers available to intercept system calls: (i) ptrace, (ii) Seccomp-BPF and (iii) Syscall User Dispatch (SUD) [56].
(i) ptrace is a system call itself ${}^{29}$. It is able to examine other processes' memory and registers, and is therefore primarily used for breakpoint debugging and system call tracing. Since it can also filter system calls, it is applicable for implementing sandboxing.
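A minimal C sketch of this interception point (illustrative only and x86-64 specific; a real sandbox would inspect and possibly deny each call instead of merely printing it, and each call is reported here on both entry and exit) could look as follows:

```c
#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                      /* child: ask to be traced */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execlp("ls", "ls", NULL);
        return 1;
    }
    int status;
    waitpid(pid, &status, 0);            /* child stops at execve */
    while (!WIFEXITED(status)) {
        struct user_regs_struct regs;
        ptrace(PTRACE_GETREGS, pid, NULL, &regs);
        printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
        ptrace(PTRACE_SYSCALL, pid, NULL, NULL); /* run to next stop */
        waitpid(pid, &status, 0);
    }
    return 0;
}
```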
(ii) Seccomp-BPF is a kernel feature that allows for system call filtering. It makes use of Berkeley Packet Filter (BPF) mechanisms. BPF, or its recent implementation "Extended Berkeley Packet Filter (eBPF)", is a special VM running within the Linux kernel. This VM is able to execute code in kernel space that a user compiled in user space. This enables complex instrumentation and even runtime manipulation of kernel functionality. As a practical example, this technology is also used by modern browsers like those of the Chromium project [2].
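The following minimal C sketch (illustrative only; a production filter would additionally validate the architecture field of seccomp_data) installs a Seccomp-BPF filter on x86-64 that denies a single system call, mkdir, with EPERM:

```c
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <stddef.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>

int main(void) {
    struct sock_filter filter[] = {
        /* load the syscall number from the seccomp_data argument */
        BPF_STMT(BPF_LD | BPF_W | BPF_ABS, offsetof(struct seccomp_data, nr)),
        /* if it is mkdir, fall through to the "errno" verdict */
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, SYS_mkdir, 0, 1),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ERRNO | 1 /* EPERM */),
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
    };
    struct sock_fprog prog = {
        .len = sizeof(filter) / sizeof(filter[0]),
        .filter = filter,
    };
    /* required so an unprivileged process may install a filter */
    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
    if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) != 0) {
        perror("prctl(PR_SET_SECCOMP)");
        return 1;
    }
    printf("filter installed: mkdir() will now fail with EPERM\n");
    return 0;
}
```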
(iii) SUD is the most recent addition to the Linux kernel [4], originally invented for Windows emulation. It enables the filtering of syscalls made from a specified memory region; filtered syscalls can subsequently be dispatched to user space.
§ 6 VALIDATION
In order to validate the classification proposed in section 4, we present a list of popular virtualization technologies and arrange them along this classification. Table 1 highlights those in a tabular view. This list does not claim to be complete in any way, but serves the purpose of giving an idea of the existing landscape with respect to isolation techniques.
This table clearly reflects the variety of virtualization technology implementations and the fact that many solutions already implement hybrid approaches. However, distinct silos can still be perceived: hypervisor- and container-based implementations tend to utilize isolation aspects from their own domain. There are exceptions, though.
KVM ${}^{30}$, for example, makes use of cGroups if configured accordingly, while gVisor ${}^{31}$ utilizes a Type-1 hypervisor and sandboxing concepts. XEN ${}^{32}$, on the other hand, acts as a traditional Type-1 hypervisor. Compared to KVM, it does not try to act as a general-purpose OS. In this list, VirtualBox ${}^{33}$ acts as an example of a widely adopted Type-2 hypervisor that is still capable of a wide range of virtualization techniques.
Docker ${}^{34}$, Podman ${}^{35}$ and Flatpak ${}^{36}$ are all representatives of the container virtualization domain. Flatpak is slightly special here, as it aims to package graphical end-user applications, in contrast to the former ones. They do, however, all utilize the same Linux functionalities (namespaces, cGroups and capabilities) to achieve isolation.
The sandbox domain is comparatively new in relation to hypervisor- and container-based virtualization. gVisor ${}^{37}$ in particular is a very interesting representative here. It offers the possibility to utilize KVM as a Type-1 hypervisor to achieve sandbox functionality, but also presents the option to use ptrace instead; both effectively perform system call filtering. While still being a research project and thus not widely adopted, bpfContain [26] utilizes modern BPF functionality to achieve this. Findlay et al. state that they are working on an integration into container runtime standards, which seems very promising.
This list could certainly be extended indefinitely, but it gives an idea of the currently prevalent virtualization landscape.
§ 7 RELATED WORK
Most publications arbitrarily pick a list of popular or widely applied virtualization technologies in order to compare them. While they usually explain how virtualization is enabled, these aspects typically come up short [39, 49].
A similar situation applies to publications during and after the advent of container-based virtualization. Classifications of prevalent technologies typically stop at a broader, more superficial point of view. The reason is that they usually focus on something completely different rather than on a mere classification of technologies [20, 52, 54, 60].
In contrast, Anjali et al. aim to classify virtualization technologies according to a scale based on "location of functionality" [8]. Hereby, they assume isolation to be higher the less functionality is actually executed in the host kernel compared to a guest kernel. The scale itself ranges from low isolation, like native Linux processes, over hybrid approaches like gVisor, to full KVM virtualization. While this claim seems intuitive, they do not measure the performance degradation caused by competing tenants, but rather the performance overhead imposed by the technologies. Each of those technologies is characterized by its own approach to achieving isolation, analysed via the number and call pattern of system calls. Combined with the results of this paper, their assumptions could be experimentally determined.
${}^{30}$ https://www.linux-kvm.org/
${}^{31}$ https://gvisor.dev/
${}^{32}$ https://xenproject.org/
${}^{33}$ https://www.virtualbox.org/
${}^{34}$ https://www.docker.com/
${}^{35}$ https://podman.io/
${}^{36}$ https://flatpak.org/
${}^{37}$ https://gvisor.dev/
${}^{29}$ https://man7.org/linux/man-pages/man2/ptrace.2.html
| Name | Version | Comment | Type-1 | Type-2 | Full | Para | HWA | Namespaces | cGroups | Capabilities | ptrace | BPF | SUD |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| KVM | 2.3 | with cGroups | ✓ | | ✓ | ✓ | ✓ | | ✓ | | | | |
| XEN | 4.15 | | ✓ | | ✓ | ✓ | ✓ | | | | | | |
| VirtualBox | 6.1 | | | ✓ | ✓ | ✓ | ✓ | | | | | | |
| Docker | 20.10 | | | | | | | ✓ | ✓ | ✓ | | | |
| Podman | 4.1 | | | | | | | ✓ | ✓ | ✓ | | | |
| Flatpak | 1.14 | | | | | | | ✓ | ✓ | ✓ | | | |
| gVisor | 2022 | with KVM | ✓ | | | ✓ | ✓ | ✓ | ✓ | ✓ | | | |
| gVisor | 2022 | with ptrace | | | | | | ✓ | ✓ | ✓ | ✓ | | |
| bpfContain [26] | 2021 | | | | | | | ✓ | ✓ | ✓ | | ✓ | |

Table 1: Virtualization technology classification of popular implementations (Type-1 through HWA are hypervisor concepts, namespaces through capabilities container concepts, ptrace through SUD sandbox concepts; ✓ marks a concept applied by the implementation)
While this paper pursues a classification of virtualization technologies, the measurement of performance within virtualization technologies is tightly related. Various authors perform comparative studies regarding the performance degradation of virtualization technologies [9, 58]. Most find that containers are able to deliver almost bare-metal-like performance, but also show promising results for hybrid solutions. Isolation, on the other hand, seems better for hypervisor-based virtualization. They do, however, imply that there is a relation between the class of virtualization technology and performance. We, in contrast, classify virtualization technologies along their mechanisms for isolation, whereas they classify them along performance.
§ 8 CONCLUSION
This paper aimed to craft a virtualization classification. This was done by dissecting established virtualization technologies and by studying scientific articles published in the virtualization domain. This process led to a taxonomy that presents every substantial building block that enables isolation. On the most superficial level, this taxonomy divides technologies into the three categories (i) hypervisor-, (ii) container- and (iii) sandbox-based. Since the application of enabling concepts is not limited to the respective category, (iv) hybrid-based extends this list by one. The final resulting taxonomy is presented in fig. 10.
Besides this summary, reflections on the resulting classification are discussed in section 8.1. Moreover, thoughts regarding possible future work are presented in section 8.2. That section takes up ideas that arose during the work on this paper.
§ 8.1 DISCUSSION
What is yet to be shown, though, is whether this also holds for any thinkable manifestation of virtualization technology. While this paper carefully crafted a classification taxonomy, the virtualization landscape is ever-changing. Minor adaptations might be necessary in order to assess any past or upcoming technology. Table 1 briefly shows a small proportion of those manifestations, including upcoming ones.
Moreover, the classification performed here does not create an ordinal scale. An ordering based on isolation capability, startup time or performance overhead can only be performed based on measurements.
A central limitation regarding virtualization technologies certainly is the focus on the Linux OS. Other OSs also offer virtualization technologies, including Microsoft's solutions like Hyper-V ${}^{38}$ or closed-source hypervisors like VMware's ESX ${}^{39}$. Moreover, other UNIX-based OSs offer solutions for container-based virtualization, like FreeBSD's Jails [1]. The methodology to measure those systems does not change; the profiling technology, however, needs to.
§ 8.2 FUTURE WORK
Furthermore, since all the technologies presented in this paper are very Linux-focused, an adaptation to different OSs might be interesting. Especially technologies only applicable to Microsoft's Windows Server ${}^{40}$ could yield additional insights and possibly even a new class in the taxonomy.
As this taxonomy shows, a broad range of virtualization technology manifestations is possible. Considering different versions and configurations of those results in even more actually implemented solutions. This aspect is an essential reason not to try to analyse every technology regarding its isolation capabilities, but rather to craft a method to sensibly measure isolation on demand. Such an approach enables comparing isolation for a specific use case. This is not limited to isolation, though. As mentioned in section 8.1, other ordinal scales like performance impact could be of interest. Even a "multi-criteria decision making" approach could be applied, as pursued by Domaschka et al. [21].
${}^{38}$ https://docs.microsoft.com/virtualization
${}^{39}$ https://www.vmware.com/de/products/esxi-and-esx.html
${}^{40}$ https://www.microsoft.com/en-gb/windows-server
As mentioned before in section 7, Anjali et al. presented a scale for virtualization [8]. Combined with the findings of this paper and a sensible measurement methodology, both could be verified against their capability to isolate. Further, these could support addressing the challenge of Williams et al., who question virtualization for cloud computing in general [69].
papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/8xwPz-Vx8sq/Initial_manuscript_md/Initial_manuscript.md
ADDED
papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/8xwPz-Vx8sq/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/Ze6AJKHsOP/Initial_manuscript_md/Initial_manuscript.md
ADDED
papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/Ze6AJKHsOP/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/nJ6e-3M2rx/Initial_manuscript_md/Initial_manuscript.md
ADDED
papers/JSYS/JSYS 2022/JSYS 2022 Oct_Papers/nJ6e-3M2rx/Initial_manuscript_tex/Initial_manuscript.tex
ADDED
§ SOLUTION: SCALABLE, CONTIGUOUS SEQUENCING FOR BUILDING CONSISTENT SERVICES
§ ABSTRACT
Some recent services use a sequencer to simplify ordering operations on sharded data. The sequencer assigns each operation a multi-sequence number which explicitly orders the operation on each shard it accesses. Existing sequencers have two shortcomings. First, failures can result in some multi-sequence numbers never being assigned, exposing a noncontiguous multi-sequence, which requires complex scaffolding to handle. Second, existing implementations use single-machine sequencers, limiting service throughput to the ordering throughput of one machine.
We make two contributions. First, we posit that sequencers should expose our new contiguous multi-sequence abstraction. Contiguity guarantees every sequence number is assigned an operation, simplifying the abstraction. Second, we design and implement MASON, the first system to expose the contiguous multi-sequence abstraction and the first to provide a scalable multi-sequence. MASON is thus an ideal building block for consistent, scalable services. Our evaluation shows MASON unlocks scalable throughput for two strongly-consistent services built on it.
§ 1 INTRODUCTION
Designers of large-scale distributed services grapple with the tradeoff between strong consistency and high performance. A strongly-consistent distributed service is a useful building block because applications can reason about its behavior as if it were running on a single machine. However, strong consistency requires coordination among a service's servers, adding overhead.
Some recent services achieve consistency using a sequencer to explicitly order data accesses a priori, removing the need to coordinate concurrent accesses [35, 53]. This enables sequencer-based designs to achieve strong consistency with higher throughput than other approaches.
An existing abstraction enabled by sequencers is the multi-sequence abstraction. This abstraction uses a collection of sequence spaces, i.e., logically independent sequences of strictly increasing integers, to provide a strictly serializable ordering of accesses to different subsets (shards) of the service's data. An operation that needs cross-shard ordering gets an atomically assigned multi-sequence number containing a sequence number from the sequence space of each shard the operation accesses. An execution protocol, designed by the service developer, defines the sequence spaces involved in an operation and how shards use multi-sequence numbers to execute operations. Driven by the execution protocol, the service's servers use the sequence numbers to order operations on the shard(s) they manage, with the multi-sequence numbers atomically ordering operations relative to other operations to provide strong consistency. Operations ordered by multi-sequence numbers can be executed without coordination across servers, enabling strongly consistent, scalable, and efficient services.
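To make the abstraction concrete, the following minimal C sketch (purely illustrative; it is not MASON's or any cited system's implementation, and all names and sizes are assumptions) shows a monolithic sequencer atomically drawing the next number from each requested sequence space:

```c
#include <pthread.h>
#include <stdint.h>

#define NUM_SHARDS 16                    /* hypothetical shard count */

static uint64_t next_seq[NUM_SHARDS];    /* one sequence space per shard */
static pthread_mutex_t seq_lock = PTHREAD_MUTEX_INITIALIZER;

/* Atomically assign a multi-sequence number: out[i] receives the next
 * sequence number from the sequence space of shards[i]. Holding one
 * lock across all spaces makes the assignment all-or-nothing. */
void multi_seq_assign(const int *shards, int n, uint64_t *out) {
    pthread_mutex_lock(&seq_lock);
    for (int i = 0; i < n; i++)
        out[i] = next_seq[shards[i]]++;
    pthread_mutex_unlock(&seq_lock);
}
```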
However, the abstraction used by recent services is a noncontiguous multi-sequence: failures can cause holes in the sequence space, i.e., sequence numbers that are never used. To preserve consistency, a service must identify and reason about all holes. Identifying holes requires service-wide coordination between the service's servers to reach consensus on whether a sequence number has an associated operation that can be recovered. If not, then it is a hole, and the servers must coordinate to avoid using any sequence numbers that are part of the same multi-sequence number as the hole. Implementing consensus and service-wide coordination to handle holes significantly complicates execution protocol design (§2.2).
This paper introduces the contiguous multi-sequence abstraction for building consistent, scalable services. The contiguous multi-sequence abstraction assigns exactly one operation to every integer in each sequence space such that no sequence space has a hole. Contiguity strengthens the multi-sequence abstraction over its existing noncontiguous counterpart by hiding consensus and service-wide coordination, simplifying the development of services. Some existing services use the noncontiguous multi-sequence abstraction internally to expose higher-level abstractions like distributed databases [35, 53]. Compared to higher-level abstractions, the contiguous multi-sequence supports developing more diverse functionality, e.g., ephemeral objects (§6).
In addition to being noncontiguous, existing implementations of the multi-sequence abstraction [35, 53] suffer from a second limitation: they have an ordering throughput ceiling that limits the throughput of any services built on top of them. These implementations use a monolithic sequencer, a single machine whose only task is to hand out multi-sequence numbers, enabling low-latency ordering that is easy to reason about. A monolithic sequencer can order operations with higher throughput than coordination-based mechanisms, but this design can only achieve ordering throughput up to the throughput limit of a single machine. Thus, a service built on a monolithic sequencer cannot scale.
Our system, MASON, addresses the ordering throughput limitation. MASON is a building block for distributed services that provides the contiguous multi-sequence abstraction with no ceiling on ordering throughput, unlocking scalability for services that were previously unscalable. MASON's contiguous multi-sequence implementation enables services to (1) use simple execution protocols that need not incorporate consensus or service-wide coordination and (2) scale to achieve service throughput far higher than what is possible with monolithic sequencers.
Our key insight is that MASON can enable simple execution protocols and scalability via a layer of replicated proxies between clients and a monolithic sequencer. To overcome the failure modes that expose holes, the proxy layer provides fault tolerance for clients and the sequencer, guaranteeing the contiguous multi-sequence abstraction. To overcome the monolithic sequencer's ceiling on ordering throughput, proxies batch requests for multi-sequence numbers. This batching is perfect, in that the sequencer does no more work to allocate one million contiguous numbers than it does to allocate a single number. Each replicated proxy operates essentially independently, allowing the proxy layer to scale out; adding more proxies increases ordering throughput. These techniques enable MASON to scale: if the sequencer is the bottleneck, proxies increase batch size; if the proxy layer is the bottleneck, more proxies are added.
Our evaluation shows MASON provides scalable ordering throughput: with one sequence space, MASON achieves ~16.7 Mops/sec with 24 proxy machines, scaling to ~31.5 Mops/sec with 48 proxy machines. MASON's tradeoff for a stronger abstraction and scalable ordering throughput is higher latency relative to monolithic-sequencer designs, since the proxies and a single round of replication are on path for each request. MASON's latency is still low, however, with a median latency of ~243 µs at the reported throughputs.
We demonstrate MASON's value as a building block by using it to implement Corfu-MASON, a distributed shared log modeled after CORFU [3]; and ZK-MASON, a distributed prototype of the coordination service ZooKeeper [20]. With MASON's strong abstraction, it was easy to build these services that consistently execute cross-shard operations (§6). MASON also unlocked scalability for them, in contrast to their fundamentally unscalable original designs. Specifically, our implementation of CORFU's original design is limited to ~14.1 Mops/s (nearly line rate for a sequencer with a 10 GbE NIC, ~14.5 Mops/s). Building it on MASON lets it scale from ~7.3 Mops/s (one server) to ~29.1 Mops/s (four servers). Our implementation of ZooKeeper's original design is limited to ~150 Kops/s; its MASON-based implementation scales from ~1.3 Mops/s (one server) to ~7 Mops/s (eight servers).
This paper makes two major contributions. The first is the contiguous multi-sequence abstraction, which simplifies building correct services compared to the previous noncontiguous multi-sequence abstraction. While the noncontiguous multi-sequence abstraction demands significant distributed systems expertise to use correctly, our abstraction shields service developers from the complexity of reasoning about holes (§2). By handling this complexity internally, the contiguous multi-sequence abstraction enables faster development of new services, promotes designs with fewer bugs, and enables developers without distributed systems expertise to develop scalable distributed services. The second major contribution is the design of MASON, which notably is the first multi-sequence design that is scalable. MASON's inherent scalability is the foundation for removing the throughput ceiling from existing and future services built on a multi-sequence abstraction (§5). Together, these contributions make it easy to build consistent services with a newfound ability to scale service throughput (§6).
§ 2 THE CONTIGUOUS MULTI-SEQUENCE
This section is an orientation to the multi-sequence abstraction. Section 2.1 explains how to build strongly-consistent services with the generic multi-sequence abstraction. Section 2.2 describes why building services with the existing noncontiguous multi-sequence abstraction is challenging. Our contiguous multi-sequence abstraction instead makes it easy to use multi-sequences to build scalable, consistent services.
§ 2.1 BUILDING SERVICES WITH MULTI-SEQUENCES
Services built on the generic multi-sequence abstraction typically include clients, a sequencing component, and servers, each holding one or more shards. Typically, each shard stores a subset of the service's data and is replicated for fault tolerance. Each shard has its own sequence space, a sequence of strictly increasing integers that order operations on the shard's data. To execute an operation, a client identifies the shards involved in the operation, gets a multi-sequence number from the sequencing component with one number from each relevant shard's sequence space, and sends the operation to the shards' servers with the multi-sequence number. Each server locally uses the multi-sequence number to order this operation's data accesses relative to other operations' accesses.
We next define multi-sequence numbers, explain how they are assigned to operations consistently, and describe how execution protocols use them to scale execution.
Multi-sequence numbers A multi-sequence number, $n$, is a set of $\langle ssid, sn\rangle$ tuples, where $ssid$ is a unique number identifying the sequence space and $sn$ is a sequence number in that space. The sequence number in space $s$ in multi-sequence number $n$ is denoted $n_s$. For a set of sequence spaces requested by a client, the sequencing component returns a multi-sequence number consisting of the next sequence number $n_s$ in each relevant space $s$.
Strictly serializable multi-sequence number assignment From clients' perspectives, strictly serializable services process operations one at a time in an order that a single machine could have received them [46]. Concretely, strict serializability requires that there exists a legal total order of operations consistent with the partial ordering of "real-time" precedence, i.e., if $a$ completes before $b$ begins, then $a$ must be ordered before $b$ [19, 46].
Multi-sequence numbers enable strongly consistent distributed services when assigned to operations in a strictly serializable order. To simplify discussion, we define a default, $\Delta$, where $n_s = \Delta$ for all $n_s$ not mapped to a specific sequence number (i.e., all $s$ not in this multi-sequence number). For the set of all sequence spaces $S$, we define a partial ordering over all multi-sequence numbers where $a < b \Leftrightarrow \forall s \in S,\, a_s \neq \Delta \land b_s \neq \Delta \Rightarrow a_s < b_s$. The multi-sequence abstraction guarantees that two multi-sequence numbers either share no common sequence spaces or are strictly ordered (i.e., if $a_s < b_s$ for one common space $s$, then $a_{s'} < b_{s'}$ for all common spaces $s'$, implying $a < b$). The partial ordering of the multi-sequence numbers defines the ordering of operations. If strict serializability imposes an ordering between two operations, then multi-sequence numbers assigned on path with their execution capture that ordering.
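As a concrete illustration, the comparison this partial order implies can be sketched in a few lines of C++ (the types and names here are ours, not MASON's API):

```cpp
#include <cstdint>
#include <map>

// Sketch (our names, not MASON's API): a multi-sequence number maps each
// sequence-space id (ssid) it covers to a sequence number; spaces it does
// not cover are simply absent, corresponding to the default value Delta.
using MultiSeqNum = std::map<uint32_t, uint64_t>;  // ssid -> sn

enum class Order { Before, After, Concurrent };

// a < b iff a_s < b_s for every sequence space s the two numbers share.
// The abstraction guarantees all shared spaces agree, so one agreeing
// space determines the outcome; the loop checks them all anyway.
Order compare(const MultiSeqNum& a, const MultiSeqNum& b) {
    bool shared = false, aFirst = false;
    for (const auto& [ssid, sn] : a) {
        auto it = b.find(ssid);
        if (it == b.end()) continue;   // space not shared
        shared = true;
        aFirst = sn < it->second;      // numbers in one space never tie
    }
    if (!shared) return Order::Concurrent;  // no common sequence spaces
    return aFirst ? Order::Before : Order::After;
}
```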
Execution protocols To use the multi-sequence abstraction, a service developer implements an execution protocol that executes operations in order of their multi-sequence numbers, yielding a strictly serializable service. The execution protocol runs on clients (typically encapsulated in a client library) and on the service's servers. For clients, the execution protocol defines how operations are mapped to the service's shards and which sequence spaces are involved in a given operation. For servers, it determines when shards can safely execute operations, based on the operations' multi-sequence numbers.
Scalable execution Multi-sequence numbers enable services to scale throughput up to the rate the sequencer can assign sequence numbers. Execution scales through parallelism: when some shards are executing an operation, other shards can execute a different operation. The sequence spaces in multi-sequence numbers determine which operations can execute in parallel, as operations with disjoint multi-sequence numbers access different shards. As long as multi-sequence number assignment keeps up, the service can increase its throughput by adding more machines and creating more shards. However, existing multi-sequenced services use monolithic (single-machine) sequencers, which can never assign sequence numbers to operations at a higher rate than a single machine can support and hence limit the service's scalability.
§ 2.2 FROM NONCONTIGUOUS TO CONTIGUOUS
The generic multi-sequence abstraction is realized as a noncontiguous abstraction in existing services, which use it to expose higher-level abstractions [35, 53]. As we explain next, noncontiguity complicates service development. In contrast, the contiguous multi-sequence abstraction simplifies developing services with multi-sequences by encapsulating that complexity within the abstraction.
Holes in a noncontiguous sequence complicate the abstraction Holes occur when a sequence number is not used for an operation. For example, a hole occurs if a client fails after receiving a sequence number but before using it. A shard may see, e.g., sequence numbers 1-3 and then receive an operation with sequence number 5, indicating a potential hole at 4. To preserve strict serializability, the shard may only execute operation 5 after 4 is used, since 4 could belong to any operation. To make progress in the absence of an operation, the service must decide that the entire multi-sequence number is a hole and enforce that it is not used on any shard, typically by assigning a no-op to each of its sequence numbers.
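To make the blocking effect of a hole concrete, here is a minimal sketch (ours, assuming a single sequence space whose numbering starts at 1) of a shard that executes operations strictly in sequence order and stalls at a gap:

```cpp
#include <cstdint>
#include <functional>
#include <map>

// Sketch: a shard buffers arriving operations by sequence number and
// executes them strictly in order. After executing 1-3, an arrival with
// number 5 stays buffered: the shard stalls until sequence number 4 is
// delivered (in a noncontiguous design, possibly as a no-op).
struct Shard {
    uint64_t nextSn = 1;  // next sequence number to execute
    std::map<uint64_t, std::function<void()>> pending;

    void deliver(uint64_t sn, std::function<void()> op) {
        pending.emplace(sn, std::move(op));
        while (true) {                        // drain in sequence order
            auto it = pending.find(nextSn);
            if (it == pending.end()) break;   // potential hole: must wait
            it->second();                     // execute the operation
            pending.erase(it);
            ++nextSn;
        }
    }
};
```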
Handling holes complicates service design. The service must have a mechanism to identify sequence numbers that are potential holes. Existing designs use timeouts [3, 53] or infer holes from out-of-order operation arrival [35, 53]. More challenging is that the service's servers must reach service-wide consensus on whether a sequence number is a hole, then coordinate to ensure that the other numbers in the hole's multi-sequence number are treated as holes to avoid partially executing a cross-shard operation. Existing services achieve this with a global shared log [53] or a failure coordinator [35]. Requiring consensus in the execution protocol makes a service developer's task significantly more difficult. Consensus is hard to implement and incorporate [7, 45], and requires developers to understand the nuances of the sequencing component and consensus implementation in depth.
Although existing services feature workable solutions for handling holes, requiring services to select and properly incorporate a solution does not reflect operational best practices. Much of the purpose of providing infrastructure building blocks (such as an implementation of the multi-sequence abstraction) is to enable services to use them without needing to understand their complexities, via clean abstractions that mask the subtleties of their internal operation and failure modes. Pushing the complexity of handling holes to services increases the chances of one doing so incorrectly, similar to how pushing memory management to individual programmers increases the chances of memory leaks.
Our contiguous multi-sequence avoids holes and hides consensus Our abstraction assigns exactly one operation to each sequence number in each sequence space. Service developers can focus on designing execution protocols that achieve their services' goals, a much simpler task when freed from reasoning about holes or implementing consensus. Eris [35] and vCorfu [53], the two existing designs built on the noncontiguous multi-sequence abstraction, were developed by distributed systems experts. With the contiguous multi-sequence abstraction, we aim to empower developers without such expertise to use multi-sequences to build scalable, consistent services, and make it easier and faster for experts.
§ 3 MASON OVERVIEW
The central contributions of MASON are to shield services from the complexity of dealing with holes by providing the contiguous multi-sequence, and to provide the benefits of the multi-sequence abstraction while allowing ordering throughput to scale beyond what a monolithic sequencer can provide. Section 4 describes how the components work together to guarantee a contiguous multi-sequence. Section 5 describes how MASON enables scalability with two mechanisms that relieve all throughput bottlenecks.
Figure 1: The components of a service built with MASON and an operation's flow through the service. Blue components are part of MASON; yellow components are supplied by the service. Numbers correspond to steps in §3.3.
§ 3.1 MODEL AND ASSUMPTIONS
We assume a set of processes that communicate via point-to-point communication over an asynchronous network, where messages can be arbitrarily delayed and reordered. We assume a crash failure model, where processes execute according to their specification until they cease sending messages and the failure is undetectable to other processes. MASON is safe under these assumptions. We assume service shards implement at-most-once semantics to handle retransmissions.
§ 3.2 MASON COMPONENTS
Figure 1 shows how MASON is used in a service. It also shows MASON's two types of internal components: a sequencer and replicated proxies. The core of MASON's design is a monolithic sequencer that provides high-throughput operation ordering, surrounded by a replicated proxy layer that handles the failure modes and bottlenecks impeding existing sequencers.
The sequencer allocates increasing multi-sequence numbers. It is implemented by a single machine, and only one sequencer is active at a time. MASON keeps a backup sequencer on standby for failure recovery. The monolithic sequencer at MASON's core provides the benefits of existing sequencers: contention-free, high-throughput ordering of operations in a distributed system. In our system, MASON itself is the distributed system, leveraging the sequencer's benefits while managing its drawbacks to provide a simpler, scalable building block to the service.
Proxies are replicated state machines (our implementation uses Raft [44], though any RSM would work). A proxy is thus logically a single entity implemented by a leader process and multiple follower processes on separate machines. The leader accepts operations from clients and executes them via the service stub using multi-sequence numbers. The rest of this paper refers to a proxy replica group simply as a proxy. A MASON deployment may have one or more proxies, depending on system load.
Identical to many other RSM-based systems, we assume at most $f$ of $2f+1$ proxy replicas fail [28, 34, 39, 44]. A MASON deployment must be configured so that $f$ is sufficiently large. In the rare event that more than $f$ machines fail, manual intervention by an operator is necessary to restore availability.
Service stubs are implemented by the service built on MASON and drive the execution protocol on the proxies. Service developers interact with MASON on the proxies through service stubs, which execute within the proxy's process. When a proxy receives an operation from a service's client, it passes the operation to the stub. The stub either requests that MASON order the operation, or executes the operation immediately if it need not be ordered, e.g., an inconsistent read. After ordering and replicating the operation, the proxy returns it to the stub, which begins the execution protocol. Stubs are analogous to client libraries in existing multi-sequenced services. Section 6 shows how stubs are used to develop services.
The proxy may batch requests for multi-sequence numbers for scalability, i.e., request multi-sequence numbers for multiple client operations in one sequencer request (§5). The sequencer allocates a multi-sequence number for each operation in the batch. An allocated multi-sequence number is one given to a proxy that the sequencer promises not to allocate again. Proxies are responsible for assigning multi-sequence numbers to client operations. Assignment uses replication to permanently associate a multi-sequence number with an operation and guarantee it will never be assigned to another operation. Once the proxy has replicated the assignment of a multi-sequence number to an operation, it returns the operation and multi-sequence number to the service stub for execution.
§ 3.3 NORMAL-CASE OPERATION OF MASON
The normal case operation of MASON, shown in Figure 1, includes the following steps:
1. A client sends an operation to a proxy.
2. The proxy passes the operation to the service stub, which determines the relevant sequence spaces.
3. The proxy asks the sequencer to allocate a multi-sequence number covering the relevant sequence spaces.
4. The proxy replicates the allocated number and operation, assigning the number to the operation.
5. The proxy returns the operation and multi-sequence number to the service stub.
6. The service stub and shards run the execution protocol.
7. The proxy sends the response from the stub to the client.
§ 4 ENSURING A CONTIGUOUS MULTI-SEQUENCE
MASON provides a contiguous multi-sequence by handling all potential sources of holes: client failures, network drops, sequencer failures, and combinations thereof. This section covers how MASON handles each of these failure scenarios and then sketches a proof of strict serializability.
§ 4.1 PROXIES PREVENT HOLES FROM CLIENT FAILURE
In a multi-sequenced service, client failure can cause holes when the client obtains a sequence number and fails before using it in the service. MASON prevents such holes with proxies that manage multi-sequence numbers on clients' behalf. Proxies are replicated for fault tolerance, eliminating this source of holes. A proxy will always return an operation that was assigned a multi-sequence number to the service stub even if the client fails and even if a minority of proxy replicas fails.
A byproduct of replication is that proxies maintain a record of every assigned sequence number, which is used in sequencer recovery (§4.3). By masking client failure and maintaining state needed for sequencer recovery, proxy replication is a key mechanism for avoiding holes in MASON.
The proxy replication strategy is driven by correctness and performance. Proxies must replicate enough information to preserve contiguity and strict serializability. Replicating every input to the proxy leader would be correct, but this would add unacceptable latency to client requests and burden proxies with excessive communication overhead. Fortunately, MASON can skip replication for all but one step in operation processing, because the other steps can be safely retried, including after client, sequencer, and/or proxy replica failure.
The exception is step 5 (Fig. 1), returning a multi-sequenced client operation to the service stub. Replicating the mapping of each client operation to a multi-sequence number before this step is critical for correctness in MASON. Suppose the mapping is not replicated. The sequencer and proxy leader could fail concurrently after the leader returns a multi-sequenced client operation to its service stub, but before the stub sends its operation to every relevant shard. The shards that received the operation may execute it, but the operation will not be completed after recovery because the mapping of multi-sequence number to operation was lost. Exposing the partial execution violates strict serializability. Therefore, before returning an operation to the service stub, the proxy must permanently associate the operation with a multi-sequence number through replication. Once replication succeeds, the sequence number is assigned to the operation.
We next describe how the proxy processes operations, in order to explain why all other steps are safe to retry. We discuss one operation and a single sequence space for ease of explanation; the reasoning can be easily extended to batches of operations and multiple sequence spaces.
Receiving a client operation Clients can send an operation to any proxy. When a proxy leader receives an operation from a client, it passes the operation to the service stub. If the stub requests that the operation be ordered, the leader allocates a sequencer request ID for that operation (step 3 in Figure 1). Sequencer request IDs are allocated only by the leader, so they are trivially contiguous and strictly increasing. Sequencer request IDs are used during proxy failover to recover sequence numbers that were allocated but not yet assigned to any operation, i.e., potential holes.
Requesting a sequence number The leader then requests a sequence number from the sequencer, with the sequencer request ID (step 3). If the sequencer has not seen this sequencer request ID from this proxy, the sequencer updates its state in two relevant ways: it allocates a sequence number for this request by incrementing the sequence counter in the requested sequence space, and maps the sequencer request ID to the allocated number. If the sequencer has seen the sequencer request ID before, it responds with the previously allocated sequence number and marks it as a retransmit.
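A minimal sketch of this allocation logic, using our own field names and the single-sequence-space simplification of this discussion (sequence counters start at 0 in the sketch):

```cpp
#include <cstdint>
#include <map>
#include <utility>

// Sketch of the sequencer's allocation state: a counter per sequence space
// plus a memo of past allocations keyed by (proxy id, sequencer request id),
// which makes retransmitted requests idempotent.
struct Sequencer {
    std::map<uint32_t, uint64_t> next;  // ssid -> next unallocated number

    struct Reply { uint64_t sn; bool retransmit; };
    std::map<std::pair<uint32_t, uint64_t>, uint64_t> seen;  // (proxy, reqId) -> sn

    Reply allocate(uint32_t proxyId, uint64_t reqId, uint32_t ssid) {
        auto key = std::make_pair(proxyId, reqId);
        auto it = seen.find(key);
        if (it != seen.end())
            return {it->second, /*retransmit=*/true};  // seen before
        uint64_t sn = next[ssid]++;                    // fresh allocation
        seen.emplace(key, sn);
        return {sn, /*retransmit=*/false};
    }
};
```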
Proxy leader failure When a proxy leader fails, the new leader must recover the sequence numbers that were allocated but not yet assigned. The state needed to correctly match allocated but unassigned sequence numbers to operations was lost with the failed leader, so these are temporary holes. We now explain how we use sequencer request IDs to recover such holes. This is the key mechanism for ensuring correctness when proxies execute only one round of replication.
The new leader collaborates with the sequencer to identify these temporary holes as follows:
1. The new leader saw a contiguous set of sequencer request IDs until some ID $x$, after which it saw noncontiguous IDs until $y$. The range from $x$ to $y$ is noncontiguous because the leader replicates sequence number-operation pairs as they arrive from the sequencer, which may be out of order.
2. The new leader requests sequence numbers for all IDs from $x + 1$ until $y - 1$ that were not replicated. The sequencer will either return already-allocated sequence numbers, or will allocate new numbers for the IDs.
3. The new leader replicates and assigns all returned sequence numbers to no-ops and returns them to the service stub.
4. The new leader then resumes normal operation, allocating sequencer request IDs from $y + 1$.
There may be allocated but unassigned sequence numbers with sequencer request IDs greater than $y$. In such cases, the sequencer will mark the returned sequence numbers as retransmits. The leader replicates and assigns them to no-ops and retries the request with a new sequencer request ID. If the sequencer fails concurrently with leader failure, the sequencer recovery protocol recovers and assigns no-ops to any allocated but unassigned sequence numbers (§4.3).
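Putting these steps together, a sketch of the gap-filling pass, reusing the `Sequencer` sketch above; `assignToNoOpAndReplicate` is a hypothetical stand-in for steps 3 and 4:

```cpp
#include <cstdint>
#include <set>

void assignToNoOpAndReplicate(uint64_t sn);  // hypothetical: steps 3-4 above

// Sketch: the new leader re-requests every sequencer request ID in (x, y)
// whose assignment was never replicated. The sequencer returns the
// previously allocated number for IDs it has seen, or a fresh one otherwise;
// either way, the number is assigned to a no-op.
void recoverTemporaryHoles(Sequencer& seq, uint32_t proxyId, uint32_t ssid,
                           const std::set<uint64_t>& replicatedIds,
                           uint64_t x, uint64_t y) {
    for (uint64_t id = x + 1; id < y; ++id) {
        if (replicatedIds.count(id)) continue;     // assignment survived
        auto r = seq.allocate(proxyId, id, ssid);  // idempotent re-request
        assignToNoOpAndReplicate(r.sn);
    }
    // Normal operation resumes with sequencer request IDs from y + 1.
}
```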
Returning the operation and sequence number to the service stub (step 5) Strict serializability dictates that the service's execution protocol cannot use one sequence number for multiple operations, and different sequence numbers cannot be used for one operation. MASON must therefore guarantee the sequence number associated with an operation never changes once the service is made aware of it. MASON thus replicates the sequence number-to-operation assignment (step 4) before passing the operation to the service stub.
The proxy leader's other steps in handling a client operation, namely passing the operation to the service stub and forwarding the service's response to the client (step 7), can be safely left unreplicated. Retrying these steps is safe. The service stub, shards, and clients already provide at-most-once semantics to handle retransmission due to network drops, so they will be able to handle retransmission from the proxies.
§ 4.2 RELIABLE TRANSPORT PREVENTS HOLES FROM PACKET LOSS
MASON handles network drops with a reliable transport layer. Since the state needed to reliably transport multi-sequence numbers is lost on sequencer failure, MASON uses a recovery protocol to correctly fill holes with no-ops in case of simultaneous packet loss and sequencer failure (§4.3). Reliable transport and the sequencer recovery protocol ensure that every allocated multi-sequence number arrives at a proxy.
§ 4.3 RECOVERING TO PREVENT HOLES FROM SEQUENCER FAILURE
In MASON, sequencer failure can cause temporary holes if failure occurs before the reliable transport protocol can retransmit a dropped response. Suppose the sequencer allocates and sends multi-sequence numbers $a$ and $b$ , where $a < b$ , for two client operations. If the message containing $a$ is dropped and the sequencer fails before retransmission, but a proxy receives $b$ , then $a$ is a temporary hole. One solution replicates the sequencer to permanently associate client requests and multi-sequence numbers. However, replication compromises the main benefit of a sequencer: simplified ordering so the sequencer can devote all its resources to allocating numbers.
MASON instead runs one active sequencer, backed by an idle backup sequencer and sequencer recovery protocol. If the active sequencer fails, the backup sequencer takes over and executes the recovery protocol to correctly fill any temporary holes caused by the failure, ensuring a contiguous multi-sequence when the backup resumes normal operation.
MASON's sequencer recovery protocol is based on two observations. First, the proxies' collective state includes which sequence numbers have been assigned, so they collectively know where potential holes in each sequence space are. MASON assigns these sequence numbers to no-ops. Second, all outstanding operations are concurrent. An outstanding operation is one that a proxy received (step 1 in Fig. 1), but has not yet assigned a sequence number (step 4), and thus is not ordered. When the backup sequencer resumes normal operation, it can allocate new multi-sequence numbers for outstanding operations in any relative order, as long as they are ordered after the highest previously-assigned sequence number in each sequence space, which the proxies collectively know.
The steps in MASON's sequencer recovery protocol are:
a) Detect sequencer failure and activate a backup sequencer.

b) Identify potential holes in each sequence space.

c) Replicate the assignment of no-ops to holes.

d) Resume normal operation with new sequence numbers.
Failure detection and backup sequencer activation Proxies unreliably detect sequencer failure with timeouts and pings. If a proxy does not hear from the sequencer after a timeout (0.5 s in our implementation), it pings the sequencer. After another timeout, the proxy declares the sequencer failed and initiates recovery by activating the backup sequencer. The backup sequencer informs the other proxies that recovery has begun. All proxies then replicate a special recovery operation and seal their sequence spaces, rejecting any packets from the previous sequencer. The new sequencer waits for all proxies to complete the sealing process before resuming recovery. Replicating the recovery operation on all proxies before allowing the backup sequencer to resume recovery ensures proxies reject all packets from the previous sequencer. This, in turn, ensures there is only one active sequencer at a time even when proxy leaders fail, sequencer-failure detection is incorrect, or messages from the previous sequencer were delayed or reordered in the network.
Identifying potential holes During normal operation, proxies track their local views of each sequence space. A proxy's local view is the subsequence of numbers in each sequence space that the proxy has assigned to operations. After sealing, proxies send their local views to the backup sequencer. The backup sequencer reconstructs each sequence space, exposing any temporary holes. Garbage collection of proxies' local views is described at the end of this section.
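A sketch of the reconstruction for one sequence space (ours; it ignores garbage-collected intervals, which are known to be fully assigned, and assumes numbering starts at 1):

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Sketch: merge the proxies' local views (the numbers each proxy assigned
// in one sequence space) and report every number below the maximum that no
// proxy assigned; those are the temporary holes to fill with no-ops.
std::vector<uint64_t> findHoles(const std::vector<std::set<uint64_t>>& views) {
    std::set<uint64_t> assigned;
    for (const auto& v : views) assigned.insert(v.begin(), v.end());

    std::vector<uint64_t> holes;
    if (assigned.empty()) return holes;
    for (uint64_t sn = 1; sn <= *assigned.rbegin(); ++sn)
        if (!assigned.count(sn)) holes.push_back(sn);
    return holes;
}
```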
Assigning temporary holes to no-ops The backup sequencer notifies proxies of any temporary holes in each sequence space. Proxies assign these sequence numbers to no-ops, replicate the assignment, and pass them to the service stubs, as they would with client-issued operations.
Resuming normal operation The backup sequencer identifies the start of each sequence space based on the highest number in each sequence space compiled from the proxies. It then notifies proxies to resume normal operation and allocates new sequence numbers from that point. Proxies must re-request sequence numbers for all outstanding operations.
Garbage-collecting sequence number tracking state Proxies run a lightweight garbage collection protocol to discard tracked sequence numbers that are no longer needed for sequencer recovery. Each sequence space is partitioned into intervals of size $N$ . When all $N$ sequence numbers in an interval have been assigned to operations, it is safe to discard the state associated with those sequence numbers. To determine when all $N$ numbers have been assigned, the proxies form a communication ring and periodically send an accumulating count of the sequence numbers assigned in each sequence space's latest interval. At the end of a round, if any sequence space’s count is $N$ , the interval is completely assigned; all state associated with that interval is discarded.
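A sketch of the discard rule for one sequence space (the interval size and all names are ours):

```cpp
#include <cstdint>
#include <map>

// Sketch of garbage collection for one sequence space: each proxy counts
// assignments per interval of size N; the ring message accumulates these
// counts, and an interval whose accumulated count reaches N is fully
// assigned, so its tracking state can be discarded everywhere.
constexpr uint64_t N = 1 << 16;  // interval size, a tuning knob

struct GcState {
    std::map<uint64_t, uint64_t> localCount;  // interval -> local assignments

    void onAssign(uint64_t sn) { ++localCount[sn / N]; }

    // When the ring message visits this proxy, fold local counts into the
    // accumulating per-interval counts it carries.
    void accumulate(std::map<uint64_t, uint64_t>& ringCounts) const {
        for (const auto& [interval, count] : localCount)
            ringCounts[interval] += count;
    }
};
// End of a round: any interval whose accumulated count equals N is discarded.
```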
§ 4.4 PROOF SKETCH OF STRICT SERIALIZABILITY
This subsection sketches a proof of the strict serializability of the assignment of multi-sequence numbers to operations. We include the formal proof in the appendix (§A). We make the assumptions stated in §3.1. Our proof reasons about pairs of operations, showing they are either strictly concurrent, where they do not share sequence spaces, or strictly ordered, where if $a_n < b_n$ for some overlapping sequence space $n$, then $a_{n'} < b_{n'}$ for all overlapping sequence spaces $n'$, where $a_n$ denotes the sequence number in sequence space $n$ assigned to operation $a$.
To show that there exists a total order over all completed operations consistent with the partial ordering of real-time precedence, we exhaustively analyzed all cases of failure scenarios from no failures to concurrent failure of proxy leaders, proxy followers, and sequencer. In all cases an operation is assigned at most one multi-sequence number which occurs if/when replication to a majority of replicas in a proxy succeeds. The assigned multi-sequence numbers for all operations that access overlapping sequence spaces are then strictly ordered by either the same sequencer, or by an initial sequencer and a backup sequencer that recovers all previous assignments before allocating any new multi-sequence numbers. Thus, the partial order of assigned multi-sequence numbers strictly orders all conflicting operations. Further, this partial order is consistent with real-time precedence either trivially when two operations are ordered by the same sequencer or because a backup sequencer only allocates numbers larger than the maximum previously assigned in each sequence space. Only strictly concurrent (i.e., no overlapping sequence spaces) operations are unordered by that partial order, and any ordering of them results in a valid total order. Extending the partial order to a total order consistent with real-time precedence is thus trivial: unordered operations are first ordered by the partial order of real-time precedence and then remaining unordered operations are arbitrarily ordered.
§ 5 SUPPORTING SCALABLE THROUGHPUT
A service's achievable throughput (service throughput) is capped by the minimum of the rate at which it can execute requests (execution throughput) and the rate at which it can order requests (ordering throughput). Execution throughput scales when more service shards are added if and only if the service implements a scalable execution protocol. Ordering throughput scales only if the ordering component scales. Previous multi-sequence abstraction designs do not scale.
MASON supports scalable service throughput by removing the bottlenecks that limit monolithic-sequencer designs and achieving scalable ordering. This section describes two complementary mechanisms that alleviate all ordering throughput bottlenecks: horizontally scaling out the proxy layer, and batching requests to the sequencer.
Potential ordering throughput bottlenecks MASON has two components, so there are two potential bottlenecks on computation: the proxy layer and the sequencer. Each component sends and receives network traffic, so there are four potential bottlenecks on network bandwidth. Our two scaling mechanisms address all six bottlenecks: scaling out the proxy layer relieves all bottlenecks at the proxy layer, and batching relieves all bottlenecks at the sequencer.
The proxy layer scales out When MASON is bottlenecked by a proxy layer resource, the proxy layer can scale out. Each proxy operates essentially independently, so holding all else constant, doubling the number of proxies doubles the amount of computation and bandwidth available at the proxies for processing client operations, doubling the proxy layer's achievable throughput.
In truth, proxies are not completely independent; there is overhead to garbage collect multi-sequence number tracking state (§4.3). However, the overhead is constant for each proxy with respect to the number of proxies due to the ring communication pattern; thus, it does not affect the proxy layer's scalability.
Batches are as efficient as single requests When MASON is bottlenecked by the sequencer, proxies can increase throughput by batching multi-sequence number requests. This batching is perfect, holding all else constant, in that a request for one client operation uses the same resources as a request for multiple operations.
To request multi-sequence numbers for a batch of client requests, the proxy constructs a single sequencer request that indicates the relevant sequence spaces and how many numbers are required from each space to order the operations in the batch. The sequencer allocates the requested count of sequence numbers in each sequence space and replies with the lowest allocated number in each space. Finally, the proxy iterates through client operations in the order they were received and gives each operation the next lowest sequence number in each of its sequence spaces.
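A sketch of the proxy-side assignment once the batched reply arrives (types and names are ours; the sketch assumes the reply covers every requested sequence space):

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Sketch: the sequencer's reply carries the lowest allocated number per
// requested sequence space; the proxy walks the batch in arrival order,
// handing each operation the next consecutive number in each of its spaces.
struct BatchReply { std::map<uint32_t, uint64_t> base; };  // ssid -> lowest sn

std::vector<std::map<uint32_t, uint64_t>>                        // per-op msn
assignBatch(const std::vector<std::vector<uint32_t>>& opSpaces,  // per-op ssids
            const BatchReply& reply) {
    std::map<uint32_t, uint64_t> next = reply.base;
    std::vector<std::map<uint32_t, uint64_t>> out(opSpaces.size());
    for (size_t i = 0; i < opSpaces.size(); ++i)   // arrival order
        for (uint32_t ssid : opSpaces[i])
            out[i][ssid] = next[ssid]++;           // consecutive per space
    return out;
}
```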
MASON alleviates all bottlenecks on the sequencer by increasing the batch size. MASON's batching is timeout-driven: all client requests that arrive at a proxy within the timeout are batched together. By doubling the timeout (hence batch size) at a given client load, proxies can halve the rate at which they issue sequencer requests. The sequencer, in turn, would need half the resources to handle the same client load. The sequencer can thus handle twice the ordering throughput before hitting the same bottleneck. Timeout-driven batching is naturally dynamic: higher client load results in larger batches.
Why not batch at clients? A strawman design for increasing ordering throughput is to batch requests at clients, which has two limitations. First, the maximum throughput is limited by the number of parallel requests a client will individually make. Second, batching at clients requires waiting until the client has issued those requests, which can substantially increase latency. In contrast, MASON's proxies can batch across any number of clients, achieving the large batches that allow it to scale. In general, naïvely adding only a batching layer to prior designs does not work, as it introduces new failure modes (e.g., batching machine failure) that require a comprehensive service redesign such as that of MASON.
§ 6 SERVICES
This section explains how services can easily use MASON and its contiguous multi-sequence abstraction to scale service throughput. We describe two services we implemented over MASON: a distributed shared log based on CORFU [3] and a distributed prototype of the coordination service ZooKeeper [20].
§ 6.1 INTERACTION WITH MASON
A service's execution protocol consists of (at least) two components: shards and service stubs. Shards are implemented entirely by the service and interact with service stubs and other service-implemented components. Service stubs are the mechanism by which services interact with proxies. They determine an operation's relevant sequence spaces and request ordering via MASON if necessary, drive the execution protocol interacting with other service components, and have control of the operation until informing MASON that the operation is complete. This is sufficient for the services we implement here; more complex services may need multi-round sequencing for some operations, e.g., where the write set depends on the read set. In that case, MASON could be augmented so that the stub could request another round of ordering and include metadata, which MASON replicates and the service can use to resume execution if the current proxy leader fails.
§ 6.2 MAKING CORFU SCALABLE: CORFU-MASON
CORFU is a shared log supporting append and read operations that consistently execute across shards [3]. Appends write a value to the current tail of the log. Reads return the value written to a specified log position. Many applications can be implemented with shared logs, e.g., producer-consumer queues and logging [22, 49].
We use MASON to implement Corfu-MASON, a service based on CORFU. CORFU's original implementation does not scale; although CORFU has a scalable execution protocol, the implementation is limited by the ordering throughput of its monolithic sequencer [3, 53]. By replacing the sequencer with MASON, MASON's scalable ordering combines with the scalable execution protocol to enable the whole service to scale.
Corfu-MASON uses CORFU's scalable execution protocol. The shared log is represented by a single sequence space. Appends acquire a sequence number that directly determines which log position to write. A round-robin mapping of log position to shard ensures append load is uniform on shards, enabling appends to execute in parallel [3].
Corfu-MASON implements two of CORFU's three operations, append(b) and read(l). append(b) appends the entry $b$ to the log and returns the log position $l$ to which it was written. read(l) returns the entry at log position $l$, or an error code if the entry does not exist. CORFU implements a third operation, fill(l), to fill holes in the sequence (and the log) caused by failed clients. CORFU clients detect holes in the log with a timeout and execute fill(l) to fill the $l$-th position with junk. The timeout-and-fill(l) procedure is unnecessary in Corfu-MASON because of MASON's contiguous sequence.
Corfu-MASON's execution protocol uses sequence numbers for appends to determine which log positions to write, which in turn map to specific shards. In addition to eliminating the need for fill operations, MASON's contiguous sequence simplifies reads. If a client attempts to read a log position that has not been written yet, it can simply keep checking that log position. The contiguous sequence guarantees that the entry will eventually be written. reads need not be ordered and hence are not ordered or replicated by MASON; the service stub executes reads immediately. CORFU tolerates shard failure using client-driven chain replication [52], and so Corfu-MASON uses service stub-driven chain replication.
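The placement logic this implies is a one-liner; a sketch with our function names:

```cpp
#include <cstdint>

// Sketch: an append's sequence number is its log position, and round-robin
// placement turns the position into a shard, spreading append load evenly.
uint64_t shardFor(uint64_t logPosition, uint64_t numShards) {
    return logPosition % numShards;
}
// A read of a not-yet-written position simply retries: contiguity
// guarantees that every position will eventually be written.
```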
Corfu-MASON was implemented in a single day thanks to both the simplicity of CORFU and the strong abstraction of a contiguous sequence provided by MASON.
§ 6.3 MAKING ZOOKEEPER SCALABLE: ZK-MASON
ZK-MASON is a ZooKeeper-like coordination service built on MASON. ZooKeeper [20] is a widely-used coordination service implemented on ZooKeeper Atomic Broadcast (ZAB) [23], a version of state machine replication (SMR). ZAB, like other SMR protocols, cannot scale: it is fundamentally limited by the rate a single machine can execute requests. Furthermore, ZooKeeper uses a single replicated state machine to ensure consistency, so an instance cannot be sharded. We designed ZK-MASON to be scalable, using the cross-shard consistency and scalable ordering provided by MASON.
ZK-MASON operations Similar to ZooKeeper, ZK-MASON maintains a set of znodes. Each znode has a pathname beginning with "/" (similar to a filesystem) and data associated with it. We implemented the following operations in ZK-MASON:
* create(path, data, flags): creates a znode with pathname path and data data. flags allows the client to specify a persistent or ephemeral znode.

* setData(path, data, version): sets the data at path if version matches the current version, or if version is -1.

* getData(path, watch): gets the data at path.

* exists(path, watch): checks if the znode exists.

* delete(path, version): deletes the znode specified by path if version matches the current version, or if version is -1.

* getChildren(path, watch): returns the children of path.
The read operations getData, exists, and getChildren return the znode's current version. Read operations have a watch flag, which sets a watch on the znode if the flag is set. ZK-MASON watches have the same semantics as ZooKeeper watches. Watches are triggered by updates depending on the type of read operation and the type of update operation. For example, a watch set by getChildren is triggered after a create or delete of a child, but not by any setData on its children, as that does not change the result of getChildren. ZK-MASON notifies the client when its watch is triggered.
ZK-MASON execution protocol ZK-MASON's execution protocol is based on Eris's execution protocol [35]. ZK-MASON assigns znodes to shards based on a hash of the full pathname. Shards consist of $2f+1$ servers; each shard tolerates $f$ failures. Each server executes incoming operations in order of the shard's sequence space. When a proxy receives a client operation, the service stub determines which shards are involved in the operation and requests a multi-sequence number for the relevant sequence spaces. For example, to execute a create, the service stub hashes the path and the parent pathname to get the sequence spaces for those two shards. MASON acquires and replicates a multi-sequence number with the two sequence spaces. The service stub sends a create operation to each server in path's shard and an addChild operation to each server in path's parent's shard in parallel. When the stub receives a quorum of $f+1$ responses from each shard, the operation is complete; the stub informs MASON of completion, and MASON returns to the client.
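A sketch of the sequence-space selection for create (the hash choice and helper names are ours; the two shards can coincide, in which case a single sequence space suffices):

```cpp
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Sketch: znodes map to shards by hashing the full pathname; a create
// involves the znode's shard and its parent's shard (for the child list).
uint32_t shardOf(const std::string& path, uint32_t numShards) {
    return static_cast<uint32_t>(std::hash<std::string>{}(path) % numShards);
}

std::vector<uint32_t> spacesForCreate(const std::string& path,
                                      uint32_t numShards) {
    std::string parent = path.substr(0, path.find_last_of('/'));
    if (parent.empty()) parent = "/";  // direct children of the root
    return {shardOf(path, numShards), shardOf(parent, numShards)};
}
```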
Ephemeral znodes Ephemeral znodes are transient znodes that exist only during an active client connection. They are created by a client and deleted by the service when the client disconnects, either explicitly or due to failure. Ephemeral znodes can be used to add to a distributed queue: if the creating client fails, the object is removed. They can also help manage locks: if a client acquires a lock and fails, the lock is released when the ephemeral object disappears [50]. Implementing ephemeral znodes in ZK-MASON is straightforward. Shards keep a timer that is reset with client heartbeats. After timing out, the shard sends a delete to a proxy to delete the node. The delete is ordered to prevent divergent shards.
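A sketch of the shard-side timer (the TTL value and all names are ours; onHeartbeat must also be called when the session starts):

```cpp
#include <chrono>
#include <string>

// Sketch: the shard tracks a deadline per ephemeral znode that client
// heartbeats push forward; once the deadline lapses, the shard routes a
// delete through a proxy so the removal is sequenced like any other update.
struct EphemeralTimer {
    std::chrono::steady_clock::time_point deadline;
    std::chrono::seconds ttl{10};  // assumed session timeout

    void onHeartbeat() {           // also called when the session starts
        deadline = std::chrono::steady_clock::now() + ttl;
    }
    void tick(const std::string& path) {
        if (std::chrono::steady_clock::now() >= deadline)
            sendDeleteViaProxy(path);  // hypothetical: issue ordered delete
    }
    void sendDeleteViaProxy(const std::string& /*path*/) { /* ... */ }
};
```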
The contiguous multi-sequence abstraction simplifies ZK-MASON Implementing this service over a noncontiguous multi-sequence would require consensus to deal with holes. Because a missing sequence number could belong to a multi-shard operation, e.g., create, the hole-filling consensus would need to be service-wide to avoid partially executing the operation on some shards but not others. To handle cases where aborting a partially-executed operation is impossible, each full operation would need to be persisted by the service so it could be recovered by shards that never received it (e.g., the full operation could be sent to every relevant shard).
In ZK-MASON, if a shard encounters a gap in its sequence space, it can wait for the missing operation and each shard only needs to receive the parts of the operation that will execute on that shard. The contiguous multi-sequence guarantees that the operation will be executed.
§ 7 EVALUATION
MASON provides two main innovations for building services. First, it is a general, reusable building block that offers the contiguous multi-sequence abstraction. This makes it easy to build efficient implementations of complex services (§6). But, as with any such abstraction, we expect overheads compared to specialized implementations. Second, MASON provides a scalable multi-sequence allowing previously unscalable services to now scale. This section quantifies the overhead of MASON’s general abstraction for two services (§7.2 and §7.3), shows MASON provides scalable ordering (§7.1), that its scalable ordering does indeed enable services to scale (§7.2 and §7.3), and that MASON does provide a contiguous multi-sequence despite failures (§7.4).
Implementation MASON is written in C++. All components, including clients, service shards, and MASON components, communicate with eRPC, a reliable RPC framework [25]. eRPC uses unreliable datagrams in Intel DPDK (v. 17.11.5) as its transport layer [12]. We replicate proxies with Raft [44], and periodically durably snapshot their state for Raft log compaction. MASON will be open-sourced by publication time.
Evaluation setup We evaluate MASON on the Emulab testbed [54] with Dell R430 (d430) machines [9]. We run Ubuntu 18.04.11 with Linux kernel version 4.15.0. The machines have two hyperthreaded 8-core CPUs (Intel E5-2630 "Haswell", 2.4 GHz) with 20 MB L3 cache, 64 GB RAM, and one dual-port 10 GbE PCI-Express NIC (Intel X710).
We load MASON with clients running on separate machines of the same type. Unless otherwise specified, each client machine runs 16 threads, each implementing several logical closed-loop clients that generate new operations as previous operations complete. We control load by varying the number of client machines and the number of logical closed-loop clients per thread. Latency is measured at clients for each operation. We report the median over five trials of the median latency over all clients in a trial. We present latency as median/99th percentile. Throughput is also measured at each client and aggregated over all clients in a trial. For all scalability experiments we derive the throughput by increasing load (i.e., the number of logical clients). We report the highest throughput before latency spikes from overload. We show the median throughput over five trials. Trials are 68 seconds each; the first and last 4 seconds of measurements are discarded.
Each proxy is replicated on 3 machines. Experiments in Sections 7.1 and 7.4 use a stub service with one operation: clients indicate relevant sequence spaces and the service returns the assigned multi-sequence number to the client.
§ 7.1 MASON SCALES ORDERING THROUGHPUT
MASON uses two mechanisms to scale ordering throughput: adding more proxies and increasing batching to the sequencer. The first mechanism, adding more proxies, is evaluated in Figure 2. Ordering throughput is the number of client operations per second that receive a multi-sequence number and return to clients. To stress ordering throughput, the proxies do not execute operations on behalf of clients in this experiment.
Figure 2: MASON ordering throughput with increasing proxy count.
Figure 2 shows that, as the number of proxies doubles, the ordering throughput also roughly doubles for each sequence space count. As the number of sequence spaces in the system increases, the per-proxy-machine throughput decreases, so overall ordering throughput with the same number of proxies is lower. Latency at these throughputs ranges from ~243/~380 µs for a single sequence space to ~358/~693 µs for 8 sequence spaces. This experiment demonstrates that adding more proxies enables MASON to scale ordering throughput.
We are unable to test our second mechanism, increasing batching to the sequencer, because we cannot saturate the sequencer with the machines available on Emulab. With 48 proxy machines, the sequencer processes ~3.2 Mops/s, which is far from the ~14.5 Mops/s possible at line rate. As MASON scales linearly with increasing proxies, we expect to be able to achieve over 142 Mops/s before the sequencer becomes the bottleneck. At that point, we expect to be able to continue doubling the ordering throughput of MASON by doubling the number of proxies and doubling the batch sizes. Average batch size for 48 proxies with one sequence space is ~8 operations.
§ 7.2 MAKING CORFU SCALABLE
MASON provides scalable ordering that, when coupled with a scalable execution protocol, enables services to scale. Corfu-MASON replaces CORFU's monolithic sequencer with MASON, yielding a scalable distributed shared log (§6.2).
We compare Corfu-MASON with CORFU′, our implementation of CORFU in the same environment as Corfu-MASON, using C++ and eRPC over DPDK. The sequencer in CORFU′ processes requests at ~14.2 Mops/s, nearly line rate for our message size (~14.5 Mops/s). This is a fairer baseline than using CORFU's original sequencer, whose maximum ordering throughput is ~570 Kops/s [3, 4].
Figure 3a evaluates Corfu-MASON's scalability. We run a workload consisting entirely of 64 B appends and increase the number of Corfu shards. We use 6 (replicated) proxies for every Corfu shard, keeping the ratio of proxies to Corfu shards constant. CORFU′ roughly doubles throughput from one to two Corfu shards before the sequencer saturates and latency increases; the maximum observed throughput of CORFU′ is ~14.1 Mops/s with latency of ~70/~90 µs. MASON allows ordering in Corfu-MASON to scale, enabling service throughput to increase linearly: Corfu-MASON scales from ~7.3 Mops/s with one Corfu shard to ~29.1 Mops/s with four Corfu shards, an increase of ~3.98x. Append latency at four Corfu shards is ~200/~297 µs. The increase in latency is from extra round trips (clients sending requests to proxy leaders, which leaders replicate) and proxies waiting for 20 µs to batch requests.
Figure 3b shows the scalability of reads. Clients execute reads on random log positions in CORFU′ by reading a shard's tail replica. Reads in Corfu-MASON are executed by proxy leaders, which read the tail replica. Reads are not sequenced in either service, so reads scale the same in both services. Latency for Corfu-MASON is ~97/~147 µs, which is ~65 µs higher than the ~32/~62 µs of CORFU′, from the extra round trip through the proxy leader.
§ 7.3 MAKING ZOOKEEPER SCALABLE
ZK-MASON is a ZooKeeper-like coordination service [20] (see Sec. 6.3). ZK-MASON uses a scalable execution protocol with MASON's scalable ordering to scale the entire service.
To compare ZK-MASON and ZooKeeper, we implemented RSMKeeper, a prototype of ZooKeeper over Raft [44]. RSMKeeper has the same operations as ZK-MASON. Both are implemented in C++ with eRPC over DPDK [12, 25]; RSMKeeper uses a single thread. We note that RSMKeeper has much higher throughput than the original ZooKeeper implementation, providing a fairer baseline.
We configured RSMKeeper and ZK-MASON to maximize service throughput while keeping latency low. RSMKeeper is loaded by one client machine running 8 threads. ZK-MASON clients use 16 threads. ZK-MASON uses 2 proxies per shard and 1 client machine per proxy. Each proxy uses 8 threads and each ZK-MASON shard uses 1 thread. This is the minimal setup for a single shard that stresses the shard's throughput. We add more ZK-MASON shards, keeping the ratio of clients and proxies to shards constant. Our ZK-MASON experiments show the scalability of the contiguous multi-sequence abstraction when scaling out the number of shards.
Figure 4a shows the throughput of setData operations. RSMKeeper's (and ZooKeeper's) design uses a single replicated state machine to ensure consistency and thus cannot run with more than one shard; its maximum throughput is $\sim 150$ Kops/s. With one shard, ZK-MASON has $8.6\times$ the service throughput of RSMKeeper, at $\sim 1.29$ Mops/s, while providing latency in a similar range, as shown in Figure 4c. ZK-MASON's higher single-shard throughput comes from the proxy layer scaling, with two (replicated) proxies handling client requests for one ZK-MASON shard. Furthermore, ZK-MASON shards do less work per setData operation than RSMKeeper. For each operation, RSMKeeper handles operation execution, one round of client-to-leader communication, two rounds of leader-to-follower communication, and snapshotting Raft state and log compaction to disk. On the other hand, MASON frees the ZK-MASON shard from handling tasks related to ordering and consensus. The shard only handles execution and one round of proxy-to-shard communication. With more resources devoted to execution, one ZK-MASON shard has a higher maximum throughput than RSMKeeper. More importantly, ZK-MASON is able to scale throughput by increasing the number of shards and proxies: with eight shards its throughput scales to $\sim 7$ Mops/s.
| Operation | Med. | 99% |
| --- | --- | --- |
| CORFU' append | 70 | 90 |
| Corfu-MASON append | 200 | 297 |
| CORFU' read | 32 | 62 |
| Corfu-MASON read | 97 | 147 |

(c) Latency ($\mu$s). Append latency is for the service at peak throughput.

Figure 3: CORFU' and Corfu-MASON comparison. Corfu-MASON append throughput scales linearly with more shards while CORFU' saturates at 2 shards. Corfu-MASON has higher latency in exchange for contiguity and linear scalability.
Figure 4: RSMKeeper and ZK-MASON comparison. ZK-MASON achieves higher throughput than RSMKeeper with a single shard at comparable latency. ZK-MASON throughput scales linearly at the cost of a modest increase in latency.
Figure 4b shows the throughput of getData operations. We configured RSMKeeper to replicate getData operations to provide the same consistency as ZooKeeper's sync-getData construction and ZK-MASON's getData operation. RSMKeeper's maximum throughput is $\sim 150$ Kops/s with latency $\sim 209/\sim 352\,\mu\mathrm{s}$. ZK-MASON's getData throughput scales from $\sim 1.1$ Mops/s with one shard to $\sim 6$ Mops/s with eight shards. Latencies in those runs range from $\sim 224/\sim 306\,\mu\mathrm{s}$ (one shard) to $\sim 327/\sim 667\,\mu\mathrm{s}$ (eight shards). getData operations have slightly higher latency than setData operations because proxies need to wait for a response from a ZK-MASON quorum before returning to the client, while setData can be executed on ZK-MASON shards asynchronously.
§ 7.4 MASON PROVIDES A CONTIGUOUS SEQUENCE
This experiment validates that MASON provides a contiguous sequence despite component failures. We run MASON with 16 proxies. Each proxy machine hosts either 8 leaders or 8 followers belonging to 8 different proxies, for a total of 6 proxy machines (2 leader machines and 4 follower machines). Load is generated by 4 client machines. Clients request one sequence number from each of 4 sequence spaces. We inject proxy and sequencer failures; network drops occur naturally.
Figure 5: Highest contiguous multi-sequence number received across all clients at time $t$. We induce proxy leader failure at $10\,\mathrm{s}$ and sequencer failure at $20\,\mathrm{s}$.
Figure 5 shows the highest contiguous sequence number successfully received by a client over time for each of 4 sequence spaces. That is, if Figure 5 indicates that at time $x$ the highest contiguous sequence number from a sequence space is $y$ , then each sequence number up to and including $y$ in that space was received by some client. We ran the experiment with 4 sequence spaces and plotted the highest contiguous sequence number for each sequence space. Since clients request one number from every sequence space, they advance at the same rate and thus all four lines overlap.
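The plotted metric can be made concrete with a toy computation (our illustration, not MASON code): a single missing sequence number freezes the line, and filling the gap makes it jump.

```python
def highest_contiguous(received: set[int], start: int = 0) -> int:
    """Largest n such that every sequence number in [start, n] was received
    by some client; returns start - 1 if even `start` is missing."""
    n = start - 1
    while n + 1 in received:
        n += 1
    return n

# A gap at 3 keeps the line flat at 2; filling it makes the line spike to 5.
assert highest_contiguous({0, 1, 2, 4, 5}) == 2
assert highest_contiguous({0, 1, 2, 3, 4, 5}) == 5
```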
We first kill a proxy machine hosting 8 proxy leaders $10\,\mathrm{s}$ into the experiment. The 8 recovering proxies stop processing client operations and may have uncompleted operations. The flat region in the plot indicates where the sequence increase is blocked by uncompleted operations. Once failover is complete, the new leaders respond to pending client operations. The plot spikes as gaps in the sequence are filled in and operations serviced by the non-failing proxies are accounted for. Proxy failure detection and failover take $3.06\,\mathrm{s}$, including $1\,\mathrm{s}$ to $2\,\mathrm{s}$ for the failure detection timeout, set randomly by Raft.
We kill the sequencer $20\,\mathrm{s}$ into the experiment. A proxy times out $1\,\mathrm{s}$ later and begins the recovery protocol. Failure detection and recovery take $2.38\,\mathrm{s}$ (the plot's second flat region), and then the contiguous multi-sequence continues to grow.
§ 8 RELATED WORK
This section explains MASON's relationship to the five categories of related work it builds upon. At a high level, the primary distinction of MASON is that it provides strict serializability, unlike atomic multicast; it is scalable, unlike state machine replication and fast ordering systems; it provides multiple sequence spaces, unlike shared logs; and its abstraction enables more efficient, specialized service implementations than distributed databases.
Atomic multicast Atomic multicast guarantees that messages are delivered reliably and in a total order to one or more groups of processes [8, 15, 16, 18]. Unlike the order given by a contiguous multi-sequence, the total order given by atomic multicast is not strictly serializable. Atomic multicast is thus either used directly in systems that provide weaker consistency guarantees [37] or augmented to provide stronger consistency [6, 32].
State machine replication There is a large body of work on state machine replication (SMR) implemented with consensus [1, 10, 13, 17, 21, 23, 26, 28–31, 38, 39, 42–44, 48], which provides two properties MASON aims for: a contiguous sequence via SMR's log and fault tolerance via consensus. These protocols have a fundamental throughput ceiling: the rate at which a single machine can execute commands in order.
Distributed shared logs CORFU uses a monolithic sequencer to find the tail of a distributed shared log [3]. It cannot scale beyond the throughput of the sequencer. MASON can provide a contiguous sequence to a CORFU service while scaling beyond the throughput of a monolithic sequencer, but MASON requires more resources and has higher latency.
Delos [5] unifies separate shared-log or storage instances into a single virtualized shared log. It inherits the scalability limitations of its underlying systems. Scalog [11] is a distributed shared log that uses a replicated ordering mechanism to reliably totally order records in a log. Scalog increases the write throughput ceiling compared to CORFU by two orders of magnitude. It increases ordering throughput using a technique similar to MASON's: each storage server periodically orders multiple records at once. Scalog, unlike CORFU but like MASON, guarantees that services always see a contiguous sequence of operations. ChronoLog [27] uses physical time to order records by accounting for skew among distributed components. It reports an order of magnitude higher throughput than CORFU. Delos, Scalog, and ChronoLog cannot easily be extended to multi-sequencing: Scalog orders operations using a summary of operations that arrived at individual shards; ChronoLog and Delos lack mechanisms to atomically append to multiple logs. Thus, they cannot easily be modified to support strictly serializable cross-shard operations.
Chariots [41] scales by delegating the ordering of disjoint ranges of a shared log to independent servers, providing only causal consistency [36]. FuzzyLog partially orders records in exchange for better performance [37]. MASON provides the stronger guarantee of strict serializability.
Fast ordering systems State-of-the-art networks or network appliances can support high-throughput, low-latency sequencing [24, 34, 35]. Unlike MASON, these sequencers cannot scale, do not provide a contiguous sequence, and are not fault-tolerant. However, such sequencers can provide sequencing with much lower latency than MASON.
Kronos provides high-throughput happens-before ordering; services totally order operations [14]. Mostly-ordered multicast uses datacenter network properties to provide consistent multicasting except during network failures or packet loss [48]. Reliable 1Pipe, 1Pipe's strongest abstraction, provides ordered communication to receiver groups where messages eventually arrive absent failures and partitions [33]. Services detect and handle lost messages with consensus, much like services using noncontiguous multi-sequences. In contrast to these systems, MASON provides the stronger abstraction of a strictly serializable, contiguous sequence.
Distributed databases FoundationDB uses a single sequence space with batching to scalably implement commit timestamps [56], but does not provide contiguity or multi-sequencing. Eris [35], Calvin [51], vCorfu [53], Tango [4], and other distributed databases [2, 40, 47, 49, 55, 56] provide a higher-level abstraction than MASON. It is harder for services to build efficient, specialized implementations over the distributed-database abstraction than over the multi-sequence abstraction. For instance, ephemeral znodes (§6.3) do not fit the traditional distributed database model; a service developer would implement a new replicated component to manage client connections and explicitly delete the znode at connection termination. In contrast, implementing ephemeral znodes in ZK-MASON was straightforward.
MASON's contiguous multi-sequence abstraction is an excellent candidate for implementing distributed databases. Its contiguity would eliminate significant complexity in ported implementations of Eris and vCorfu. Similarly, its contiguity would greatly simplify developing new multi-sequence-based distributed databases. Its scalable multi-sequence would enable Eris, vCorfu, and future databases to scale far higher than the throughput ceiling of monolithic sequencers. This is an important avenue for future work.
§ 9 CONCLUSION
This paper proposed the contiguous multi-sequence abstraction for building consistent services. It is a stronger abstraction than the noncontiguous multi-sequence abstraction in use today, making it easier to build services with multi-sequences. We also presented MASON, the first system to expose the contiguous multi-sequence abstraction and the first to provide a scalable multi-sequence. We demonstrated MASON’s usefulness as a building block for scalable, consistent services by using it to enable scalability in two services that were previously fundamentally unscalable.
papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/T-eV4T4h_dc/Initial_manuscript_md/Initial_manuscript.md
# FLAIR: STORING UNBOUNDED DATA STREAMS ON MOBILE DEVICES TO UNLOCK USER PRIVACY AT THE EDGE
Anonymous authors
Paper under double-blind review
## Abstract
Mobile devices are producing larger and larger data streams, such as location streams, which are consumed by machine learning pipelines to deliver location-based services to end users. Such data streams are generally uploaded and centralized to be processed by third parties, potentially exposing sensitive personal information. In this context, existing protection mechanisms, such as Location Privacy Protection Mechanisms (LPPMs), have been investigated. Alas, none of them have actually been implemented, nor deployed in real life, on mobile devices to enforce user privacy at the edge. We believe that the effective deployment of LPPMs on mobile devices faces a major challenge: the storage of unbounded data streams. This paper introduces FLAIR, a storage system based on a new piece-wise linear approximation technique that increases the storage capacity of mobile devices by relying on data modeling. Beyond the FLAIR storage layer, we also introduce Divide & Stay, a new privacy-preserving technique to execute Points of Interest (POIs) inference. Finally, we deploy both of them on Android and iOS to demonstrate that a real deployment of LPPMs is now possible.
## 1 Introduction
With the advent of smartphones and, more generally, the Internet of Things (IoT), connected devices are mainstream in our societies and widely deployed at the edge. Such constrained devices are not only consuming data and services, such as streaming, geolocalization, or restaurant recommendations, but also producing data streams by leveraging a wide variety of embedded sensors that capture the surrounding environment of end users, including their daily routines. Online services rely heavily on this crowdsourced data to improve the user experience through machine learning. The data deluge generated by a user is potentially tremendous: according to preliminary experiments, a smartphone can generate approximately 2 pairs of GPS samples and 476 triplets of accelerometer samples per second, resulting in more than 172,800 location and 41,126,400 acceleration samples daily. These data streams tend to be uploaded from the device to third-party service providers to extract the valuable information they contain. As an example, the Points of Interest (POIs) of a user can be extracted from her GPS traces to better understand consumer behavior.

Figure 1: FLAIR compacts any location stream as a sequence of segments, obtained from a piece-wise model.
However, this continuous data stream inevitably includes sensitive personal information (SPI) that may jeopardize the privacy of end users if processed by malicious stakeholders. While machine learning algorithms are nowadays widely adopted as a convenient keystone to process large datasets and infer actionable insights, they often require grouping raw input datasets in a remote place, thus posing a privacy threat for end users sharing their data. This highlights the utility vs. privacy trade-off that is inherent to any data-sharing activity. On the one hand, without crowdsourced GPS traces, it would be hard to model traffic in real time and recommend itineraries. On the other hand, it is crucial to protect user privacy when agreeing to gather SPI.
To address this ethical challenge, privacy-preserving machine learning [39] and decentralized machine learning [16, 40] are revisiting state-of-the-art machine learning algorithms to enforce user privacy, among other properties. Furthermore, regarding location privacy, several protection mechanisms, called Location Privacy Protection Mechanisms (LPPMs), have been developed to preserve user privacy in mobility situations. Location reports are evaluated and obfuscated before being sent to a service provider, hence keeping user data privacy under control. The user no longer automatically shares her data streams with service providers, but carefully selects what she shares and makes sure the data she unveils does not contain any SPI. For example, Geo-Indistinguishability [15] generalizes differential privacy [18] to GPS traces, while PROMESSE [35] smooths the GPS traces, both temporally and geographically, to erase POIs from the input trace. LPPMs successfully preserve sensitive data, such as POIs, while maintaining the data utility for the targeted service.
Despite their effectiveness, no LPPM has ever been implemented and deployed on mobile devices: previous works have been simulated on ADB [25] at best. While extending those works to Android and iOS devices may be perceived as straightforward, it faces several challenges imposed by the scarce resources of mobile devices. In particular, LPPMs often require the user to access all her GPS traces and, ideally, the ones of additional users. The strategy consisting in storing entire raw traces does not scale and is impractical for the average user, who does not possess a high-memory device. Unfortunately, the memory constraints of modern devices prevent users from sharing their traces at the edge of the network.
This paper demonstrates that modeling data streams makes this transfer possible. In particular, we introduce Fast LineAr InteRpolation (FLAIR), a new data storage system based on a new piecewise linear approximation technique, and we use it to model and store data streams under memory constraints (see Fig. 1). Unlike existing stream or temporal databases, FLAIR does not store a fixed number of data samples but models their evolution, theoretically offering an unlimited storage capacity. We show that FLAIR can be deployed on Android and iOS smartphones to store the GPS traces of entire datasets. We then implement an LPPM working directly on mobile phones, which is made possible by the increased GPS storage capacity offered by FLAIR. However, the LPPM's privacy gains need to be evaluated in situ before data is uploaded to service providers: are POIs actually obfuscated? To this end, we also introduce a new POI attack algorithm, dubbed Divide & Stay (D&S), which can compute POIs on large traces in tens of seconds directly on mobile phones. We report that our combined approaches enable storing tremendous amounts of geolocation data on mobiles, thus allowing the use of LPPMs to ensure end-user privacy while using geolocation services.
In the following, we first discuss the related work (Sec. 2), before diving into the details of FLAIR and how it can be applied to boost location privacy (Sec. 3). We then present our experimental setup (Sec. 4) and the results we obtained (Sec. 5); we discuss the potential shortcomings of our approach (Sec. 6) before concluding (Sec. 7).
## 2 Related Works
### 2.1 Location Privacy Attacks
Raw user mobility traces can be exploited to model users' behavior and reveal their sensitive personal information (SPI). In particular, Points of Interest (POIs) are widely used as a way to extract SPI from mobility traces. In a nutshell, a POI is a place where the user goes often and stays for a significant amount of time: it can reveal her home, workplace, or leisure habits. From POIs, more subtle information can also be inferred: sexual orientation from attendance at LGBT+ places, for instance. The set of POIs can also be used as a way to re-identify a user in a dataset of mobility traces [21, 34]. POIs can be extracted using spatiotemporal clustering algorithms [23, 42]. Alternatively, an attacker may also re-identify a user directly from raw traces, without computing any POI [29].
### 2.2 Mobility Dataset Protection Mechanisms
When data samples are gathered in a remote server, one can expect the latter to protect the dataset as a whole. In particular, $k$-anonymity [36] is the property of a dataset guaranteeing that, whenever some data leaks, the owner of each data trace is indistinguishable from at least $k-1$ other users contributing to the dataset. Similarly, $l$-diversity [27] extends $k$-anonymity by ensuring that the $l$ users are diverse enough not to infer SPI about the data owner. Finally, differential privacy [18] aims at ensuring that the inclusion of a single element in a dataset does not significantly alter an aggregated query on the whole dataset. However, all these techniques require personal samples to be grouped to enforce user privacy.
### 2.3 Location Privacy Protection Mechanisms
Rather than protecting the dataset as a whole, each data sample can also be protected individually. In the case of location data, several protection mechanisms-called Location Privacy Protection Mechanisms (LPPMs)—have been developed. They may be deployed in a remote server where all data samples are gathered or directly on the device before any data exchange.
Geo-Indistinguishability (GEOI) [15] implements differential privacy [18] at the trace granularity. In particular, GEOI adjusts mobility traces with two-dimensional Laplacian noise, making POIs more difficult to infer. Heat Map Confusion (HMC) [28] aims at preventing re-identification attacks by altering all the traces altogether. The raw traces are transformed into heat maps, which are altered to look like another heat map in the dataset, and then transformed back to a GPS trace.
PROMESSE [35] smooths the mobility traces, both temporally and geographically, to erase POIs from the trace. PROMESSE ensures that, between each pair of location samples, there is at least a given time and distance interval. In the resulting mobility trace, the user appears to move at a constant speed. While PROMESSE blurs the notion of time from the trace (i.e., the user never appears to stay at the same place), it does not alter its spatial characteristics. Thus, while POIs may still be inferred if the user repeatedly goes to the same places, it will be harder to distinguish such POIs from more random crossing points.
It is also possible to combine several LPPMs to improve the privacy of users [25, 30]. Because of potential remote leaks, the user should anonymize her trace locally before sharing it, which is how EDEN [25] operates. However, EDEN has not been deployed: it has only been simulated on ADB. Even more so: despite their validity and to the best of our knowledge, no LPPM has been implemented on mobile devices. This is partly due to the tight constraints of mobile devices, notably memory-wise: HMC [28], for instance, requires locally loading a large set of GPS traces to operate.
### 2.4 Temporal Databases & Mobile Devices
To overcome the memory constraints of mobile devices, one needs efficient embedded temporal databases. To take the example of Android: only a few databases are available, such as SQLITE and its derivative DRIFT [1], the cloud-based Firebase [3], the NoSQL HIVE, and OBJECTBOX [11]. The situation is similar on iOS.
Relational databases Relational databases (e.g., SQL) are typically designed for OnLine Transactional Processing (OLTP) and OnLine Analytical Processing (OLAP) workloads, which differ widely from time-series workloads. In time-series workloads, reads are mostly contiguous (as opposed to the random-read tendency of OLTP), and writes are most often inserts (not updates) that typically target the most recent time ranges. OLAP is designed to store big-data workloads to extract analytical statistics, without emphasizing read or write performance. Finally, in temporal workloads, reads and writes are unlikely to occur in the same transaction [37].
Despite these profound differences, several relational databases offer support for temporal data with industry-ready performance. As an example, TimescaleDB [14] is a middleware that exposes temporal functionalities atop a relational PostgreSQL foundation.
InfluxDB InfluxDB [8] is one of the most widely used temporal databases. Implemented in Go, this high-performance time-series engine is designed for very fast writes to collect metrics and events from IoT sensors. Unfortunately, its retention policy prevents the storage from scaling over time: the oldest samples are dumped to make room for new ones.
To the best of our knowledge, however, none of the existing solutions prioritize data compression to the extent that they would prune raw data samples in favor of modeled approximations.
Modeling data streams While discrete, the streams sampled by sensors represent inherently continuous signals. Data modeling not only allows important gains in memory consumption, but also flattens sensor noise and enables extrapolation between measurements. In particular, Piecewise Linear Approximation (PLA) is used to model the data as successive linear polynomials. An intuitive way to do linear approximation is to apply bottom-up segmentation: each pair of consecutive points is connected by an interpolation, and the least significant contiguous interpolations are merged, as long as the resulting interpolations introduce no error above a given threshold. The bottom-up approach has low complexity but usually requires an offline approach to consider all the points at once. The Sliding Window And Bottom-up (SWAB) algorithm [24], however, is an online approach that uses a sliding window to buffer the latest samples, on which a bottom-up approach is applied. emSWAB [17] improves the sliding window by adding several samples at a time instead of one. Instead of interpolation, linear regression can also be used to model the samples reported by IoT sensors [22]. For example, GREYCAT [31] adopts polynomial regressions with higher degrees to further compress the data. Unfortunately, none of those works have been implemented on mobile devices to date.
Closer to our work, FSW [26] and the ShrinkingCone algorithm [20] attempt to maximize the length of a segment while satisfying a given error threshold, using the same property used in FLAIR. FSW is not a streaming algorithm, as it considers the dataset as a whole and does not support insertion. The ShrinkingCone algorithm is a streaming greedy algorithm designed to approximate an index mapping keys to positions: it only considers monotonically increasing functions and can produce disjoint segments. FLAIR models non-monotonic functions in a streaming fashion, while producing joint segments.
## 3 Enabling User Privacy at the Edge
### 3.1 In-situ Data Management
For privacy's sake, we advocate for in-situ data management strategies, i.e., SPI should be anonymized within the mobile device before any data exchange. This avoids relying on a trusted third party that first gathers multiple users' raw data to anonymize it. Such a third party may accidentally or intentionally leak users' data, making the adoption of such protection mechanisms ineffective.
In the following, we will focus on mobility traces. A mobility trace is an ordered sequence $T$ of pairs(t, g)where $t$ is a timestamp and $g$ is a geolocation sample, a latitude-longitude pair for example. The trace is ordered in chronological order and we assume that reported timestamps are unique.
|
| 72 |
+
|
| 73 |
+
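For illustration, one possible in-memory layout of such a trace (a sketch of ours; the paper's library is written in Dart, see Sec. 4.5):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sample:
    t: float    # unique timestamp, in seconds
    lat: float  # latitude, in degrees
    lon: float  # longitude, in degrees

# A mobility trace T: samples kept in chronological order (example values).
trace: list[Sample] = [
    Sample(0.0, 45.7640, 4.8357),
    Sample(0.5, 45.7641, 4.8359),
]
```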
We believe that keeping the raw data where it is created-i.e., on the mobile devices-increases user privacy. However, sharing data is required to enable location-based services, such as traffic modeling. The user should share their mobility traces after they have been protected using an LPPM. The first challenge is to find which LPPM to use and which related parameters are optimal. To tackle this issue, a public dataset can be used to estimate the impact of an LPPM and to pick the best option. EDEN [25] proposes a more advanced solution: federated learning is used among the participants to learn a model which can predict the best configuration without sharing any mobility trace. Nonetheless, both approaches require storing an important volume of data to successfully protect user privacy.
The strong resource constraints of mobile devices prevent the previous solutions from working in practice. In particular, mobile ecosystems lack the system components needed to deploy efficient local storage solutions. Not only is there no advanced database readily available on mobile operating systems, but no native data modeling framework is provided either. For example, EDEN was implemented using the PYTORCH library [12], which is not available on smartphones${}^{1}$: the proposal was only simulated on a server. It is, therefore, crucial to deliver tools enabling the deployment of state-of-the-art techniques on mobile devices to support privacy-preserving strategies at the edge of a network.

Figure 2: FLAIR considers the sample $s_0 = (x_0, y_0)$ of the current model as the origin. In addition to the current gradient $A_0$, the minimum and maximum acceptable gradients, $A_{\min}$ and $A_{\max}$, are kept. $A_{\min}$ and $A_{\max}$ are defined such that the error reported by the model is lower than or equal to $\varepsilon$. To check if a new sample $s_t = (x_t, y_t)$ fits the model, FLAIR computes its gradient $A_t$ and compares it to $A_{\min}$ and $A_{\max}$.
### 3.2 Unleashing Your Device Storage with FLAIR
To overcome the memory constraint of mobile devices, efficient temporal databases must be ported onto mobile environments. In particular, we advocate the use of data modeling, such as PLA [22, 24] or GREYCAT [31], to increase the storage capacity of constrained devices. We propose Fast LineAr InteRpolation (FLAIR), a storage system based on a fast PLA to store approximate models of any data stream on any mobile device, instead of storing all the raw data samples as state-of-the-art temporal databases do. For simplicity, we refer to both the storage system and the associated modeling technique as FLAIR.
FLAIR models one-dimensional samples as piece-wise linear interpolations that enforce the following invariant: all samples modeled by an interpolation must maintain an error below the configuration parameter $\varepsilon$. Data samples are inserted incrementally: the current model is adjusted to fit new samples until it cannot satisfy the invariant. In that case, the current model is persisted in memory $\mathcal{M}$, and a new interpolation begins from the two last inserted points. Each model in $\mathcal{M}$ is represented by a pair $(s_i, A_i)$: $s_i = (x_i, y_i)$ is the interpolation's initial sample, while $A_i$ is the line's gradient. Each model thus represents the function $y = A_i \times (x - x_i) + y_i$. While working on the current model, its initial sample is set as the origin $s_0 = (x_0, y_0)$; the current interpolation is thus a polynomial defined as $y = A_0 \times x$. The current gradient $A_0$ is the slope between $s_0$ and the last interpolated sample $s_t$. Fig. 2 depicts a FLAIR model with the samples $s_0$ and $s_t$. It shows the interpolation parameters $(s_0, A_0)$ and two additional gradients, $A_{\min}$ and $A_{\max}$. A naive solution to maintain the invariant while updating the current model would be to memorize every sample between $s_0$ and the last sample $s_t$, to check their error against the model. Instead, FLAIR only maintains $A_{\min}$ and $A_{\max}$, which are updated at each sample insertion.
---
${}^{1}$ PyTorch allows importing and using trained models on Android and iOS, but disallows training them locally.
---

Figure 3: When a new sample fits within $[A_{\min}; A_{\max}]$, it is added to the current model by updating $A_0$ and the interval, to ensure that all previous samples fit the updated model.
Algorithm 1 FLAIR insertion using parameter $\varepsilon \in \mathbb{R}^{+*}$

---

Before: $\mathcal{M}$; $x_0, x_{t-1} \in \mathbb{R}^+$; $y_0, y_{t-1}, A_0, A_{\min}, A_{\max} \in \mathbb{R}$

1: function INSERT($x_t \in \mathbb{R}^+$, $y_t \in \mathbb{R}$)
2:   $(x_t^{\Delta}, y_t^{\Delta}) \leftarrow (x_t - x_0, y_t - y_0)$ $\vartriangleright$ Compute $A_t$
3:   $A_t \leftarrow y_t^{\Delta} / x_t^{\Delta}$
4:   if $A_{\min} \leq A_t \leq A_{\max}$ then
5:     $A_0 \leftarrow A_t$ $\vartriangleright$ Update model
6:     $A_{\min} \leftarrow \max(A_{\min}, (y_t^{\Delta} - \varepsilon)/x_t^{\Delta})$
7:     $A_{\max} \leftarrow \min(A_{\max}, (y_t^{\Delta} + \varepsilon)/x_t^{\Delta})$
8:   else
9:     $\mathcal{M}$.insert($x_0, y_0, A_0$) $\vartriangleright$ Persist model
10:    $(x_0, y_0) \leftarrow (x_{t-1}, y_{t-1})$ $\vartriangleright$ Build new model
11:    $(x_t^{\Delta}, y_t^{\Delta}) \leftarrow (x_t - x_0, y_t - y_0)$
12:    $A_0 \leftarrow y_t^{\Delta} / x_t^{\Delta}$
13:    $A_{\min} \leftarrow (y_t^{\Delta} - \varepsilon)/x_t^{\Delta}$
14:    $A_{\max} \leftarrow (y_t^{\Delta} + \varepsilon)/x_t^{\Delta}$
15:  end if
16:  $(x_{t-1}, y_{t-1}) \leftarrow (x_t, y_t)$ $\vartriangleright$ Update penultimate
17: end function

---
Algorithm 1 details the insertion of a new sample $s_t$. First, FLAIR computes the gradient $A_t$ of the line $(s_0, s_t)$ (lines 2-3). If $A_t$ is inside $[A_{\min}; A_{\max}]$, $s_t$ is added to the current model by updating $A_0$, $A_{\min}$, and $A_{\max}$ (lines 5-7), as displayed in Figure 3. Graphically, we see that the resulting 'allowed cone' is the intersection of the model's previous one and that of $s_t$'s allowed error. By recurrence, the cone materialized by $s_0$ and $[A_{\min}; A_{\max}]$ is the intersection of the error margins of every point modeled by the current interpolation, illustrating how FLAIR respects its invariant. If $A_t$ falls outside the interval $[A_{\min}; A_{\max}]$, $s_t$ breaks the invariant: the current model is persisted in memory $\mathcal{M}$ (l. 9), and a new model $(s_0, A_0)$ is computed from $s_{t-1}$, along with new limits $A_{\min}$ and $A_{\max}$ (l. 10-14). This case is displayed in Figure 4. In either case, the penultimate sample $s_{t-1}$ is updated on line 16.
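For concreteness, a minimal Python transcription of Algorithm 1 is sketched below; the paper's implementation is a Dart/Flutter library, so the class and attribute names here are ours, and the bootstrap of the very first model (which Algorithm 1 assumes already exists) is an addition.

```python
class Flair:
    """Streaming PLA sketch: one open model (x0, y0, A0) plus its cone
    [a_min, a_max]; persisted models are (x_i, y_i, A_i) triples."""

    def __init__(self, eps: float):
        self.eps = eps
        self.models: list[tuple[float, float, float]] = []
        self.x0 = self.y0 = None   # origin of the current model
        self.prev = None           # penultimate sample (x_{t-1}, y_{t-1})
        self.a0 = self.a_min = self.a_max = 0.0

    def _start_model(self, x0: float, y0: float, x: float, y: float) -> None:
        # Open a model at (x0, y0) whose cone admits (x, y) within eps.
        dx, dy = x - x0, y - y0    # timestamps are unique, so dx != 0
        self.x0, self.y0 = x0, y0
        self.a0 = dy / dx
        self.a_min = (dy - self.eps) / dx
        self.a_max = (dy + self.eps) / dx

    def insert(self, x: float, y: float) -> None:
        if self.prev is None:      # very first sample: becomes the origin
            self.prev = (x, y)
            return
        if self.x0 is None:        # second sample: open the first model
            self._start_model(*self.prev, x, y)
        else:
            dx, dy = x - self.x0, y - self.y0
            a_t = dy / dx
            if self.a_min <= a_t <= self.a_max:   # fits: shrink the cone
                self.a0 = a_t
                self.a_min = max(self.a_min, (dy - self.eps) / dx)
                self.a_max = min(self.a_max, (dy + self.eps) / dx)
            else:                                 # invariant broken: persist
                self.models.append((self.x0, self.y0, self.a0))
                self._start_model(*self.prev, x, y)
        self.prev = (x, y)         # update the penultimate sample
```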
In FLAIR, reading a value $x$ is achieved by estimating its image using the appropriate model, as shown in Algorithm 2.

Figure 4: When a new sample reports an error $> \varepsilon$, a new model is created using the penultimate sample $s_{t-1}$ as $s_0$.
Lines 2-3 show the computation of the image when $x$ belongs to the current model. When it does not, FLAIR retrieves the model in charge of approximating $x$ (l. 5). In practice, this is done through a dichotomy search, as $\mathcal{M}$ stores models in insertion order. Using that model, the interpolation of $x$ is computed on line 6.
Algorithm 2 FLAIR approximate read

---

Before: Current model $(x_0, y_0, A_0)$; memory $\mathcal{M}$ containing previous models

1: function READ($x \in \mathbb{R}^+$)
2:   if $x_0 \leq x$ then
3:     return $A_0 \times (x - x_0) + y_0$
4:   end if
5:   Select $i$ s.t. $(x_i, y_i, A_i) \in \mathcal{M} \land x_i \leq x < x_{i+1}$
6:   return $A_i \times (x - x_i) + y_i$
7: end function

---
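The read path admits an equally small transcription, paired with the `Flair` sketch above (again ours, not the library's code):

```python
import bisect

def flair_read(models: list[tuple[float, float, float]],
               current: tuple[float, float, float], x: float) -> float:
    """Approximate x's image: use the current model if x is recent enough,
    otherwise the persisted model owning x, found by dichotomy (the x_i are
    sorted by construction). Assumes x is no older than the first model.
    Rebuilding the key list on each call is O(n); a real implementation
    would keep the keys alongside the models."""
    x0, y0, a0 = current
    if x >= x0:
        return a0 * (x - x0) + y0
    i = bisect.bisect_right([m[0] for m in models], x) - 1
    xi, yi, ai = models[i]
    return ai * (x - xi) + yi
```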
The value of $\varepsilon$ has an important impact on the performance of FLAIR. Figure 5 illustrates the longitude of Figure 1b with two extreme values for $\varepsilon$. If $\varepsilon$ is too small (Fig. 5a), none of the inserted samples fits the current model at that time, initiating a new model each time. In that case, there will be one model per sample, imposing an important memory overhead: the resulting model overfits the data. On the other hand, if $\varepsilon$ is too large (Fig. 5b), then all the inserted samples fit, and a single model is kept. While this is the best case memory-wise, the resulting model simply connects the first and last points and underfits the data.
While FLAIR is designed for modeling one-dimensional data, it straightforwardly generalizes to multi-dimensional data by combining several instances of FLAIR. As long as the newly inserted data samples fit the existing model, the memory footprint of FLAIR remains unchanged. This potentially unlimited storage capacity makes FLAIR a key asset for mobile devices, making the storage of mobility traces possible. We claim that the use of FLAIR alleviates the memory constraints of mobile devices, making the real use of LPPMs possible and paving the way for user control of SPI.
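Concretely, for a GPS trace this amounts to pairing one instance for latitude with one for longitude, keyed by timestamp; a sketch reusing the `Flair` and `Sample` definitions above, with the $\varepsilon = 10^{-3}$ later used for location data (Sec. 4.4):

```python
# One FLAIR instance per dimension, both keyed by the timestamp.
lat_model, lon_model = Flair(eps=1e-3), Flair(eps=1e-3)
for s in trace:
    lat_model.insert(s.t, s.lat)
    lon_model.insert(s.t, s.lon)
```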

Figure 5: The performance of FLAIR is highly dependent on the value of $\varepsilon$: a value too small results in overfitting, and one too large in underfitting.
### 3.3 Evaluating Your Location Privacy with D&S
To demonstrate that FLAIR enables the deployment of existing LPPMs in the wild, we use FLAIR on a mobile device to store an entire dataset of mobility traces. Then, we perform a geolocation attack on these traces, with and without the use of an LPPM. We focus on POI attacks [34], and we use PROMESSE [35] as the LPPM to protect the mobility traces. The POI attack [34] aims at extracting the POIs from a mobility trace. The extraction is done by a two-step algorithm: first, potential candidates for POIs, dubbed stays, are extracted; then, these stays are merged to avoid duplication of similar POIs. A stay is defined as a circle with a radius lower than $D_{\max}$ where a user spent a time higher than a set time $t_{\min}$; a stay is represented by its center. The two thresholds $t_{\min}$ and $D_{\max}$ have an important impact on the type of POI extracted. Short stays will identify day-to-day patterns, such as shopping preferences, while long stays will identify travel preferences and periods, for example. The resulting stays whose centers are closer than a given distance are then merged to obtain the final POIs.
The regular way to extract the stays is to iterate over the mobility trace and compute stays as they appear [34]. Unfortunately, this approach is very expensive for dense mobility traces, i.e., traces with many data samples per unit of time. Instead of sampling, which results in a loss of information, we introduce a new algorithm to extract the stays while scaling with the density of the traces. This contribution, named Divide & Stay (D&S), is a divide-and-conquer algorithm that considers the mobility trace as a whole, rather than iteratively.
The intuition behind Divide & Stay is to avoid computing stays when it is useless. It is impossible to extract a stay from a segment where more than $D_{\max}$ meters have been traveled in less than $t_{\min}$: the mobility trace of a high-speed car trip in a straight line meets those conditions, for example. While the regular approach would consider each location until the end of the trace, D&S skips such a segment entirely; the denser the trace, the more time the regular approach would spend on it. The key idea of Divide & Stay is to recursively divide the trace until either such a segment is found and discarded, or a fixed-size segment is found on which the regular stay extraction is performed.
More precisely, in D&S, the trace is split into two parts, cut in the middle. Both segments, left and right, are considered individually. If the start and end points of a segment are close temporally but far apart spatially, no stay can possibly be extracted: no stay is searched for on this segment. Otherwise, stays are recursively computed with this top-down approach on the segment, until its size is lower than a given threshold $S$, e.g., 300. In that case, the classical way to compute stays [34] is triggered on the considered sub-trace. Algorithm 3 depicts the pseudo-code of Divide & Stay. The trace $T$ is manipulated as a whole, with the indexes $s$, $i$, and $e$ driving the recursion. $T[i].t$ refers to the timestamp of the sample $T[i]$ and $T[i].g$ refers to the associated location. The distance between two locations is computed with geo.dist, and the function getStays refers to the original function computing stays [34].
The more segments are discarded, the faster D&S is compared to the regular approach. Stays around the middle points of index $i$ could be missed, but D&S ignores them, as a POI is a cluster of several stays: it is very unlikely to miss them all. D&S can be implemented sequentially or concurrently, to leverage multi-core processors.
Algorithm 3 Divide & Stay (D&S)

---

Input: $T \in (\mathbb{R} \times \mathbb{G})^n$; $S \in \mathbb{N}^+$; $s, e \in \llbracket 0; n-1 \rrbracket$; $(t_{\min}, D_{\max}) \in \mathbb{R}^{2+}$

Output: $STAYS \in (\mathbb{R} \times \mathbb{G})^n$

$STAYS \leftarrow \varnothing$
if $e - s \leq S$ then
  return getStays($T$.subtrace($s, e$), $t_{\min}$, $D_{\max}$)
end if
$i \leftarrow \lfloor (e + s)/2 \rfloor$
$t_1 \leftarrow T[i].t - T[s].t$
$d_1 \leftarrow$ geo.dist($T[s].g$, $T[i].g$)
if $\neg (d_1 > D_{\max} \land t_1 \leq t_{\min})$ then
  $STAYS \mathrel{+}=$ D&S($T, S, s, i, t_{\min}, D_{\max}$)
end if
$t_2 \leftarrow T[e].t - T[i].t$
$d_2 \leftarrow$ geo.dist($T[i].g$, $T[e].g$)
if $\neg (d_2 > D_{\max} \land t_2 \leq t_{\min})$ then
  $STAYS \mathrel{+}=$ D&S($T, S, i, e, t_{\min}, D_{\max}$)
end if
return $STAYS$

---
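Below is a Python transcription of Algorithm 3 (ours): the trace layout, a list of `(t, (lat, lon))` pairs, and the haversine helper are assumptions, and `get_stays` stands in for the classical extractor of [34].

```python
import math

def geo_dist(g1, g2):
    # Great-circle (haversine) distance in meters between (lat, lon) pairs.
    lat1, lon1, lat2, lon2 = map(math.radians, (*g1, *g2))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(h))

def divide_and_stay(T, S, s, e, t_min, d_max, get_stays):
    if e - s <= S:                 # small enough: run the classical extraction
        return get_stays(T[s:e + 1], t_min, d_max)
    stays, i = [], (s + e) // 2
    for a, b in ((s, i), (i, e)):  # left half, then right half
        dt = T[b][0] - T[a][0]
        d = geo_dist(T[a][1], T[b][1])
        # Skip a half whose endpoints are far apart (> d_max) within a short
        # time (<= t_min): no stay can exist there.
        if not (d > d_max and dt <= t_min):
            stays += divide_and_stay(T, S, a, b, t_min, d_max, get_stays)
    return stays
```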
## 4 Experimental Setup
This section presents the indicators used to assess the value of FLAIR's contribution to mobile machine learning on time series. We then introduce the datasets used to assess FLAIR's storage capabilities. Next, we present the competing solutions that were also implemented in benchmark applications to compare against FLAIR's performance. Finally, we discuss the experimental settings.
### 4.1 Key Performance Metrics
To evaluate how our approach performs, we use two classes of key performance metrics: system metrics and privacy-related metrics. Concerning privacy-related experiments, we only measure the computation time when evaluating Divide & Stay. Those metrics highly depend on the chosen algorithms, while the use of FLAIR has no impact. Since our objective is to demonstrate that FLAIR can help to port state-of-the-art LPPM techniques on constrained devices, we do not discuss privacy-related metrics for other experiments.
Memory footprint The key objective of FLAIR is to reduce the memory footprint required to store an unbounded stream of samples. More specifically, we explore two metrics: (i) the number of 64-bit variables required by the model and (ii) the size of the model in the device memory. To do so, we compare the size of the persistent file with the size of the vanilla SQLITE database file. We consider the number of 64-bit variables as a device-agnostic estimation of the model footprint.
I/O throughput Another relevant system metric is the $\mathrm{I}/\mathrm{O}$ throughput of the temporal databases. In particular, we measure how many write and read operations can be performed per second.
We will be comparing POI-inference algorithms, and POIs returned by the same algorithm using different data backends. For that reason, we need two metrics to compare the sets of POIs returned in the different cases: a distance between POIs, and the sets' sizes.
Measuring the quality of inferred POIs is difficult, as there is no acknowledged definition of how to compute POIs. We consider as our ground truth the POIs inferred by the state-of-the-art POI attack [34], which we refer to as the 'raw' POIs. The existence of such a 'ground truth' is however debatable, as two different, but close, POIs can be merged by the algorithm into a single POI. As an example, if a user visits two different shops separated by a road, but their distance is lower than $D_{\max}$, those will be merged into a single POI located at the center of the road.
Distance between POIs As the POI definition is mainly algorithmic, we compute the distance of each obtained POI to its closest raw POI as the metric assessing the quality of new POIs. These distances are reported as a Cumulative Distribution Function (CDF). If FLAIR does not significantly alter the locations of the mobility traces it captures, the computed distances should be short.
Number of POIs In addition to the distances between POIs, we also consider the number of POIs returned as a metric. In our previous example, visiting the two shops may result in two different POIs because they have been slightly shifted by FLAIR. Beyond the numbers, we expect PROMESSE to successfully anonymize mobility traces, with the attack returning a grand total of zero POIs.
### 4.2 Mobility Datasets
Cabspotting CABSPOTTING [33] is a mobility dataset of 536 taxis in the San Francisco Bay Area. The data was collected during a month and is composed of 11 million records, for a total of 388MB.
PrivaMov PRIVAMOV [32] is a multi-sensor mobility dataset gathered during 15 months from 100 users around the city of Lyon, France. We use the full GPS dataset, which includes 156 million records, totaling 7.2GB. Compared to CABSPOTTING, PRIVAMOV is a highly dense mobility dataset.
### 4.3 Storage Competitors
SQLite SQLITE is the state-of-the-art solution to persist and query large volumes of data on Android devices. SQLITE provides a lightweight relational database management system. SQLITE is not a temporal database, but is a convenient and standard way to store samples persistently on a mobile device. Insertions are atomic, so one may batch them to avoid one memory access per insertion.
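For reference, batching amounts to wrapping a batch of inserts in one transaction; a minimal sketch with Python's built-in `sqlite3` module (schema and file names are ours, not the benchmark's):

```python
import sqlite3

db = sqlite3.connect("samples.db")
db.execute("CREATE TABLE IF NOT EXISTS samples (t REAL PRIMARY KEY, y REAL)")

batch = [(t / 2.0, 45.0 + t * 1e-6) for t in range(10_000)]
with db:  # one transaction, hence one atomic commit, for the whole batch
    db.executemany("INSERT INTO samples VALUES (?, ?)", batch)
```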
SWAB Sliding-Window And Bottom-up (SWAB) [24] is a linear interpolation model. As in FLAIR, the samples are represented by a list of linear models. In particular, reading a sample is achieved by iteratively going through the list of models until the corresponding one is found, which is then used to estimate the requested value. The bottom-up approach of SWAB starts by connecting every pair of consecutive samples and then iterates by merging the least significant pair of contiguous interpolations. This process is repeated until no more pairs can be merged without introducing an error higher than $\varepsilon$. Contrary to FLAIR, this bottom-up approach is an offline one, requiring all the samples to be known. SWAB extends the bottom-up approach by buffering samples in a sliding window. New samples are inserted in the sliding window and then modeled using a bottom-up approach: whenever the window is full, the oldest model is kept and the captured samples are removed from the buffer.
One could expect the bottom-up approach to deliver more accurate models than the greedy FLAIR, possibly resulting in slightly fewer models and faster reads. On the other hand, sample insertion is more expensive than in FLAIR, due to the execution of the bottom-up approach when storing samples. Like FLAIR, SWAB ensures that reading stored samples yields values at most $\varepsilon$ away from the exact ones.
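To make the contrast with FLAIR's greedy insertion concrete, here is a toy offline bottom-up pass, the core step SWAB applies inside its window (a quadratic-time sketch of ours; real implementations maintain merge costs incrementally):

```python
def seg_error(pts):
    # Max vertical deviation from the line joining the segment's endpoints.
    (x1, y1), (x2, y2) = pts[0], pts[-1]
    a = (y2 - y1) / (x2 - x1)
    return max(abs(a * (x - x1) + y1 - y) for x, y in pts)

def bottom_up(points, eps):
    # Start from pairwise segments, then greedily merge the cheapest
    # adjacent pair while the merged segment stays within eps.
    segs = [points[i:i + 2] for i in range(len(points) - 1)]
    while len(segs) > 1:
        costs = [seg_error(segs[i] + segs[i + 1][1:])
                 for i in range(len(segs) - 1)]
        i = min(range(len(costs)), key=costs.__getitem__)
        if costs[i] > eps:
            break
        segs[i:i + 2] = [segs[i] + segs[i + 1][1:]]
    return [(s[0], s[-1]) for s in segs]  # each segment as its two endpoints
```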
Greycat GREYCAT [31] aims at compressing the data even further by not limiting itself to linear models. GREYCAT also models the samples with a list of models, but these models are polynomials. The samples are read in exactly the same way.
When inserting a sample, GREYCAT first checks if it fits the current model. If so, nothing needs to be done. Otherwise, unlike FLAIR and SWAB, which directly initiate a new model, GREYCAT tries to increase the degree of the polynomial to make it fit the new sample. To do so, GREYCAT first regenerates $d+1$ samples in the interval covered by the current model, where $d$ is the degree of the current model. Then, a polynomial regression of degree $d+1$ is computed on those points along with the new one. If the resulting regression reports an error no higher than $\varepsilon / 2^{d+1}$, the new model is kept; otherwise, the process is repeated by incrementing the degree until either a fitting model is found or a maximum degree is reached. If the maximum degree is reached, the former model is stored and a new model is initiated. The resulting model is quite compact, and thus faster to read, but at the expense of an important insertion cost.
Unlike with FLAIR and SWAB, errors higher than $\varepsilon$ can occur for the inserted samples, as the errors are not computed on the raw samples but on generated ones, which may not coincide. Furthermore, the use of higher-degree polynomials makes the implementation subject to overflow; to alleviate this effect, the inserted values are normalized.
### 4.4 Experimental Settings
For experiments with unidimensional data, i.e. the memory and throughput benchmarks, we set $\varepsilon = 10^{-2}$. The random samples used in those experiments follow a uniform distribution in $[-1{,}000; 1{,}000]$: it is very unlikely to have two successive samples with a difference lower than $\varepsilon$. For experiments on location data, and unless stated otherwise, we set $\varepsilon = 10^{-3}$ for FLAIR, SWAB and GREYCAT. For GREYCAT, the maximum degree of the polynomials is set to 14. For POI computations, we use ${t}_{\min} = 5$ min and a diameter of ${D}_{\max} = 500$ m for both the standard approach and D&S. Similarly, we use $\delta = 500$ m for PROMESSE: it should remove all the POIs from the traces.

The experiments evaluating throughput were run four times each and we report the average, as the standard deviation was small. All the other experiments are deterministic and were performed once.
### 4.5 Implementation Details
We ran our experiments on a Fairphone 3 [2] running Android 11 and reproduced them on an iPhone 12 [9] running iOS 15.1.1. We implemented our evaluation apps using Flutter [6], Google's UI toolkit based on the Dart programming language, which can be used to develop natively compiled apps for Android, iOS, web, and desktop platforms (as long as the project's dependencies support cross-compilation to all considered platforms).

We therefore implemented a Flutter library including FLAIR, its storage competitors, the POI attack with and without our D&S extension, and PROMESSE. Our implementation is publicly available [5]. For our experiments, we implemented several mobile applications based on this library.
## 5 Experimental Results
In this section, we evaluate our implementation of FLAIR on Android and iOS to show how it enables in-situ data management on mobile devices. We first show that FLAIR paves the way for storing a tremendous quantity of samples, by comparing it to SQLITE and reporting its performance when storing samples generated by the accelerometer. Then, we deploy the PROMESSE LPPM directly on mobile devices thanks to FLAIR. Still on the mobile phones, we evaluate traces using our POI attack Divide & Stay (D&S) to assess both the precision of the GPS time series modeled by FLAIR and the privacy gain of the LPPM.
### 5.1 Memory Benchmark
As no temporal database, such as InfluxDB, is available on Android, we first compare FLAIR's performance with SQLITE, the only database natively provided on Android.

To compare the memory consumption of the two approaches, the same two operations are performed with both SQLITE and FLAIR: (i) the incremental insertion of random samples and (ii) the incremental insertion of constant samples. The on-disk memory footprint of both solutions is compared when storing timestamped values. As FLAIR models the inserted samples, random values are the worst-case scenario it can face, while inserting constant values represents the ideal one. One million samples are stored and, every 10,000 insertions, the size of the file associated with the storage solution is recorded. The experiments are done with a publicly available application [10].
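
A minimal sketch of this measurement loop, assuming a hypothetical `store` object (FLAIR or a SQLITE wrapper) exposing `insert` and `flush` and backed by the file at `path`; these names are illustrative, not our library's API.

```python
import os

def memory_benchmark(store, samples, path, report_every=10_000):
    """Insert samples one by one and record the size of the backing file
    on disk every `report_every` insertions."""
    sizes = []
    for n, (ts, value) in enumerate(samples, start=1):
        store.insert(ts, value)
        if n % report_every == 0:
            store.flush()  # make sure the data actually reached the disk
            sizes.append((n, os.path.getsize(path)))
    return sizes
```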
Figure 6 depicts the memory footprint of both approaches. On the one hand, the size of the SQLITE file grows linearly with the number of inserted samples, no matter the nature (random or constant) of the samples. On the other hand, the FLAIR size grows linearly with random values, while it remains constant for constant values. In particular, for constant values, the required size is negligible. The difference between vanilla SQLITE and FLAIR is explained by the way the model is stored: while SQLITE optimizes the way raw data is stored, FLAIR is an in-memory stream storage solution that naively stores coefficients in a text file. Using more efficient storage would shrink the difference between the two. As expected, the memory footprint of a data stream storage solution clearly outperforms that of a vanilla SQLITE database in the case of stable values. While random and constant values are extreme cases, in practice data streams exhibit a behavior between the two scenarios, which allows FLAIR to lower the memory required to store them.



Figure 6: Insertion of 1,000,000 samples, random (R) or constant (C), in both SQLITE and FLAIR.
In practice, we compare SQLITE and FLAIR when storing the entire PRIVAMOV dataset (7.2 GB). FLAIR only requires 25 MB, compared to more than 5 GB for SQLITE, despite the naive storage scheme used by FLAIR. On mobile devices, loading the raw dataset in memory crashes the application, while FLAIR fits the same dataset into memory.
### 5.2 Throughput Benchmark
We compare FLAIR with its competitors among the temporal databases: SWAB and GREYCAT. We study the throughput of each approach, in terms of number of insertions and reads per second. For the insertions, we successively insert 1M random samples into the storage solution (random values are the worst case for FLAIR, due to its way of modeling data). For the reads, we also incrementally insert 1M samples before querying 10,000 random samples among the inserted ones. GREYCAT is an exception: due to its long insertion time, we only insert 10,000 random values, and those values are then queried. Our experiment is done using a publicly available application [13].

Figure 7 shows the throughput of the approaches for sequential insertions and random reads; note the logarithmic scale. FLAIR drastically outperforms its competitors for insertions: it provides a speed-up from $\times 133$ against SWAB up to $\times 3{,}505$ against GREYCAT. The insertion scheme of FLAIR is fast as it relies on few parameters. GREYCAT, on the other hand, relies on a costly procedure when a sample is inserted: it tries to increase the degree of the current model until it fits the new point or a maximum degree is reached. GREYCAT aims at computing a model as compact as possible, which is not the best choice for fast online insertions. While SWAB performs better, it cannot compare to FLAIR because of the way it inserts a sample: when its sliding window is full and a new sample does not fit the current model, a costly bottom-up pass is triggered over the entire window.



Figure 7: Throughput for insertions and reads using FLAIR, SWAB, and GREYCAT (log scale). FLAIR drastically outperforms its competitors for insertions and reads.
For the reads (Fig. 7b), FLAIR also outperforms SWAB. Our investigation shows that FLAIR largely benefits from the time index it exploits to fetch the models: SWAB browses its list of models sequentially until the right model is found, while FLAIR relies on a dichotomic search. SWAB has a complexity linear in the size of the model list, while FLAIR's is logarithmic. Nonetheless, their lists of models have roughly the same size, since random samples were added. GREYCAT follows the same reading approach as SWAB, which is why it is not represented in the results: with only 10,000 insertions instead of 1M, its list of models is significantly smaller than the others, making the comparison unfair. Nonetheless, we expect GREYCAT to have a better read throughput, as its model list should be shorter.
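
A minimal sketch of such an indexed read, using Python's bisect as the dichotomy; the variable names are illustrative, not our library's API.

```python
from bisect import bisect_right

def read(starts, models, x):
    """Evaluate the estimate at time x. starts[i] is the first timestamp
    covered by models[i], kept sorted; models[i] = (x_i, y_i, A_i) encodes
    the line y = A_i * (x - x_i) + y_i. The dichotomy over `starts` finds
    the right model in O(log n), whereas a sequential scan is O(n)."""
    i = bisect_right(starts, x) - 1   # last model whose start is <= x
    if i < 0:
        raise KeyError("x precedes the first stored model")
    x_i, y_i, a_i = models[i]
    return a_i * (x - x_i) + y_i
```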
Note that those results were obtained with the worst case: random samples. Similarly unfit for FLAIR are periodic signals such as raw audio: our tests show a memory usage similar to that of random noise. Because FLAIR leverages linear interpolations, it performs best with signals that have a linear shape (e.g., GPS, accelerometer). We expect SWAB to store fewer models than FLAIR thanks to its sliding window, resulting in faster reads. However, the read throughput reported for FLAIR is a lower bound, and FLAIR is an order of magnitude faster than SWAB for insertions, so this does not make a significant difference. We can conclude that FLAIR is the best solution for storing an unbounded stream of samples on mobile devices.
### 5.3 Privacy Benchmark
#### 5.3.1 Location privacy
Location data is not only highly sensitive privacy-wise, but also crucial for location-based services. While LPPMs have been developed to protect user locations, they are generally applied on the server where the data is aggregated. The data is thus exposed to classical threats, such as malicious users, man-in-the-middle attacks, or database leaks. To avoid them, the best solution is to keep the data on the device where it is produced, until it is sufficiently obfuscated to be shared with a third party. With GPS data, this protection mechanism must be undertaken by a device-local LPPM. Evaluating the privacy of the resulting trace must also be performed locally, by executing attacks on the obfuscated data. Both processes require storing all the user mobility traces directly on the mobile device. While existing approaches have simulated this setting [25], no real deployment has ever been reported. In this section, we show that FLAIR overcomes one of the memory hurdles of constrained devices. We use FLAIR to store entire GPS traces on mobile devices, execute POI attacks, and protect the traces using the PROMESSE LPPM [35].

PROMESSE [35] is an LPPM that intends to hide POIs from a mobility trace while introducing a negligible spatial error. To do so, PROMESSE smooths the trajectories by replacing the mobility trace with a new one enforcing a constant speed while keeping the same starting and ending timestamps. The new trace ${T}^{\prime}$ is characterized by the distance $\delta$ between two points. First, locations are generated by considering the existing locations one by one in chronological order. If the distance between the last generated location ${T}^{\prime}[i]$ and the current one $T[c]$ is below $\delta$, this location is discarded. Otherwise, ${T}^{\prime}[i+1]$ is not defined as the current location $T[c]$, but as the location between ${T}^{\prime}[i]$ and $T[c]$ such that the distance between ${T}^{\prime}[i]$ and ${T}^{\prime}[i+1]$ is equal to $\delta$. Once all the locations of the new mobility trace are defined, the timestamps are updated to ensure that the period between two consecutive locations is constant, keeping the timestamps of the first and last locations unchanged. The resulting mobility trace is protected against POI attacks while providing high spatial accuracy.
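
A minimal sketch of this smoothing, assuming planar coordinates in meters and hypothetical names (a faithful implementation would use geodesic distances); long jumps emit several intermediate points, one every $\delta$ meters.

```python
import math

def promesse(trace, delta):
    """Smooth a trace of (t, x, y) tuples: resample the locations so that
    consecutive points are exactly `delta` apart, then redistribute the
    timestamps uniformly, keeping the first and last original ones."""
    out = [(trace[0][1], trace[0][2])]
    for _, x, y in trace[1:]:
        # points closer than `delta` to the last emitted one are discarded
        while True:
            px, py = out[-1]
            d = math.hypot(x - px, y - py)
            if d < delta:
                break
            r = delta / d  # place the next point exactly delta away
            out.append((px + r * (x - px), py + r * (y - py)))
    t0, t1 = trace[0][0], trace[-1][0]
    step = (t1 - t0) / max(len(out) - 1, 1)
    return [(t0 + i * step, x, y) for i, (x, y) in enumerate(out)]
```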
Our experiments are performed using a publicly available application [7].

Enforcing privacy on CABSPOTTING Using FLAIR, we store the latitudes and longitudes of the entire CABSPOTTING dataset in memory, using both $\varepsilon = 10^{-3}$ and $\varepsilon = 2 \times 10^{-3}$ (the latter representing an accuracy of approximately a hundred meters). For each user, we compute the memory gain obtained by modeling the dataset instead of storing the raw traces.
Figure 8 reports the gain distribution as a CDF, along with the average gain on the entire dataset. One can observe that most of the user traces benefit from using FLAIR: FLAIR provides an overall gain of 21% for $\varepsilon = 10^{-3}$ on the entire dataset, and a gain of 47.9% for $\varepsilon = 2 \times 10^{-3}$. Nonetheless, the mobility of a few users imposes an important cost: for them, using FLAIR is counter-productive. Fortunately, this does not cancel out the gain for the other users.

To better understand how the $\varepsilon$ parameter introduced by FLAIR affects the utility of the resulting traces, we study the POIs inferred from the modeled traces.



Figure 8: Memory gain distribution when storing CABSPOTTING with FLAIR. Using FLAIR with $\varepsilon = 10^{-3}$ yields a gain of 21%, while $\varepsilon = 2 \times 10^{-3}$ reaches a gain of 48%.



Figure 9: Distance distribution when using FLAIR on CABSPOTTING. The distances are computed between the POIs obtained using the modeled traces and their closest counterparts, obtained with the raw traces. Except for a few extreme values, the distances are short: 90% of the POIs are at a distance lower than 510 meters from the ground truth. The use of FLAIR does not alter the utility of the traces.
We compute the POIs of the trace both with and without FLAIR. To estimate the relevance of the obtained POIs, we compute the distance of each POI reported from the trace modeled by FLAIR to the closest POI among those of the raw trace. Figure 9 depicts the distribution, as a CDF, of this distance between "modeled" and "raw" POIs. Figure 9a shows that the distance is short: with $\varepsilon = 10^{-3}$, 99.5% of the distances are lower than 2,425 meters and 99% are lower than 1,700 meters. Figure 9b zooms in on this distribution, focusing on distances lower than 1,000 meters. With $\varepsilon = 10^{-3}$, 90% of the POIs obtained using FLAIR are at a distance lower than 510 meters from a POI inferred from the raw trace. By construction, POIs are the centers of spheres with a diameter of 500 meters where the user has stayed more than 5 minutes. Since the vast majority of the POIs obtained using FLAIR are within 500 meters of raw POIs, FLAIR delivers relevant approximations. With $\varepsilon = 2 \times 10^{-3}$, 90% of the POIs obtained using FLAIR are at a distance lower than 826 meters from a POI inferred from the raw trace: the gain in memory has an impact on the utility of the resulting trace.
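
A minimal sketch of this distance computation, assuming POIs given as (latitude, longitude) pairs in degrees; the helper names are illustrative.

```python
import math

def haversine(p, q):
    """Great-circle distance in meters between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * math.asin(math.sqrt(a))

def distances_to_closest(modeled_pois, raw_pois):
    """For each POI inferred from the modeled trace, the distance to the
    nearest POI inferred from the raw trace (the CDFs of Fig. 9 plot these)."""
    return [min(haversine(m, r) for r in raw_pois) for m in modeled_pois]
```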
Figure 10 reports the sensitivity analysis of $\varepsilon$, both in terms of gains and distances. As expected, the higher $\varepsilon$, the better the gains, but the longer the distances. Regarding the gains (Fig. 10a), a low $\varepsilon$ can induce a memory overhead. Indeed, if a model ends up covering a single data point, it generates a memory overhead, similarly to Fig. 6; in this case the overhead is 50%. We therefore recommend $\varepsilon = 10^{-3}$ as the minimal tolerated error to observe a gain. Regarding the distances, Figure 10b reports the distribution of distances below 1,000 meters, as the higher values follow the same tendency as Figure 9a. Except for a few extreme values, most of the distances remain short, even for high $\varepsilon$ values.



Figure 10: Distance distribution for different values of $\varepsilon$ when using FLAIR on CABSPOTTING. Distances and memory gains are computed from the modeled traces with different values of $\varepsilon$. The higher $\varepsilon$, the higher the gain, but the longer the distances between the inferred and raw POIs.
Processing Benchmark For dense datasets, e.g. with more than two GPS samples per second, the gain becomes even more significant. For example, storing the entire PRIVAMOV dataset using FLAIR with $\varepsilon = 10^{-3}$ results in a memory gain of 99.87%. Compared to sampling, FLAIR stores all the samples instead of discarding a part of them. However, the large number of samples can be a hindrance to many approaches, including the extraction of POIs. To port LPPMs onto constrained devices, other bottlenecks must be resolved in addition to storage.

For example, computing POIs with the traditional POI attack may lead to impractical computation times. Computing the POIs of user 1 of PRIVAMOV takes 2 hours: computing the POIs for the entire dataset is far too costly. We cannot expect end users to run processes with such computation times on their mobile phones: while FLAIR has removed the memory constraint, computation time is still a hurdle. Divide & Stay is a way, in this case, to decrease the complexity of POI computation. Table 1 displays the computation time of the POIs of PRIVAMOV user 1 on different platforms. It shows that applying Divide & Stay to the user 1 mobility trace decreases the computation from 2 hours to 59 seconds on Android, providing a $\times 120$ speed-up; the speed-up even reaches $\times 164$ on iOS, with the computation time decreasing from 1 hour to 22 seconds. Divide & Stay makes the in-situ use of POI attacks and the corresponding LPPMs possible.

In addition to speed, the quality of the inferred POIs is the most salient concern about Divide & Stay. We assess this quality by computing the distances to the POIs obtained from the POI attack on CABSPOTTING. We choose CABSPOTTING because running the attack on PRIVAMOV is prohibitive in terms of computation time. Figure 11 displays the distribution of the distances below 100 meters: more than 68% of the POIs are identical and 90% are at a distance lower than 22 meters from the actual ones. Divide & Stay provides an important speed-up without altering the quality of POIs. Note that FLAIR was not used in this case, as the performance of Divide & Stay is orthogonal to the use of a temporal database to model the samples.

Table 1: Computation times of raw POIs for PRIVAMOV user 1 on different platforms. Divide & Stay (D&S) is at least 100 times faster than the state-of-the-art approach.

<table><tr><td>Platform</td><td>POI-attack</td><td>D&S</td><td>Speed-up</td></tr><tr><td>Desktop</td><td>59 min 20 s</td><td>32 s</td><td>×111</td></tr><tr><td>iOS</td><td>1 h 00 min 01 s</td><td>22 s</td><td>×164</td></tr><tr><td>Android</td><td>1 h 58 min 04 s</td><td>59 s</td><td>×120</td></tr></table>

|
| 364 |
+
|
| 365 |
+
Figure 11: Distances distribution when using Divide & Stay on CABSPOTTING. The distances between the POIs are obtained using Divide & Stay and their closest counterparts, obtained with the traditional POI attack. Except for a few extreme values, the values are close: more than ${68}\%$ are the same and 90% of the POIs are at a distance lower than 22 meters than a "real" one.
|
| 366 |
+
|
| 367 |
+
Bringing back privacy to the user. By using both FLAIR and D&S we can perform POI-attacks and use LPPMs directly on the user's device. We consider the POIs of user 0 of CABSPOTTING with and without FLAIR, D&S, and Promesse, see Table 2.
|
| 368 |
+
|
| 369 |
+
The use of FLAIR and D&S alters the number of POIs, which explains the extreme values obtained in the distribution of the distances (Fig. 9 and 10b): it corresponds to POIs that have no counterpart and may be far away from other POIs. The use of D&S corroborates the results of Figure 11: an important part of the inferred POIs look similar to the raw ones. On the other hand, even though the number of POIs is similar, none of the POIs obtained using FLAIR are equal to the original one, with or without D&S, despite being very close.
|
| 370 |
+
|
| 371 |
+
Table 2: Impact of FLAIR and D&S on the number of inferred POIs from user 0 trace in CABSPOTTING. Thanks to FLAIR and D&S, PROMESSE succeeds to protect user privacy at the edge.
|
| 372 |
+
|
| 373 |
+
<table><tr><td rowspan="2">.5 Algorithm</td><td colspan="2">without Promesse</td><td colspan="2">with Promesse</td></tr><tr><td>Raw POIs</td><td>FLAIR</td><td>Raw POIs</td><td>FLAIR</td></tr><tr><td>POI-attack</td><td>30</td><td>31</td><td>0</td><td>0</td></tr><tr><td>D&S</td><td>30</td><td>30</td><td>0</td><td>0</td></tr><tr><td>POI-attack $\cap \mathrm{D}\& \mathrm{\;S}$</td><td>21</td><td>20</td><td>-</td><td>-</td></tr></table>
To conclude, our implementation of the data stream storage solution FLAIR enables the effective deployment of more advanced techniques, such as EDEN [25] or HMC [28]. This may require new algorithms, such as Divide & Stay, but it enables in-situ data privacy protection before any sensitive information is shared. We believe this is a critical step forward towards improving user privacy, as all LPPM experiments until today were either centralized or simulated.
### 5.4 Stability Benchmark
We further explore the capability of FLAIR to capture stable models that group as many samples as possible over the longest possible durations. Figure 12 reports the time and the number of samples covered by the models of FLAIR for the CABSPOTTING and PRIVAMOV datasets. One can observe that the stability of FLAIR depends on the density of the considered dataset. While FLAIR captures at most 4 samples for 90% of the models stored for CABSPOTTING (Fig. 12a), it reaches up to 2,841 samples for PRIVAMOV (Fig. 12c), which samples GPS locations at a higher frequency than CABSPOTTING. This is confirmed by Figures 12b and 12d, which report a time coverage of 202 ms and 3,602 ms for 90% of FLAIR models in CABSPOTTING and PRIVAMOV, respectively. Given that PRIVAMOV is a larger dataset than CABSPOTTING (7.2 GB vs. 388 MB), one can conclude that FLAIR succeeds in scaling with the volume of data to be stored.
### 5.5 Beyond Location Streams
Storing timestamps In all the previous experiments, the timestamps were not modeled by FLAIR, as we expect the user to query the time at which she is interested in the samples. However, it is straightforward to store timestamps using FLAIR: we store pairs $(i, t_i)$, with $t_i$ being the $i^{\text{th}}$ inserted timestamp. Unlike other sensor samples, the nature of timestamps makes them a good candidate for modeling: their value keeps increasing in a relatively periodic fashion. To assess the efficiency of FLAIR for storing timestamps, we stored all the timestamps of user 1 of the PRIVAMOV dataset with $\varepsilon = 1$, i.e., we tolerate an error of one second per estimate. The 4,341,716 timestamps were stored using 26,862 models, for a total of 80,592 floats and an overall gain of 98%, with a mean absolute error (MAE) of 0.246 second. Hence, not only does the use of FLAIR result in a dramatic memory gain, but it also provides very good estimations.
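
As a quick sanity check, and assuming the raw baseline stores one float per timestamp, the reported figures are consistent:

$$\text{gain} = 1 - \frac{80{,}592}{4{,}341{,}716} \approx 1 - 0.0186 \approx 98\%.$$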

|
| 386 |
+
|
| 387 |
+
Figure 12: Stability of the inferred models when using FLAIR on PRIVAMOV and CABSPOTTING with $\varepsilon = {10}^{-3}$ .
|
| 388 |
+
|
| 389 |
+
Storing accelerations To assess that FLAIR is suitable for storing unbounded data streams, we use FLAIR to store accelerometer samples. While storing random samples is of little benefit, accelerometer samples are used in practice to model user mobility. Coupled with other sensors' data, such as GPS values, we can infer if the user is walking, biking or taking a car for example $\left\lbrack {{19},{38},{41}}\right\rbrack$ . However, the accelerometer produces more than 15 samples per second, hence challenging the storage of such a data stream. Our implementation is publicly available [4].
|
| 390 |
+
|
| 391 |
+
We store 10,000 consecutive accelerometer samples with FLAIR and, for every 100 insertions, we report on the size of the file and the relative gain. We use FLAIR with $\varepsilon = 1$ as the accelerometer has high variability, even when the mobile is stationary. FLAIR reports a constant memory whenever stationary, and a small gain $\left( { > \times {1.39}}\right)$ when walking. FLAIR is thus a suitable solution to store data streams produced by the sensors of mobile devices.
|
| 392 |
+
|
| 393 |
+
We also observed that the performances of FLAIR may differ, depending on device configurations. As older hardware's accelerometers are noisier and produce fewer samples than newer sensors, FLAIR's gain appears as higher on latter generation hardware. For instance, inserting ${10k}$ samples with a Pixel 7 Pro (Android 13) smartphone is completed in 21 seconds, while doing the same on a Moto $Z$ (Android 8) lasts for 49 seconds. Regarding iOS, latest iPhone 14 Plus (iOS 16.0.1) takes up 1 minute 39 seconds to store same samples count.
## 6 Threats to Validity
While the combination of FLAIR and D&S succeeds in embedding LPPMs within mobile devices and increasing user privacy, our results might be threatened by some of the variables we considered.

The hardware threats relate to the classes of constrained devices we considered. In particular, we focused on the specific case of smartphones, which are the most commonly deployed mobile devices in the wild. To limit the bias introduced by a given hardware configuration, we deployed both FLAIR and D&S on recent Android and iOS smartphones for most of the reported experiments, and we also considered the impact of hardware configurations on the reported performance.

Another potential bias relates to the mobility datasets we considered in this paper. To limit this threat, we evaluated our solutions on two established mobility datasets, CABSPOTTING and PRIVAMOV, which exhibit different characteristics. Yet, we could further explore the impact of these characteristics (sampling frequency, number of participants, duration and scale of the mobility traces). Beyond mobility datasets, we could consider evaluating other IoT data streams, such as air quality metrics, to assess the capability of FLAIR to handle a wide diversity of data streams. To mitigate this threat, we reported on the storage of timestamps and accelerations in addition to two-dimensional locations.

Our implementations of FLAIR and D&S may suffer from software bugs that affect the reported performance. To limit this threat, we make the code of our libraries and applications freely available, to encourage the reproducibility of our results and to share the implementation decisions we took.

Finally, our results might strongly depend on the parameters we picked to evaluate our contributions. While FLAIR's performance (gain, memory footprint) varies depending on the value of the $\varepsilon$ parameter, we conducted a sensitivity analysis of this parameter and propose a default value $\varepsilon = 10^{-3}$ that delivers a minimum memory gain while limiting the modeling error.
## 7 Conclusion
The contributions of this paper are threefold: we introduced i) a new storage system based on piece-wise linear models, dubbed FLAIR, ii) a new way to compute POIs, called Divide & Stay, and finally iii) demonstrated how FLAIR can unlock device-local privacy protection on time series while using machine learning. Our extensive evaluations, based on real applications available for Android and iOS, show that FLAIR drastically outperforms its competitors in terms of insertion throughput (FLAIR is more than 130 times faster than the traditional SWAB) and read throughput (FLAIR reads 2,340 times faster than SWAB). While FLAIR can store tremendous amounts of data on mobile devices, Divide & Stay provides an important speed-up, reducing the total computation time of POI attacks by several orders of magnitude and making them suitable for mobile computing. By sharing these two frameworks with mobile developers, our contribution is an important step forward towards the real deployment of LPPMs and, more generally, privacy-friendly data-intensive workloads at the edge (e.g., federated learning on mobile phones).
## References
[1] Drift library. https://pub.dev/packages/drift. Last accessed on Sep 22nd, 2022.

[2] Fairphone 3 product page. https://shop.fairphone.com/en/fairphone-3. Last accessed on Sep 22nd, 2022.

[3] Firebase services. https://firebase.google.com. Last accessed on Sep 22nd, 2022.

[4] FLAIR accelerometer example application. https://anonymous.4open.science/r/temporalbddflutter_jsys/example/README.md. Last accessed on Mar 1st, 2023.

[5] FLAIR implementation. https://anonymous.4open.science/r/temporalbddflutter_jsys. Last accessed on Mar 1st, 2023.

[6] Flutter framework. https://flutter.dev/. Last accessed on Sep 22nd, 2022.

[7] In-situ LPPM. https://anonymous.4open.science/r/in-situ-lppm_jsys. Last accessed on Mar 1st, 2023.

[8] InfluxDB. https://www.influxdata.com/products/influxdb-overview/. Last accessed on June 1st, 2022.

[9] iPhone 12 product page. https://www.apple.com/iphone-12/specs/. Last accessed on Sep 22nd, 2022.

[10] Memory space benchmarking application. https://anonymous.4open.science/r/benchmarking_memory_space_jsys. Last accessed on Mar 1st, 2023.

[11] ObjectBox database. https://objectbox.io. Last accessed on Sep 22nd, 2022.

[12] PyTorch library. https://www.pytorch.org. Last accessed on Sep 22nd, 2022.

[13] Throughput benchmarking application. https://anonymous.4open.science/r/benchmarking_throughput_jsys. Last accessed on Mar 1st, 2023.

[14] Timescale database. https://www.timescale.com. Last accessed on Sep 22nd, 2022.

[15] Miguel E Andrés, Nicolás E Bordenabe, Konstantinos Chatzikokolakis, and Catuscia Palamidessi. Geo-indistinguishability: Differential privacy for location-based systems. In Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security, pages 901-914, 2013.

[16] Aurélien Bellet, Rachid Guerraoui, Mahsa Taziki, and Marc Tommasi. Fast and differentially private algorithms for decentralized collaborative machine learning. PhD thesis, INRIA Lille, 2017.

[17] Eugen Berlin and Kristof Van Laerhoven. An on-line piecewise linear approximation technique for wireless sensor networks. In IEEE Local Computer Network Conference, pages 905-912. IEEE, 2010.

[18] Cynthia Dwork. Differential privacy: A survey of results. In International conference on theory and applications of models of computation, pages 1-19. Springer, 2008.

[19] Shih-Hau Fang, Hao-Hsiang Liao, Yu-Xiang Fei, Kai-Hsiang Chen, Jen-Wei Huang, Yu-Ding Lu, and Yu Tsao. Transportation modes classification using sensors on smartphones. Sensors, 16(8):1324, 2016.

[20] Alex Galakatos, Michael Markovitch, Carsten Binnig, Rodrigo Fonseca, and Tim Kraska. FITing-Tree: A data-aware index structure. In Proceedings of the 2019 International Conference on Management of Data, pages 1189-1206, 2019.

[21] Sébastien Gambs, Marc-Olivier Killijian, and Miguel Núñez del Prado Cortez. De-anonymization attack on geolocated data. Journal of Computer and System Sciences, 80(8):1597-1614, 2014.

[22] Florian Grützmacher, Benjamin Beichler, Albert Hein, Thomas Kirste, and Christian Haubelt. Time and memory efficient online piecewise linear approximation of sensor signals. Sensors, 18(6):1672, 2018.

[23] Ramaswamy Hariharan and Kentaro Toyama. Project lachesis: parsing and modeling location histories. In International Conference on Geographic Information Science, pages 106-124. Springer, 2004.

[24] Eamonn Keogh, Selina Chu, David Hart, and Michael Pazzani. An online algorithm for segmenting time series. In Proceedings 2001 IEEE international conference on data mining, pages 289-296. IEEE, 2001.

[25] Besma Khalfoun, Sonia Ben Mokhtar, Sara Bouchenak, and Vlad Nitu. Eden: Enforcing location privacy through re-identification risk assessment: A federated learning approach. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 5(2):1-25, 2021.

[26] Xiaoyan Liu, Zhenjiang Lin, and Huaiqing Wang. Novel online methods for time series segmentation. IEEE Transactions on Knowledge and Data Engineering, 20(12):1616-1626, 2008.

[27] Ashwin Machanavajjhala, Daniel Kifer, Johannes Gehrke, and Muthuramakrishnan Venkitasubramaniam. l-diversity: Privacy beyond k-anonymity. ACM Transactions on Knowledge Discovery from Data (TKDD), 1(1):3-es, 2007.

[28] Mohamed Maouche, Sonia Ben Mokhtar, and Sara Bouchenak. Hmc: Robust privacy protection of mobility data against multiple re-identification attacks. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 2(3):1-25, 2018.

[29] Mohamed Maouche, Sonia Ben Mokhtar, and Sara Bouchenak. Ap-attack: a novel user re-identification attack on mobility datasets. In Proceedings of the 14th EAI International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, pages 48-57, 2017.

[30] Lakhdar Meftah, Romain Rouvoy, and Isabelle Chrisment. Fougere: user-centric location privacy in mobile crowdsourcing apps. In IFIP International Conference on Distributed Applications and Interoperable Systems, pages 116-132. Springer, 2019.

[31] Assaad Moawad, Thomas Hartmann, François Fouquet, Grégory Nain, Jacques Klein, and Yves Le Traon. Beyond discrete modeling: A continuous and efficient model for IoT. In 2015 ACM/IEEE 18th International Conference on Model Driven Engineering Languages and Systems (MODELS), pages 90-99. IEEE, 2015.

[32] Sonia Ben Mokhtar, Antoine Boutet, Louafi Bouzouina, Patrick Bonnel, Olivier Brette, Lionel Brunie, Mathieu Cunche, Stephane D'Alu, Vincent Primault, Patrice Raveneau, et al. Priva'mov: Analysing human mobility through multi-sensor datasets. In NetMob 2017, 2017.

[33] Michal Piorkowski, Natasa Sarafijanovic-Djukic, and Matthias Grossglauser. CRAWDAD data set epfl/mobility (v. 2009-02-24), 2009.

[34] Vincent Primault, Sonia Ben Mokhtar, Cédric Lauradoux, and Lionel Brunie. Differentially private location privacy in practice. arXiv preprint arXiv:1410.7744, 2014.

[35] Vincent Primault, Sonia Ben Mokhtar, Cédric Lauradoux, and Lionel Brunie. Time distortion anonymization for the publication of mobility data with high utility. In 2015 IEEE Trustcom/BigDataSE/ISPA, volume 1, pages 539-546. IEEE, 2015.

[36] Latanya Sweeney. k-anonymity: A model for protecting privacy. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10(05):557-570, 2002.

[37] Timescale. Building a distributed time-series database on PostgreSQL, August 2019. Last accessed on June 26th, 2022. URL: https://www.timescale.com/blog/building-a-distributed-time-series-database-on-postgresql/.

[38] Lin Wang, Hristijan Gjoreski, Mathias Ciliberto, Sami Mekki, Stefan Valentin, and Daniel Roggen. Enabling reproducible research in sensor-based transportation mode recognition with the sussex-huawei dataset. IEEE Access, 7:10870-10891, 2019.

[39] Kaihe Xu, Hao Yue, Linke Guo, Yuanxiong Guo, and Yuguang Fang. Privacy-preserving machine learning algorithms for big data systems. In 2015 IEEE 35th international conference on distributed computing systems, pages 318-327. IEEE, 2015.

[40] Blaise Agüera y Arcas. Decentralized machine learning. In 2018 IEEE International Conference on Big Data (Big Data), pages 1-1. IEEE, 2018.

[41] Meng-Chieh Yu, Tong Yu, Shao-Chen Wang, Chih-Jen Lin, and Edward Y Chang. Big data small footprint: The design of a low-power classifier for detecting transportation modes. Proceedings of the VLDB Endowment, 7(13):1429-1440, 2014.

[42] Changqing Zhou, Dan Frankowski, Pamela Ludford, Shashi Shekhar, and Loren Terveen. Discovering personal gazetteers: an interactive clustering approach. In Proceedings of the 12th annual ACM international workshop on Geographic information systems, pages 266-273, 2004.


§ FLAIR: STORING UNBOUNDED DATA STREAMS ON MOBILE DEVICES TO UNLOCK USER PRIVACY AT THE EDGE

Anonymous authors

Paper under double-blind review

§ ABSTRACT

Mobile devices are producing larger and larger data streams, such as location streams, which are consumed by machine learning pipelines to deliver location-based services to end users. Such data streams are generally uploaded and centralized to be processed by third parties, potentially exposing sensitive personal information. In this context, existing protection mechanisms, such as Location Privacy Protection Mechanisms (LPPMs), have been investigated. Alas, none of them have actually been implemented, nor deployed in real life, on mobile devices to enforce user privacy at the edge. We believe that the effective deployment of LPPMs on mobile devices faces a major challenge: the storage of unbounded data streams. This paper introduces FLAIR, a storage system based on a new piece-wise linear approximation technique that increases the storage capacity of mobile devices by relying on data modeling. Beyond the FLAIR storage layer, we also introduce Divide & Stay, a new privacy-preserving technique to execute Points of Interest (POIs) inference. Finally, we deploy both of them on Android and iOS to demonstrate that a real deployment of LPPMs is now possible.

§ 1 INTRODUCTION

With the advent of smartphones and, more generally, the Internet of Things (IoT), connected devices are mainstream in our societies and widely deployed at the edge. Such constrained devices are not only consumers of data and services, such as streaming, geolocation, or restaurant recommendations, but also producers of data streams, leveraging a wide variety of embedded sensors that capture the surrounding environment of end users, including their daily routines. Online services rely heavily on this crowdsourced data to improve the user experience through machine learning. The data deluge generated by a user is potentially tremendous: according to preliminary experiments, a smartphone can generate approximately 2 pairs of GPS samples and 476 triplets of accelerometer samples per second, resulting in more than 172,800 location and 41,126,400 acceleration samples daily. These data streams tend to be uploaded from the device to third-party service providers to extract the valuable information they contain. As an example, the Points Of Interest (POIs) of a user can be extracted from her GPS traces to better understand consumer behavior.

Figure 1: FLAIR compacts any location stream as a sequence of segments, obtained from a piece-wise model.

However, this continuous data stream inevitably includes sensitive personal information (SPI) that may jeopardize the privacy of end users if processed by malicious stakeholders. While machine learning algorithms are nowadays widely adopted as a convenient keystone to process large datasets and infer actionable insights, they often require grouping raw input datasets in a remote place, thus imposing a privacy threat on end users sharing their data. This highlights the utility vs. privacy trade-off that is inherent to any data-sharing activity. On the one hand, without crowdsourced GPS traces, it would be hard to model traffic in real time and recommend itineraries. On the other hand, it is crucial to protect user privacy when gathering SPI.

To address this ethical challenge, privacy-preserving machine learning [39] and decentralized machine learning [16, 40] revisit state-of-the-art machine learning algorithms to enforce user privacy, among other properties. Furthermore, regarding location privacy, several protection mechanisms, called Location Privacy Protection Mechanisms (LPPMs), have been developed to preserve user privacy in mobility situations. Location reports are evaluated and obfuscated before being sent to a service provider, hence keeping user data privacy under control. The user no longer automatically shares her data streams with service providers, but carefully selects what she shares and makes sure the data she unveils does not contain any SPI. For example, Geo-Indistinguishability [15] generalizes differential privacy [18] to GPS traces, while PROMESSE [35] smooths the GPS traces, both temporally and geographically, to erase POIs from the input trace. LPPMs successfully preserve sensitive data, such as POIs, while maintaining the data utility for the targeted service.

Despite their effectiveness, no LPPM has ever been implemented and deployed on mobile devices: previous works have been simulated on ADB [25] at best. While extending those works to Android and iOS devices may seem straightforward, it faces several challenges imposed by the scarce resources of mobile devices. In particular, LPPMs often require the user to access all her GPS traces and, ideally, those of additional users. The strategy consisting in storing entire raw traces does not scale and is impracticable for the average user, who does not possess a high-memory device. Unfortunately, the memory constraints of modern devices prevent users from sharing traces at the edge of the network.

This paper demonstrates that modeling data streams makes this transfer possible. In particular, we introduce Fast LineAr InteRpolation (FLAIR), a new data storage system based on a new piecewise linear approximation technique, and we use it to model and store data streams under memory constraints (see Fig. 1). Unlike existing stream or temporal databases, FLAIR does not store a fixed number of data samples but models their evolution, theoretically offering an unlimited storage capacity. We show that FLAIR can be deployed on Android and iOS smartphones to store the GPS traces of entire datasets. We then implement an LPPM working directly on mobile phones, which is made possible by the increased GPS storage capacity offered by FLAIR. However, the LPPM's privacy gains need to be evaluated in situ before the data is uploaded to service providers: are POIs actually obfuscated? To this end, we also introduce a new POI attack algorithm, dubbed Divide & Stay (D&S), which can compute POIs on large traces in tens of seconds directly on mobile phones. We report that our combined approaches enable storing tremendous amounts of geolocation data on mobile devices, thus allowing the use of LPPMs to ensure end-user privacy while using geolocation services.

In the following, we first discuss the related work (Sec. 2), before diving into the details of FLAIR and how it can be applied to boost location privacy (Sec. 3). We then present our experimental setup (Sec. 4) and the results we obtained (Sec. 5); we discuss the potential shortcomings of our approach (Sec. 6) before concluding (Sec. 7).

§ 2 RELATED WORKS

§ 2.1 LOCATION PRIVACY ATTACKS

Raw user mobility traces can be exploited to model users' behavior and reveal their sensitive personal information (SPI). In particular, Points Of Interest (POIs) are widely used as a way to extract SPI from mobility traces. In a nutshell, a POI is a place where the user comes often and stays for a significant amount of time: it can reveal her home, workplace, or leisure habits. From POIs, more subtle information can also be inferred: sexual orientation from attendance at LGBT+ places, for instance. The set of POIs can also be used as a way to re-identify a user in a dataset of mobility traces [21, 34]. POIs can be extracted using spatiotemporal clustering algorithms [23, 42]. Alternatively, an attacker may also re-identify a user directly from raw traces, without computing any POI [29].

§ 2.2 MOBILITY DATASET PROTECTION MECHANISMS

When data samples are gathered on a remote server, one can expect the latter to protect the dataset as a whole. In particular, $k$-anonymity [36] is the property of a dataset guaranteeing that, whenever some data leaks, the owner of each data trace is indistinguishable from at least $k - 1$ other users contributing to the dataset. Similarly, $l$-diversity [27] extends $k$-anonymity by ensuring that the $l$ users are diverse enough not to infer SPI about the data owner. Finally, differential privacy [18] aims at ensuring that the inclusion of a single element in a dataset does not significantly alter an aggregated query on the whole dataset. However, all these techniques require personal samples to be grouped to enforce user privacy.

§ 2.3 LOCATION PRIVACY PROTECTION MECHANISMS

Rather than protecting the dataset as a whole, each data sample can also be protected individually. In the case of location data, several protection mechanisms, called Location Privacy Protection Mechanisms (LPPMs), have been developed. They may be deployed on a remote server where all data samples are gathered, or directly on the device before any data exchange.

Geo-Indistinguishability (GEOI) [15] implements differential privacy [18] at the trace granularity. In particular, GEOI perturbs mobility traces with two-dimensional Laplacian noise, making POIs more difficult to infer. Heat Map Confusion (HMC) [28] aims at preventing re-identification attacks by altering all the traces together. The raw traces are transformed into heat maps, which are altered to look like another heat map in the dataset, and then transformed back into a GPS trace.
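
For illustration, a minimal sketch of GEOI's noise sampling following the polar method of Andrés et al. [15], assuming SciPy's Lambert W function and an approximate equirectangular meters-to-degrees conversion:

```python
import numpy as np
from scipy.special import lambertw

def planar_laplace(lat, lon, epsilon, rng=None):
    """Sample a geo-indistinguishable location around (lat, lon): a uniform
    angle, plus a radius drawn by inverting the 2D Laplacian's CDF via the
    W_{-1} branch of the Lambert function. `epsilon` is in 1/meters."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)
    p = rng.uniform(0.0, 1.0)
    r = -(lambertw((p - 1.0) / np.e, -1).real + 1.0) / epsilon  # meters
    # convert the planar offset to degrees (~111,320 m per degree of latitude)
    dlat = r * np.sin(theta) / 111_320.0
    dlon = r * np.cos(theta) / (111_320.0 * np.cos(np.radians(lat)))
    return lat + dlat, lon + dlon
```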
|
| 44 |
+
|
| 45 |
+
PROMESSE [35] smooths the mobility traces, both temporally and geographically, to erase POIs from the trace. PROMESSE ensures that, between each location sample, there is at least a given time and distance interval. In the resulting mobility trace, the user appears to have a constant speed. While PROMESSE blurs the time notion from the trace-i.e., the user never appears to stay at the same place-it does not alter their spatial characteristics. Yet, while POIs may be still inferred if the user repeatedly goes to the same places, it will be harder to distinguish such POIs from more random crossing points.
|
| 46 |
+
|
| 47 |
+
It is also possible to combine several LPPMs to improve the privacy of users $\left\lbrack {{25},{30}}\right\rbrack$ . Because of potential remote leaks, the user should anonymize her trace locally before sharing it, which is how EDEN [25] operates. However, EDEN has not been deployed: it has only been simulated on ADB. Even more so: despite their validity and to the best of our knowledge, no LPPM has been implemented in mobile devices. This is partly due to the tight constraints of mobile devices, memory-wise notably: HMC [28], for instance, requires locally loading a large set of GPS traces to operate.
|
| 48 |
+
|
| 49 |
+
§ 2.4 TEMPORAL DATABASES & MOBILE DEVICES
|
| 50 |
+
|
| 51 |
+
To overcome the memory constraints of mobile devices, one needs efficient embedded temporal databases. To take the example of Android: only few databases are available, such as SQLITE and its derivative DRIFT [1], the cloud-based Firebase [3], the NOSQL HIVE, and OBJECTBOX [11]. The situation is similar on iOS.
|
| 52 |
+
|
| 53 |
+
Relational databases Relational databases (e.g., SQL) are typically designed for OnLine Transactional Processing (OLTP) and OnLine Analytical Processing (OLAP) workloads, which widely differ from time-series workloads. In the first, reads are mostly contiguous (as opposed to the random-read tendency of OLTP); writes are most often inserts (not updates) and typically target the most recent time ranges. OLAP is designed to store big data workloads to get analytical statistics from data, while not putting the emphasis on read nor write performances. Finally, in temporal workloads, it is unlikely to process writes & reads in the same single transaction [37].
|
| 54 |
+
|
| 55 |
+
Despite these profound differences, several relational databases offer support for temporal data with industry-ready performance. As an example, TimescaleDB [14] is a middle-ware that exposes temporal functionalities atop a relational PostgreSQL foundation.
|
| 56 |
+
|
| 57 |
+
InfluxDB InfluxDB [8] is one of the most widely used temporal databases. Implemented in Go, this high-performance time series engine is designed for really fast writes to collect metrics and events from IoT sensors. Unfortunately, its retention policy prevents the storage to scale in time: the oldest samples are dumped to make room for the new ones.
|
| 58 |
+
|
| 59 |
+
To the best of our knowledge, however, none of the existing solutions prioritize data compression to the extent that they would prune raw data samples in favor of modeled approximations.
|
| 60 |
+
|
| 61 |
+
Modeling data streams While being discrete, the streams sampled by sensors represent inherently continuous signals. Data modeling does not only allow important memory consumption gains, but also flattens sensors' noise, and enables extrapolation between measurements. In particular, Piecewise Linear Approximation (PLA) are used to model the data in successive linear polynomials. An intuitive way to do linear approximation is to apply a bottom-up segmentation: each pair of consecutive points is connected by interpolations; the less significant contiguous interpolations are merged, as long as the obtained interpolations introduce no error above a given threshold. The bottom-up approach has low complexity but usually requires an offline approach to consider all the points at once. The Sliding Window And Bottom-up (SWAB) algorithm [24], however, is an online approach that uses a sliding window to buffer the latest samples on which a bottom-up approach is applied. emSWAB [17] improves the sliding window by adding several samples at the same time instead of one. Instead of interpolation, linear regression can also be used to model the samples reported by IoT sensors [22]. For example, GREYCAT [31] adopts polynomial regressions with higher degrees to further compress the data. Unfortunately, none of those works have been implemented on mobile devices to date.
|
| 62 |
+
|
| 63 |
+
Closer to our work, FSW [26] and the ShrinkingCone algorithm [20] attempt to maximize the length of a segment while satisfying a given error threshold, using the same property used in FLAIR. FSW is not a streaming algorithm as it considers the dataset as a whole, and do not support insertion. The ShrinkingCone algorithm is a streaming greedy algorithm designed to approximate an index, mapping keys to positions: it only considers monotonic increasing functions and can produce disjoints segments. FLAIR models non-monotonic functions in a streaming fashion, while providing joints segments.
|
| 64 |
+
|
| 65 |
+
§ 3 ENABLING USER PRIVACY AT THE EDGE
|
| 66 |
+
|
| 67 |
+
§ 3.1 IN-SITU DATA MANAGEMENT
|
| 68 |
+
|
| 69 |
+
For privacy's sake, we advocate for in-situ data management strategies-i.e., SPI should be anonymized within the mobile device before any data exchange. This avoids anonymizing by relying on a trusted third party first gathering multiple users' raw data. Such a third party may accidentally or intentionally leak users' data, making the adoption of such protection mechanisms ineffective.
|
| 70 |
+
|
| 71 |
+
In the following, we will focus on mobility traces. A mobility trace is an ordered sequence $T$ of pairs(t, g)where $t$ is a timestamp and $g$ is a geolocation sample, a latitude-longitude pair for example. The trace is ordered in chronological order and we assume that reported timestamps are unique.
|
| 72 |
+
|
| 73 |
+
We believe that keeping the raw data where it is created, i.e., on the mobile devices, increases user privacy. However, sharing data is required to enable location-based services, such as traffic modeling. The user should share their mobility traces only after they have been protected using an LPPM. The first challenge is to find which LPPM to use and which related parameters are optimal. To tackle this issue, a public dataset can be used to estimate the impact of an LPPM and to pick the best option. EDEN [25] proposes a more advanced solution: federated learning is used among the participants to learn a model that can predict the best configuration without sharing any mobility trace. Nonetheless, both approaches require storing a substantial volume of data to successfully protect user privacy.

The strong resource constraints of mobile devices prevent the previous solutions from working in practice. In particular, mobile ecosystems lack the system components to deploy efficient local storage solutions. Not only is there no advanced database readily available on mobile operating systems, but no native data modeling framework is provided either. For example, EDEN was implemented using the PYTORCH library [12], which is not available on smartphones${}^{1}$: the proposal was only simulated on a server. It is therefore crucial to deliver tools enabling the deployment of state-of-the-art techniques on mobile devices to support privacy-preserving strategies at the edge of the network.

Figure 2: FLAIR considers the sample $s_0 = (x_0, y_0)$ of the current model as the origin. In addition to the current gradient $A_0$, the minimum and maximum acceptable gradients, $A_{\min}$ and $A_{\max}$, are kept. $A_{\min}$ and $A_{\max}$ are defined such that the error reported by the model is lower than or equal to $\varepsilon$. To check if a new sample $s_t = (x_t, y_t)$ fits the model, FLAIR computes its gradient $A_t$ and compares it to $A_{\min}$ and $A_{\max}$.

§ 3.2 UNLEASHING YOUR DEVICE STORAGE WITH FLAIR
To overcome the memory constraint of mobile devices, efficient temporal databases must be ported to mobile environments. In particular, we advocate the use of data modeling, such as PLA [22, 24] or GREYCAT [31], to increase the storage capacity of constrained devices. We propose Fast LineAr InteRpolation (FLAIR), a storage system based on a fast PLA that stores approximate models of any data stream on any mobile device, instead of storing all the raw data samples as state-of-the-art temporal databases do. For simplicity, we refer to both the storage system and the associated modeling technique as FLAIR.

FLAIR models one-dimensional samples as piecewise linear interpolations that enforce the following invariant: all samples modeled by an interpolation must maintain an error below the configuration parameter $\varepsilon$. Data samples are inserted incrementally: the current model is adjusted to fit new samples until it cannot satisfy the invariant. In that case, the current model is persisted in memory $\mathcal{M}$, and a new interpolation begins from the two last inserted points. Each model in $\mathcal{M}$ is represented by a pair $(s_i, A_i)$: $s_i = (x_i, y_i)$ is the interpolation's initial sample, while $A_i$ is the line's gradient. Each model thus represents the function $y = A_i \times (x - x_i) + y_i$. While working on the current model, its initial sample is set as the origin $s_0 = (x_0, y_0)$; the current interpolation is thus a polynomial defined as $y = A_0 \times x$. The current gradient $A_0$ is the slope between $s_0$ and the last interpolated sample $s_t$. Figure 2 depicts a FLAIR model with the origin $s_0$ and the last sample $s_t$. It shows the interpolation parameters $(s_0, A_0)$ and two additional gradients, $A_{\min}$ and $A_{\max}$. A naive solution to maintain the invariant while updating the current model would be to memorize every sample between $s_0$ and the last sample $s_t$, so as to check their error against the model. Instead, FLAIR only maintains $A_{\min}$ and $A_{\max}$, which are updated at each sample insertion.
${}^{1}$ PyTorch allows importing and using trained models on Android and iOS, but disallows training them locally.

Figure 3: When a new sample fits within $[A_{\min}; A_{\max}]$, it is added to the current model by updating $A_0$ and the interval, ensuring that all previous samples fit the updated model.
Algorithm 1 FLAIR insertion using parameter $\varepsilon \in \mathbb{R}^{+*}$

Before: $\mathcal{M}$; $x_0, x_{t-1} \in \mathbb{R}^{+}$; $y_0, y_{t-1}, A_0, A_{\min}, A_{\max} \in \mathbb{R}$

1: function INSERT($x_t \in \mathbb{R}^{+}$, $y_t \in \mathbb{R}$)
2:   $(x_t^{\Delta}, y_t^{\Delta}) \leftarrow (x_t - x_0, y_t - y_0)$ ▷ Compute $A_t$
3:   $A_t \leftarrow y_t^{\Delta} / x_t^{\Delta}$
4:   if $A_{\min} \leq A_t \leq A_{\max}$ then
5:     $A_0 \leftarrow A_t$ ▷ Update model
6:     $A_{\min} \leftarrow \max(A_{\min}, (y_t^{\Delta} - \varepsilon) / x_t^{\Delta})$
7:     $A_{\max} \leftarrow \min(A_{\max}, (y_t^{\Delta} + \varepsilon) / x_t^{\Delta})$
8:   else
9:     $\mathcal{M}$.insert($x_0, y_0, A_0$) ▷ Persist model
10:    $(x_0, y_0) \leftarrow (x_{t-1}, y_{t-1})$ ▷ Build new model
11:    $(x_t^{\Delta}, y_t^{\Delta}) \leftarrow (x_t - x_0, y_t - y_0)$
12:    $A_0 \leftarrow y_t^{\Delta} / x_t^{\Delta}$
13:    $A_{\min} \leftarrow (y_t^{\Delta} - \varepsilon) / x_t^{\Delta}$
14:    $A_{\max} \leftarrow (y_t^{\Delta} + \varepsilon) / x_t^{\Delta}$
15:  end if
16:  $(x_{t-1}, y_{t-1}) \leftarrow (x_t, y_t)$ ▷ Update penultimate
17: end function
Algorithm 1 details the insertion of a new sample $s_t$. First, FLAIR computes the gradient $A_t$ of the line $(s_0, s_t)$ (lines 2-3). If $A_t$ is inside $[A_{\min}; A_{\max}]$, $s_t$ is added to the current model by updating $A_0$, $A_{\min}$ and $A_{\max}$ (lines 5-7), as displayed in Figure 3. Graphically, the resulting 'allowed cone' is the intersection of the model's previous cone and that of $s_t$'s allowed error. By recurrence, the cone materialized by $s_0$ and $[A_{\min}; A_{\max}]$ is the intersection of the error margins of every point modeled by the current interpolation, which illustrates how FLAIR respects its invariant. If $A_t$ falls outside the interval $[A_{\min}; A_{\max}]$, $s_t$ breaks the invariant: the current model is persisted in memory $\mathcal{M}$ (l. 9), and a new model $(s_0, A_0)$ is computed from $s_{t-1}$, along with new limits $A_{\min}$ and $A_{\max}$ (l. 10-14). In both cases, the penultimate sample $s_{t-1}$ is updated on line 16.
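For illustration, the insertion logic of Algorithm 1 can be sketched in Python as follows; this is our minimal transcription, with bootstrap handling for the first two samples, and not the authors' Flutter implementation:

```python
# Sketch of FLAIR insertion (Algorithm 1); our naming, not the paper's code.
class Flair:
    """One open linear model: origin (x0, y0), gradient a0, and the
    allowed cone [a_min, a_max]; closed models are persisted in `models`."""

    def __init__(self, eps):
        self.eps = eps                  # invariant: per-sample error <= eps
        self.models = []                # persisted models: (x0, y0, a0)
        self.x0 = self.y0 = None        # origin of the current model
        self.prev = None                # penultimate sample (x_{t-1}, y_{t-1})
        self.a0 = self.a_min = self.a_max = None

    def _start(self, xt, yt):
        """Open a new model from the current origin towards (xt, yt)."""
        dx, dy = xt - self.x0, yt - self.y0
        self.a0 = dy / dx
        self.a_min = (dy - self.eps) / dx
        self.a_max = (dy + self.eps) / dx

    def insert(self, xt, yt):
        if self.x0 is None:             # bootstrap: first sample is the origin
            self.x0, self.y0 = xt, yt
        elif self.a0 is None:           # second sample fixes the first cone
            self._start(xt, yt)
        else:
            dx, dy = xt - self.x0, yt - self.y0
            at = dy / dx                                  # l. 2-3
            if self.a_min <= at <= self.a_max:            # sample fits
                self.a0 = at                              # l. 5
                self.a_min = max(self.a_min, (dy - self.eps) / dx)
                self.a_max = min(self.a_max, (dy + self.eps) / dx)
            else:                                         # invariant broken
                self.models.append((self.x0, self.y0, self.a0))   # l. 9
                self.x0, self.y0 = self.prev              # l. 10
                self._start(xt, yt)                       # l. 11-14
        self.prev = (xt, yt)                              # l. 16
```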
In FLAIR, reading a value $x$ is achieved by estimating its image using the appropriate model, as shown in Algorithm 2.

Figure 4: When a new sample reports an error $> \varepsilon$, a new model is created using the penultimate sample $s_{t-1}$ as $s_0$.

Lines 2-3 show the computation of the image when $x$ belongs to the current model. When it does not, FLAIR retrieves the model in charge of approximating $x$ (l. 5). In practice, this is done through a dichotomic search, as $\mathcal{M}$ stores models in insertion order. Using that model, the interpolation of $x$ is computed on line 6.
Algorithm 2 FLAIR approximate read

Before: Current model $(x_0, y_0, A_0)$; memory $\mathcal{M}$ containing the previous models

1: function READ($x \in \mathbb{R}^{+}$)
2:   if $x_0 \leq x$ then
3:     return $A_0 \times (x - x_0) + y_0$
4:   end if
5:   Select $i$ s.t. $(x_i, y_i, A_i) \in \mathcal{M} \land x_i \leq x < x_{i+1}$
6:   return $A_i \times (x - x_i) + y_i$
7: end function
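The read path can be sketched in the same spirit, using Python's bisect module for the dichotomic search over the persisted models; again, this is our transcription, extending the `Flair` sketch above (for brevity the key list is rebuilt on each call, whereas a real implementation would maintain it incrementally):

```python
# Sketch of FLAIR approximate read (Algorithm 2), extending the Flair
# sketch above. Models are persisted in insertion order, so their x0 keys
# are sorted and bisect implements the dichotomic search.
import bisect

def read(self, x):
    if self.a0 is not None and x >= self.x0:   # l. 2-3: current model
        return self.a0 * (x - self.x0) + self.y0
    keys = [m[0] for m in self.models]         # l. 5: dichotomic search
    i = bisect.bisect_right(keys, x) - 1       # assumes x is in stored range
    xi, yi, ai = self.models[i]
    return ai * (x - xi) + yi                  # l. 6: interpolate

Flair.read = read  # attach the read path to the sketch class
```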
The value of $\varepsilon$ has an important impact on the performance of FLAIR. Figure 5 illustrates the longitude of Figure 1b with two extreme values of $\varepsilon$. If $\varepsilon$ is too small (Fig. 5a), none of the inserted samples fits the current model, initiating a new model each time. In that case, there will be one model per sample, imposing a significant memory overhead: the resulting model overfits the data. On the other hand, if $\varepsilon$ is too large (Fig. 5b), all the inserted samples fit and a single model is kept. While this is the best case memory-wise, the resulting model simply connects the first and last points and underfits the data.
While FLAIR is designed for modeling one-dimensional data, it generalizes straightforwardly to multi-dimensional data by combining several instances of FLAIR. As long as the newly inserted data samples fit the existing model, the memory footprint of FLAIR remains unchanged. This potentially unlimited storage capacity makes FLAIR a key asset for mobile devices, making the storage of mobility traces possible. We claim that the use of FLAIR alleviates the memory constraint of mobile devices, making the real use of LPPMs possible and paving the way for user control of SPI.
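For instance, a latitude-longitude stream can be handled by pairing two independent instances, one per dimension (a sketch reusing the `Flair` class, and its read method, from the sketches above):

```python
# Sketch: multi-dimensional storage as one Flair instance per dimension,
# reusing the Flair class from the sketches above.
class FlairND:
    def __init__(self, dims, eps):
        self.dims = [Flair(eps) for _ in range(dims)]

    def insert(self, x, values):
        for flair, v in zip(self.dims, values):
            flair.insert(x, v)

    def read(self, x):
        return tuple(flair.read(x) for flair in self.dims)

gps = FlairND(2, eps=1e-3)   # one instance for latitude, one for longitude
gps.insert(0.0, (45.7640, 4.8357))
```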
Figure 5: The performance of FLAIR is highly dependent on the value of $\varepsilon$: too small a value results in overfitting, and too large a value in underfitting.

§ 3.3 EVALUATING YOUR LOCATION PRIVACY WITH D&S
To demonstrate that FLAIR enables the deployment of existing LPPMs in the wild, we use FLAIR on a mobile device to store an entire dataset of mobility traces. Then, we perform a geolocation attack on these traces, with and without the use of an LPPM. We focus on POI attacks [34] and we use PROMESSE [35] as the LPPM to protect the mobility traces. The POI attack [34] aims at extracting the POIs from a mobility trace. The extraction is done by a two-step algorithm: first, potential candidates for POIs, dubbed stays, are extracted; then, these stays are merged to avoid duplicating similar POIs. A stay is defined as a circle with a radius lower than $D_{\max}$ where a user spent more than a set time $t_{\min}$; a stay is represented by its center. The two thresholds $t_{min}$ and $D_{\max}$ have an important impact on the type of POI extracted: short stays identify day-to-day patterns, such as shopping preferences, while long stays identify travel preferences and periods, for example. The resulting stays whose centers are closer than a given distance are then merged to obtain the final POIs.
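As an illustration, the stay test can be sketched as follows (helper names are ours; we use the common simplification that a window of samples forms a stay when all pairwise distances are below $D_{\max}$):

```python
# Sketch of the stay test described above (helper names are ours).
from math import radians, sin, cos, asin, sqrt

def geo_dist(g1, g2):
    """Haversine distance in meters between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*g1, *g2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6_371_000 * asin(sqrt(h))

def is_stay(window, t_min, d_max):
    """window: chronologically ordered list of (t, (lat, lon)) samples."""
    if window[-1][0] - window[0][0] < t_min:   # not enough time spent there
        return False
    return all(geo_dist(a[1], b[1]) <= d_max   # all samples close together
               for i, a in enumerate(window) for b in window[i + 1:])
```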
The regular way to extract the stays is to iterate over the mobility trace and compute stays as they appear [34]. Unfortunately, this approach is very expensive for dense mobility traces, i.e., traces with many data samples per unit of time. Instead of sampling, which results in a loss of information, we introduce a new algorithm that extracts the stays while scaling with the density of the traces. This contribution, named Divide & Stay (D&S), is a divide-and-conquer algorithm that considers the mobility trace as a whole, rather than iteratively.
The intuition behind Divide & Stay is to avoid computing stays when it is useless. It is impossible to extract a stay from a segment where more than $D_{\max}$ meters have been traveled in less than $t_{\min}$; for example, the mobility trace of a car trip at high speed in a straight line meets this condition. While the regular approach would consider each location until the end of the trace, D&S skips such a segment entirely; the denser the trace, the more time the regular approach would spend on it. The key idea of Divide & Stay is to recursively divide the trace until either such a segment is found and discarded, or a segment of fixed size is reached, on which the regular stay extraction is performed.
More precisely, in D&S, the trace is split into two parts, cut in the middle. Both segments, left and right, are considered individually. If the start and end points of a segment are temporally close but spatially far apart, no stay can possibly be extracted from it: no stay is further searched on this segment. Otherwise, stays are recursively computed with this top-down approach on the segment, until its size is lower than a given threshold $S$, e.g., 300. In that case, the classical way to compute stays [34] is triggered on the considered sub-trace. Algorithm 3 depicts the pseudo-code of Divide & Stay. The trace $T$ is manipulated as a whole, with the indexes $s$, $i$, and $e$ driving the recursion. $T[i].t$ refers to the timestamp of the sample $T[i]$ and $T[i].g$ refers to the associated location. The distance between two locations is computed with geo.dist, and the function getStays refers to the original function computing stays [34].
The more segments are discarded, the faster D&S runs compared to the regular approach. Stays around the middle points of index $i$ could be missed; D&S tolerates this because a POI is a cluster of several stays, so it is very unlikely to miss them all. D&S can be implemented sequentially or concurrently, to leverage multi-core processors.
Algorithm 3 Divide & Stay (D&S)

Input: $T \in (\mathbb{R} \times \mathbb{G})^n$; $S \in \mathbb{N}^{+}$; $s, e \in \llbracket 0; n-1 \rrbracket$; $(t_{min}, D_{max}) \in (\mathbb{R}^{+})^2$
Output: $STAYS \in (\mathbb{R} \times \mathbb{G})^n$

$STAYS \leftarrow \varnothing$
if $e - s \leq S$ then
  return getStays($T$.subtrace($s, e$), $t_{min}$, $D_{max}$)
end if
$i \leftarrow \lfloor (e + s)/2 \rfloor$
$t1 \leftarrow T[i].t - T[s].t$
$d1 \leftarrow$ geo.dist($T[s].g$, $T[i].g$)
if $\neg(d1 > D_{max} \land t1 \leq t_{min})$ then
  $STAYS \mathrel{+}=$ D&S($T, S, s, i, t_{min}, D_{max}$)
end if
$t2 \leftarrow T[e].t - T[i].t$
$d2 \leftarrow$ geo.dist($T[i].g$, $T[e].g$)
if $\neg(d2 > D_{max} \land t2 \leq t_{min})$ then
  $STAYS \mathrel{+}=$ D&S($T, S, i, e, t_{min}, D_{max}$)
end if
return $STAYS$
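A direct Python transcription of Algorithm 3 follows (a sketch under our naming; `get_stays` stands for the classical extraction of [34] and is only stubbed here, and `geo_dist` is the haversine helper from the stay-test sketch above):

```python
# Sketch of Divide & Stay (Algorithm 3) over a trace T given as a list of
# (t, (lat, lon)) samples; our transcription.
def get_stays(subtrace, t_min, d_max):
    """Stub for the classical iterative stay extraction of [34]."""
    return []  # replace with the real extraction

def divide_and_stay(T, S, s, e, t_min, d_max):
    stays = []
    if e - s <= S:                        # small enough: classical extraction
        return get_stays(T[s:e + 1], t_min, d_max)
    i = (e + s) // 2
    # Left half: recurse unless it is provably stay-free, i.e. more than
    # d_max meters traveled in at most t_min seconds.
    t1 = T[i][0] - T[s][0]
    d1 = geo_dist(T[s][1], T[i][1])
    if not (d1 > d_max and t1 <= t_min):
        stays += divide_and_stay(T, S, s, i, t_min, d_max)
    # Right half: same test.
    t2 = T[e][0] - T[i][0]
    d2 = geo_dist(T[i][1], T[e][1])
    if not (d2 > d_max and t2 <= t_min):
        stays += divide_and_stay(T, S, i, e, t_min, d_max)
    return stays
```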
§ 4 EXPERIMENTAL SETUP

This section presents the indicators we observe to assess the value of FLAIR's contribution to mobile machine learning on time series. We then introduce the datasets used to evaluate FLAIR's storage capabilities. Next, we present the competing solutions implemented in our benchmark applications to compare against FLAIR's performance. Finally, we discuss the experimental settings.
§ 4.1 KEY PERFORMANCE METRICS

To evaluate how our approach performs, we use two classes of key performance metrics: system metrics and privacy-related metrics. Concerning privacy-related experiments, we only measure the computation time when evaluating Divide & Stay: those metrics highly depend on the chosen algorithms, while the use of FLAIR has no impact on them. Since our objective is to demonstrate that FLAIR can help port state-of-the-art LPPM techniques to constrained devices, we do not discuss privacy-related metrics for the other experiments.
Memory footprint The key objective of FLAIR is to reduce the memory footprint required to store an unbounded stream of samples. More specifically, we explore two metrics: (i) the number of 64-bit variables required by the model and (ii) the size of the model in the device memory. To do so, we compare the size of the persistent file with the size of the vanilla SQLITE database file. We consider the number of 64-bit variables as a device-agnostic estimation of the model footprint.
I/O throughput Another relevant system metric is the I/O throughput of the temporal databases. In particular, we measure how many write and read operations can be performed per second.
We compare POI-inference algorithms, as well as the POIs returned by the same algorithm using different data backends. For that reason, we need two metrics to compare the sets of POIs returned in the different cases: a distance between POIs, and the sizes of the sets.
Measuring the quality of inferred POIs is difficult, as there is no acknowledged definition of how to compute POIs. We consider as our ground truth the POIs inferred by the state-of-the-art POI attack [34], which we refer to as the 'raw' POIs. The existence of such a 'ground truth' is however debatable, as two different, but close, POIs can be merged by the algorithm into a single POI. As an example, if a user visits two different shops separated by a road, but their distance is lower than $D_{\max}$, those will be merged into a single POI located in the middle of the road.
Distance between POIs As the POI definition is mainly algorithmic, we compute the distance of each obtained POI to its closest raw POI as the metric assessing the quality of new POIs. These distances are reported as a Cumulative Distribution Function (CDF). If FLAIR does not significantly alter the locations of the mobility traces it captures, the computed distances should be short.
Number of POIs In addition to the distances between POIs, we also consider the number of returned POIs as a metric. In our previous example, visiting the two shops may result in two different POIs because they have been slightly shifted by FLAIR. Beyond the numbers, we expect PROMESSE to successfully anonymize mobility traces by returning a grand total of zero POIs.
§ 4.2 MOBILITY DATASETS

Cabspotting CABSPOTTING [33] is a mobility dataset of 536 taxis in the San Francisco Bay Area. The data was collected over one month and is composed of 11 million records, for a total of 388 MB.

PrivaMov PRIVAMOV [32] is a multi-sensor mobility dataset gathered during 15 months from 100 users around the city of Lyon, France. We use the full GPS dataset, which includes 156 million records, totaling 7.2 GB. Compared to CABSPOTTING, PRIVAMOV is a highly dense mobility dataset.
§ 4.3 STORAGE COMPETITORS

SQLite SQLITE is the state-of-the-art solution to persist and query large volumes of data on Android devices. SQLITE provides a lightweight relational database management system. SQLITE is not a temporal database, but it is a convenient and standard way to store samples persistently on a mobile device. Insertions are atomic, so one may batch them to avoid one memory access per insertion.
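For reference, such batching looks as follows with Python's standard sqlite3 module (illustrative only: the paper's applications are written in Flutter/Dart, and this schema is ours):

```python
# Illustrative batched insertion with SQLite: one transaction per batch
# instead of one per sample.
import sqlite3

con = sqlite3.connect("samples.db")
con.execute("CREATE TABLE IF NOT EXISTS samples (t REAL PRIMARY KEY, y REAL)")

batch = [(float(i), float(i) * 0.5) for i in range(10_000)]
with con:                                  # a single atomic transaction
    con.executemany("INSERT INTO samples VALUES (?, ?)", batch)
con.close()
```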
SWAB Sliding-Window And Bottom-up (SWAB) [24] is a linear interpolation model. As in FLAIR, the samples are represented by a list of linear models. In particular, reading a sample is achieved by iteratively going through the list of models until the corresponding one is found, which is then used to estimate the requested value. The bottom-up approach of SWAB starts by connecting every pair of consecutive samples and then iterates by merging the least significant pair of contiguous interpolations. This process is repeated until no more pairs can be merged without introducing an error higher than $\varepsilon$. Contrary to FLAIR, this bottom-up approach is an offline one, requiring all the samples to be known. SWAB extends the bottom-up approach by buffering samples in a sliding window. New samples are inserted into the sliding window and then modeled using a bottom-up pass: whenever the window is full, the oldest model is kept and the captured samples are removed from the buffer.
One could expect the bottom-up approach to deliver more accurate models than the greedy FLAIR, resulting in a slight reduction in the number of models and faster readings. On the other hand, sample insertion is more expensive than in FLAIR, due to the execution of the bottom-up pass when storing samples. Like FLAIR, SWAB ensures that reading stored samples yields values at most $\varepsilon$ away from the exact ones.
Greycat GREYCAT [31] aims at compressing the data even further by not limiting itself to linear models. GREYCAT also models the samples by a list of models, but these models are polynomials. The samples are read in exactly the same way.
When inserting a sample, GREYCAT first checks whether it fits the current model. If so, nothing needs to be done. Otherwise, unlike FLAIR and SWAB, which directly initiate a new model, GREYCAT tries to increase the degree of the polynomial to make it fit the new sample. To do so, GREYCAT first regenerates $d + 1$ samples in the interval covered by the current model, where $d$ is the degree of the current model. Then, a polynomial regression of degree $d + 1$ is computed on those points along with the new one. If the resulting regression reports an error no higher than $\frac{\varepsilon}{2^{d+1}}$, the new model is kept; otherwise, the process is repeated by incrementing the degree, until either a fitting model is found or a maximum degree is reached. If the maximum degree is reached, the former model is stored and a new model is initiated. The resulting model is quite compact, and thus faster to read, but at the expense of a significant insertion cost.
Unlike with FLAIR and SWAB, the error on inserted samples can exceed $\varepsilon$, as the errors are not computed on the raw samples but on generated ones, which may not coincide. Furthermore, the use of higher-degree polynomials makes the implementation subject to overflow: to alleviate this effect, the inserted values are normalized.
§ 4.4 EXPERIMENTAL SETTINGS

For experiments with unidimensional data, i.e., the memory and throughput benchmarks, we set $\varepsilon = 10^{-2}$. The random samples used in those experiments follow a uniform distribution in $[-1,000; 1,000]$: it is very unlikely to have two successive samples with a difference lower than $\varepsilon$. For experiments on location data, and unless stated otherwise, we set $\varepsilon = 10^{-3}$ for FLAIR, SWAB, and GREYCAT. For GREYCAT, the maximum degree of the polynomials is set to 14. For POI computations, we use $t_{\min} = 5$ min and a diameter $D_{\max} = 500$ m for both the standard approach and D&S. Similarly, we use $\delta = 500$ m for PROMESSE: it should remove all the POIs from the traces.
The throughput experiments were each run four times and we report the average, as the standard deviation was small. All the other experiments are deterministic and performed once.
§ 4.5 IMPLEMENTATION DETAILS

We ran our experiments on a Fairphone 3 [2] running Android 11, and we reproduced them on an iPhone 12 [9] running iOS 15.1.1. We chose to implement our evaluation apps using Flutter [6]. Flutter is Google's UI toolkit, based on the Dart programming language, that can be used to develop natively compiled apps for Android, iOS, web, and desktop platforms (as long as the project's dependencies implement cross-compilation to all considered platforms).

We therefore implemented a Flutter library including FLAIR, its storage competitors, the POI attack with and without our D&S extension, and PROMESSE. Our implementation is publicly available [5]. For our experiments, we implemented several mobile applications based on this library.
§ 5 EXPERIMENTAL RESULTS

In this section, we evaluate our implementation of FLAIR on Android and iOS to show how it enables in-situ data management on mobile devices. We first show that FLAIR paves the way for storing very large quantities of samples, by comparing it to SQLITE and by reporting its performance when storing samples generated by the accelerometer. Then, we deploy the PROMESSE LPPM directly on mobile devices thanks to FLAIR. Still on the phones, we attack the traces using our POI attack Divide & Stay (D&S), to assess both the precision of the GPS time series modeled by FLAIR and the privacy gain of the LPPM.
§ 5.1 MEMORY BENCHMARK

As there is no temporal database, such as InfluxDB, available on Android, we first compare FLAIR's performance with SQLITE, the only database natively provided on Android.
To compare the memory consumption of the two approaches, the same two operations are performed with both SQLITE and FLAIR: (i) the incremental insertion of random samples and (ii) the incremental insertion of constant samples. The on-disk memory footprint of both solutions is compared when storing timestamped values. As FLAIR models the inserted samples, random values are the worst-case scenario it can face, while inserting constant values represents the ideal one. One million samples are stored and, for every 10,000 insertions, the size of the file associated with the storage solution is saved. The experiments are done with a publicly available application [10].
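This worst-case/best-case contrast can be reproduced in miniature with the `Flair` sketch from Section 3.2 (our toy harness, not the published benchmark application):

```python
# Toy reproduction of the worst-/best-case contrast with the Flair sketch
# from Section 3.2.
import random

def count_models(values, eps=1e-2):
    f = Flair(eps)
    for i, v in enumerate(values):
        f.insert(float(i), v)
    return len(f.models) + 1          # +1 for the still-open model

random_vals = [random.uniform(-1_000, 1_000) for _ in range(10_000)]
constant_vals = [42.0] * 10_000
print(count_models(random_vals))      # roughly one model per few samples
print(count_models(constant_vals))    # a single model
```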
Figure 6 depicts the memory footprint of both approaches. On the one hand, the size of the SQLITE file grows linearly with the number of inserted samples, no matter the nature (random or constant) of the samples. On the other hand, the size of FLAIR grows linearly with random values, while it remains constant, and in fact negligible, with constant values. The difference between vanilla SQLITE and FLAIR is explained by the way the model is stored: while SQLITE optimizes the way the raw data is stored, FLAIR is an in-memory stream storage solution that naively stores its coefficients in a text file. Using more efficient storage would shrink the difference between the two. As expected, the memory footprint of a data stream storage solution clearly outperforms that of a vanilla SQLITE database in the case of stable values. While random and constant values are extreme cases, in practice data streams exhibit a behavior between the two scenarios, which allows FLAIR to lower the memory required to store them.

Figure 6: Insertion of 1,000,000 samples, random (R) or constant (C), in both SQLITE and FLAIR.
In practice, we compared SQLITE and FLAIR for storing the entire PRIVAMOV dataset (7.2 GB). FLAIR only requires 25 MB, compared to more than 5 GB for SQLITE, despite the naive storage scheme used by FLAIR. On mobile devices, loading the raw dataset in memory crashes the application, while FLAIR fits the same dataset into memory.
§ 5.2 THROUGHPUT BENCHMARK

We compare FLAIR with its competitors among the temporal databases: SWAB and GREYCAT. We study the throughput of each approach, in terms of the number of insertions and reads per second. For the insertions, we successively insert 1M random samples into the storage solution (random values are used as a worst-case situation for FLAIR, due to its way of modeling data). For the reads, we also incrementally insert 1M samples before querying 10,000 random samples among the inserted ones. GREYCAT is an exception: due to its long insertion time, we only insert 10,000 random values, which are then queried. Our experiment is done using a publicly available application [13].
Figure 7 shows the throughput of the approaches for sequential insertions and random reads; note the logarithmic scale. FLAIR drastically outperforms its competitors for insertions: it provides a speed-up from ×133 over SWAB up to ×3,505 over GREYCAT. The insertion scheme of FLAIR is fast as it relies on few parameters. On the other hand, GREYCAT relies on a costly procedure when a sample is inserted: it tries to increase the degree of the current model until it fits the new point or until a maximum degree is reached. GREYCAT aims at computing a model as compact as possible, which is not the best choice for fast online insertions. While SWAB performs better, it cannot compare to FLAIR because of the way SWAB inserts a sample: when its sliding window is full and a new sample does not fit the current model, a costly bottom-up pass is triggered over the entire window.

Figure 7: Throughput for insertions and reads using FLAIR, SWAB, and GREYCAT (log scale). FLAIR drastically outperforms its competitors for insertions and reads.
For the reads (Fig. 7b), FLAIR also outperforms SWAB. Our investigation shows that the gain of FLAIR largely comes from the time index it exploits to fetch the models: SWAB browses the list of models sequentially until the right model is found, while FLAIR relies on a dichotomic search. SWAB has a complexity linear in the size of the model list, while FLAIR has a logarithmic one. Their lists of models nonetheless have roughly the same size, as random samples were added. GREYCAT takes the same approach as SWAB, and this is why it is not represented in the results: with only 10,000 insertions instead of 1M, its list of models is significantly smaller than the others, making the comparison unfair. Nonetheless, we expect GREYCAT to have a better throughput, as its model list would be shorter.
Note that those results have been obtained with the worst case: random samples. Similarly unfit for FLAIR are periodic signals such as raw audio: our tests show a memory usage similar to that of random noise. Because FLAIR leverages linear interpolations, it performs best with signals that have a linear shape (e.g., GPS, accelerometer). We expect SWAB to store fewer models than FLAIR thanks to its sliding window, resulting in faster reads. However, the throughput obtained here for FLAIR is already its worst case, and FLAIR remains an order of magnitude faster than SWAB for insertions, so this does not make a significant difference. We conclude that FLAIR is the best solution for storing an unbounded stream of samples on mobile devices.
§ 5.3 PRIVACY BENCHMARK

§ 5.3.1 LOCATION PRIVACY

Location data is not only highly sensitive privacy-wise, but also crucial for location-based services. While LPPMs have been developed to protect user locations, they are generally applied on the server where the data is aggregated. The data is thus exposed to classical threats, such as malicious users, man-in-the-middle attacks, or database leaks. To avoid them, the best solution is to keep the data on the device where it is produced, until it is sufficiently obfuscated to be shared with a third party. With GPS data, this protection must be undertaken by a device-local LPPM. Evaluating the privacy of the resulting trace must also be performed locally, by executing attacks on the obfuscated data. Both processes require storing all the user mobility traces directly on the mobile device. While existing approaches have simulated this setting [25], no real deployment has ever been reported. In this section, we show that using FLAIR overcomes one of the memory hurdles of constrained devices. We use FLAIR to store entire GPS traces on mobile devices, execute POI attacks, and protect the traces using the PROMESSE LPPM [35].
PROMESSE [35] is an LPPM that intends to hide the POIs of a mobility trace by introducing a negligible spatial error. To do so, PROMESSE smooths the trajectories: it replaces the mobility trace with a new one traveled at constant speed, while keeping the same starting and ending timestamps. The new trace $T'$ is characterized by the distance $\delta$ between two consecutive points. First, additional locations are inserted by considering the existing locations one by one in chronological order. If the distance between the last generated location $T'[i]$ and the current one $T[c]$ is below $\delta$, this location is discarded. Otherwise, $T'[i+1]$ is not defined as the current location $T[c]$, but as the location between $T'[i]$ and $T[c]$ such that the distance between $T'[i]$ and $T'[i+1]$ is equal to $\delta$. Once all the locations of the new mobility trace are defined, the timestamps are updated to ensure that the period between two consecutive locations is constant, keeping the timestamps of the first and last locations unchanged. The resulting mobility trace is protected against POI attacks while providing high spatial accuracy.
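Our hedged transcription of this smoothing procedure follows (naming is ours; we interpolate linearly in latitude-longitude space, a simplification valid for short distances, and we iterate when a raw point lies more than $\delta$ away, which is our reading of the description; `geo_dist` is the haversine helper from the stay-test sketch):

```python
# Hedged sketch of PROMESSE-style smoothing (our transcription, not the
# authors' code).
def promesse(trace, delta):
    """trace: chronologically ordered list of (t, (lat, lon)); returns the
    smoothed trace with uniformly redistributed timestamps."""
    pts = [trace[0][1]]                        # T'[0] is the first location
    for _, g in trace[1:]:
        # Emit intermediate points every `delta` meters towards g; raw
        # locations closer than delta to the last emitted point are dropped.
        while geo_dist(pts[-1], g) >= delta:
            last = pts[-1]
            frac = delta / geo_dist(last, g)
            pts.append((last[0] + frac * (g[0] - last[0]),
                        last[1] + frac * (g[1] - last[1])))
    t0, t1 = trace[0][0], trace[-1][0]
    n = len(pts)
    # Constant period between points; first/last timestamps are preserved.
    return [(t0 + k * (t1 - t0) / max(n - 1, 1), g)
            for k, g in enumerate(pts)]
```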
Our experiments are performed using a publicly available application [7].
Enforcing privacy on CABSPOTTING Using FLAIR, we store the latitudes and longitudes of the entire CABSPOTTING dataset in memory, using both $\varepsilon = 10^{-3}$ and $\varepsilon = 2 \times 10^{-3}$ (the latter representing an accuracy of approximately a hundred meters). For each user, we compute the memory gain obtained by modeling the dataset instead of storing the raw traces.
Figure 8 reports the gain distribution as a CDF, along with the average gain on the entire dataset. One can observe that most of the user traces benefit from using FLAIR: FLAIR provides an overall gain of 21% for $\varepsilon = 10^{-3}$ on the entire dataset, and a gain of 47.9% for $\varepsilon = 2 \times 10^{-3}$. Nonetheless, the mobility of a few users imposes a significant cost: for them, using FLAIR is counter-productive. Fortunately, this does not outweigh the gain for the other users.
Figure 8: Memory gain distribution when storing CABSPOTTING with FLAIR. Using FLAIR with $\varepsilon = 10^{-3}$ reports a gain of 21%, while $\varepsilon = 2 \times 10^{-3}$ reaches a gain of 48%.

Figure 9: Distance distribution when using FLAIR on CABSPOTTING. The distances are computed between the POIs obtained using the modeled traces and their closest counterparts, obtained with the raw traces. Except for a few extreme values, the values are close: 90% of the POIs are at a distance lower than 510 meters from the ground truth. The use of FLAIR does not alter the utility of the traces.

To better understand how the $\varepsilon$ parameter introduced by FLAIR affects the utility of the resulting traces, we study the POIs inferred from the modeled traces. We compute the POIs of each trace both with and without FLAIR. To estimate the relevance of the obtained POIs, we compute the distance of each POI reported on the trace modeled by FLAIR to the closest POI in the raw trace. Figure 9 depicts the distribution, as a CDF, of this distance between 'modeled' and 'raw' POIs. Figure 9a shows that the distance is short: with $\varepsilon = 10^{-3}$, 99.5% of the distances are lower than 2,425 meters and 99% are lower than 1,700 meters. Figure 9b zooms on this distribution, focusing on distances lower than 1,000 meters. With $\varepsilon = 10^{-3}$, 90% of the POIs obtained using FLAIR are at a distance lower than 510 meters to a POI inferred from the raw trace. By construction, POIs are the centers of spheres of a diameter of 500 meters where the user has stayed more than 5 minutes. Since the vast majority of the POIs obtained using FLAIR are within 500 meters of raw POIs, FLAIR delivers relevant approximations. With $\varepsilon = 2 \times 10^{-3}$, 90% of the POIs obtained using FLAIR are at a distance lower than 826 meters to a POI inferred from the raw trace: the gain in memory has an impact on the utility of the resulting trace.
Figure 10 reports on the sensitivity analysis of $\varepsilon$, both in terms of gains and distances. As expected, the higher $\varepsilon$, the better the gains, but the longer the distances. Regarding the gains (Fig. 10a), a low $\varepsilon$ can induce a memory overhead. Indeed, if a model covers only one data point, it generates a memory overhead similar to Fig. 6, in this case of 50%. We therefore recommend $\varepsilon = 10^{-3}$ as the minimal tolerated error to observe a gain. Regarding the distances, Figure 10b reports on the distribution of distances below 1,000 meters, as the higher values follow the same tendency as Figure 9a. Except for a few extreme values, most of the distances remain short, even for high values of $\varepsilon$.

Figure 10: Distance distribution for different $\varepsilon$ when using FLAIR on CABSPOTTING. Distances and memory gains are computed from the modeled traces with different values of $\varepsilon$. The higher $\varepsilon$, the higher the gain, but the longer the distances between the inferred and raw POIs.
Processing Benchmark For dense datasets, e.g., with more than two GPS samples per second, the gain becomes even more significant. For example, storing the entire PRIVAMOV dataset using FLAIR with $\varepsilon = 10^{-3}$ results in a memory gain of 99.87%. Compared to sampling, FLAIR stores all the samples instead of discarding a part of them. However, the large number of samples can be a hindrance to many approaches, including the extraction of POIs. To port LPPMs onto constrained devices, other system bottlenecks must be resolved in addition to storage.
For example, computing POIs with the traditional POI attack may lead to impractical computation times. Computing the POIs of user 1 of PRIVAMOV takes 2 hours: computing the POIs for the entire dataset is far too costly. We cannot expect end-users to run processes with such computation times on their mobile phones: while FLAIR has removed the memory constraint, computation time is still a hurdle. Divide & Stay is a way, in this case, to decrease the complexity of POI computation. Table 1 displays the computation time of the POIs of PRIVAMOV user 1 on different platforms. It shows that applying Divide & Stay to the user 1 mobility trace decreases the computation from 2 hours to 59 seconds on Android, providing a ×120 speed-up; the speed gain even reaches ×164 on iOS, with the computation time decreasing from 1 hour to 22 seconds. Divide & Stay makes the in-situ use of POI attacks and the corresponding LPPMs possible.
In addition to speed, the quality of the inferred POIs is the most salient concern about Divide & Stay. We assess this quality by computing the distances to the POIs obtained from the POI attack on CABSPOTTING. We choose CABSPOTTING because running it on PRIVAMOV is prohibitive in terms of computation time. Figure 11 displays the distribution of the distances below 100 meters: more than 68% are identical and 90% of the POIs are at a distance lower than 22 meters from the actual ones. Divide & Stay provides a significant speed-up without altering the quality of the POIs. Note that FLAIR was not used in this case, as the performance of Divide & Stay is orthogonal to the use of a temporal database to model the samples.
Table 1: Computation times of raw POIs for PRIVAMOV user 1 on different platforms. Divide & Stay (D&S) is at least 100 times faster than state-of-the-art approaches.

| Platform | POI-attack | D&S | Speed-up |
|----------|------------|-----|----------|
| Desktop | 59 min 20 s | 32 s | ×111 |
| iOS | 1 h 00 min 01 s | 22 s | ×164 |
| Android | 1 h 58 min 04 s | 59 s | ×120 |
Figure 11: Distance distribution when using Divide & Stay on CABSPOTTING. The distances are computed between the POIs obtained using Divide & Stay and their closest counterparts obtained with the traditional POI attack. Except for a few extreme values, the values are close: more than 68% are identical and 90% of the POIs are at a distance lower than 22 meters from a "real" one.
Bringing back privacy to the user By using both FLAIR and D&S, we can perform POI attacks and use LPPMs directly on the user's device. We consider the POIs of user 0 of CABSPOTTING with and without FLAIR, D&S, and PROMESSE (see Table 2).

The use of FLAIR and D&S alters the number of POIs, which explains the extreme values obtained in the distribution of the distances (Fig. 9 and 10b): these correspond to POIs that have no counterpart and may be far away from the other POIs. The use of D&S corroborates the results of Figure 11: a large share of the inferred POIs are similar to the raw ones. On the other hand, even though the number of POIs is similar, none of the POIs obtained using FLAIR are equal to the original ones, with or without D&S, despite being very close.
Table 2: Impact of FLAIR and D&S on the number of POIs inferred from the trace of user 0 in CABSPOTTING. Thanks to FLAIR and D&S, PROMESSE succeeds in protecting user privacy at the edge.

| Algorithm | Raw POIs (without PROMESSE) | FLAIR (without PROMESSE) | Raw POIs (with PROMESSE) | FLAIR (with PROMESSE) |
|-----------|-----------------------------|--------------------------|--------------------------|-----------------------|
| POI-attack | 30 | 31 | 0 | 0 |
| D&S | 30 | 30 | 0 | 0 |
| POI-attack ∩ D&S | 21 | 20 | - | - |
To conclude, our implementation of the data stream storage solution FLAIR enables the effective deployment of more advanced techniques, such as EDEN [25] or HMC [28]. This may require new algorithms, such as Divide & Stay, but it enables in-situ data privacy protection before any sensitive information is shared. We believe this is a critical step towards improving user privacy, as all LPPM experiments to date were either centralized or simulated.
§ 5.4 STABILITY BENCHMARK

We further explore the capability of FLAIR to capture stable models, grouping as many samples as possible for the longest possible durations. Figure 12 reports on the time and the number of samples covered by the models of FLAIR for the CABSPOTTING and PRIVAMOV datasets. One can observe that the stability of FLAIR depends on the density of the considered datasets. While FLAIR only captures at most 4 samples for 90% of the models stored in CABSPOTTING (Fig. 12a), it reaches up to 2,841 samples in the context of PRIVAMOV (Fig. 12c), which samples GPS locations at a higher frequency than CABSPOTTING. This is confirmed by Figures 12b and 12d, which report a time coverage of 202 ms and 3,602 ms for 90% of FLAIR models in CABSPOTTING and PRIVAMOV, respectively. Given that PRIVAMOV is a larger dataset than CABSPOTTING (7.2 GB vs. 388 MB), one can conclude that FLAIR scales with the volume of data to be stored.
§ 5.5 BEYOND LOCATION STREAMS

Storing timestamps In all the previous experiments, the timestamps were not modeled by FLAIR, as we expect the user to query the time at which she is interested in the samples. However, it is straightforward to store timestamps using FLAIR: we store couples $(i, t_i)$, with $t_i$ being the $i^{th}$ inserted timestamp. Unlike other sensor samples, the nature of timestamps makes them a good candidate for modeling: their value keeps increasing in a relatively periodic fashion. To assess the efficiency of FLAIR for storing timestamps, we stored all the timestamps of user 1 of the PRIVAMOV dataset with $\varepsilon = 1$, i.e., we tolerate an error of one second per estimate. The 4,341,716 timestamps were stored using 26,862 models, for a total of 80,592 floats and an overall gain of 98%, with a mean absolute error (MAE) of 0.246 seconds. Hence, not only does the use of FLAIR result in a dramatic memory gain, but it also provides very good estimations.
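A usage sketch, reusing the `Flair` class from the earlier sketch (the stream below is a toy stand-in for real timestamps):

```python
# Usage sketch: storing timestamps by modeling the couples (i, t_i) with
# the Flair class from the earlier sketch (eps = 1 second). The stream is
# a toy stand-in for real, roughly periodic timestamps.
timestamps = [1.05 * i for i in range(100_000)]

ts_store = Flair(eps=1.0)
for i, t in enumerate(timestamps):
    ts_store.insert(float(i), t)

print(len(ts_store.models) + 1)   # very few models for a periodic stream
print(ts_store.read(42.0))        # the 42nd timestamp, within 1 s of error
```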
Figure 12: Stability of the inferred models when using FLAIR on PRIVAMOV and CABSPOTTING with $\varepsilon = 10^{-3}$.
Storing accelerations To assess that FLAIR is suitable for storing unbounded data streams, we use FLAIR to store accelerometer samples. While storing random samples is of little benefit, accelerometer samples are used in practice to model user mobility: coupled with other sensors' data, such as GPS values, they can be used to infer whether the user is walking, biking, or driving, for example [19, 38, 41]. However, the accelerometer produces more than 15 samples per second, hence challenging the storage of such a data stream. Our implementation is publicly available [4].
We store 10,000 consecutive accelerometer samples with FLAIR and, for every 100 insertions, we report the size of the file and the relative gain. We use FLAIR with $\varepsilon = 1$, as the accelerometer has high variability, even when the mobile device is stationary. FLAIR reports a constant memory usage when stationary, and a small gain (over ×1.39) when walking. FLAIR is thus a suitable solution to store the data streams produced by the sensors of mobile devices.
We also observed that the performance of FLAIR may differ depending on the device configuration. As the accelerometers of older hardware are noisier and produce fewer samples than newer sensors, FLAIR's gain appears higher on later-generation hardware. For instance, inserting 10k samples on a Pixel 7 Pro (Android 13) completes in 21 seconds, while doing the same on a Moto Z (Android 8) lasts 49 seconds. Regarding iOS, an iPhone 14 Plus (iOS 16.0.1) takes 1 minute and 39 seconds to store the same number of samples.
§ 6 THREATS TO VALIDITY

While the combination of FLAIR and D&S succeeds in embedding LPPMs within mobile devices and increasing user privacy, our results might be threatened by some of the variables we considered.

The hardware threats relate to the classes of constrained devices we considered. In particular, we focused on the specific case of smartphones, which are the most commonly deployed mobile devices in the wild. To limit the bias introduced by a given hardware configuration, we deployed both FLAIR and D&S on recent Android and iOS smartphones for most of the reported experiments, and we also considered the impact of hardware configurations on the reported performance.

Another potential bias relates to the mobility datasets we considered in the context of this paper. To limit this threat, we evaluated our solutions on two established mobility datasets, CABSPOTTING and PRIVAMOV, which exhibit different characteristics. Yet, we could further explore the impact of these characteristics (sampling frequency, number of participants, duration, and scale of the mobility traces). Beyond mobility datasets, we could consider the evaluation of other IoT data streams, such as air quality metrics, to assess the capability of FLAIR to handle a wide diversity of data streams. To mitigate this threat, we reported on the storage of timestamps and accelerations in addition to 2-dimensional locations.

Our implementations of FLAIR and D&S may suffer from software bugs that affect the reported performance. To limit this threat, we make the code of our libraries and applications freely available, to encourage the reproducibility of our results and share the implementation decisions we took.

Finally, our results might strongly depend on the parameters we picked to evaluate our contributions. While FLAIR's performance (gain, memory footprint) varies depending on the value of the $\varepsilon$ parameter, we conducted a sensitivity analysis of this parameter and we propose a default value $\varepsilon = 10^{-3}$ that delivers a minimal memory gain while limiting the modeling error.
§ 7 CONCLUSION

The contributions of this paper are threefold: we introduced (i) FLAIR, a new storage system based on piecewise linear models; (ii) Divide & Stay, a new way to compute POIs; and (iii) a demonstration of how FLAIR can unlock device-local privacy protection on time series while using machine learning. Our extensive evaluations, based on real applications available for Android and iOS, show that FLAIR drastically outperforms its competitors in terms of insertion throughput (FLAIR is more than 130 times faster than the traditional SWAB) and read throughput (FLAIR reads 2,340 times faster than SWAB). While FLAIR can store tremendous amounts of data on mobile devices, Divide & Stay reduces the total computation time of POI attacks by several orders of magnitude, making them suitable for mobile computing. By sharing these two frameworks with mobile developers, our contribution is an important step towards the real deployment of LPPMs and, more generally, of privacy-friendly data-intensive workloads at the edge (e.g., federated learning on mobile phones).
papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/hj77eOQNIrx/Initial_manuscript_md/Initial_manuscript.md ADDED
papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/hj77eOQNIrx/Initial_manuscript_tex/Initial_manuscript.tex ADDED
papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/s-78X2Y9sm/Initial_manuscript_md/Initial_manuscript.md ADDED
papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/s-78X2Y9sm/Initial_manuscript_tex/Initial_manuscript.tex ADDED
papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/sR7rA8txBZF/Initial_manuscript_md/Initial_manuscript.md ADDED
papers/JSYS/JSYS 2023/JSYS 2023 March_Papers/sR7rA8txBZF/Initial_manuscript_tex/Initial_manuscript.tex ADDED
§ SOK: EVALUATIONS IN INDUSTRIAL INTRUSION DETECTION RESEARCH

Anonymous authors

Paper under double-blind review

§ ABSTRACT
Industrial systems are increasingly threatened by cyberattacks with potentially disastrous consequences. To counter such attacks, industrial intrusion detection systems strive to timely uncover even the most sophisticated breaches. Due to its criticality for society, this fast-growing field attracts researchers from diverse backgrounds, resulting in 130 new detection approaches in 2021 alone. This huge momentum facilitates the exploration of diverse promising paths but likewise risks fragmenting the research landscape and burying promising progress. Consequently, it needs sound and comprehensible evaluations to mitigate this risk and catalyze efforts into sustainable scientific progress with real-world applicability. In this paper, we therefore systematically analyze the evaluation methodologies of this field to understand the current state of industrial intrusion detection research. Our analysis of 609 publications shows that the rapid growth of this research field has positive and negative consequences. While we observe an increased use of public datasets, publications still only evaluate 1.3 datasets on average, and frequently used benchmarking metrics are ambiguous. At the same time, the adoption of newly developed benchmarking metrics sees little advancement. Finally, our systematic analysis enables us to provide actionable recommendations for all actors involved and thus bring the entire research field forward.

§ 1 INTRODUCTION

The digitalization of Industrial Control Systems (ICSs) has led to an escalating rise in cyberattacks [5, 52, 67], prominent examples being the Stuxnet and Ukrainian power grid attacks. These attacks are boosted by widely deployed legacy devices that were never meant to implement crucial security measures [15]. Specialized Industrial Intrusion Detection Systems (IIDSs) address this gap by providing an easily retrofittable security solution for legacy industrial deployments [16, 27]. To this end, IIDSs passively monitor network traffic or the physical process state and alert human operators to initiate adequate countermeasures in case of suspected attacks [74].

As an emerging hot research area, IIDSs attract researchers and industrial operators from diverse backgrounds. It thus comes as no surprise that, according to our literature research, at least 1109 distinct authors have published ideas for detection mechanisms between 2019 and 2021 alone. While their diverse backgrounds are beneficial to cover many different perspectives and ideas, the resulting fast-paced advancements lead to a lack of established evaluation methodologies and comparability across the field. Consequently, worthwhile ideas remain hard to identify, and it is unclear which improvements are suitable to close the gap to much-needed production-ready IIDSs. Ideally, the vast research efforts would be channeled through clear, comparable, coherent, and expressive evaluation methodologies. Only through the resulting comparability between approaches can the IIDS research landscape fully benefit from its high diversity.

Digging deeper into conducted evaluations, researchers use benchmarking datasets that are either publicly available or, more commonly, custom-made for one specific test, hindering repeatable experiments. Based on these datasets and an IIDS' alerts, various (performance) metrics are computed. However, IIDSs are often evaluated on pre-selected datasets covering specific favorable scenarios [14]. Furthermore, metrics are chosen or designed based on specific goals determined (to some degree arbitrarily) by the researchers. The resulting custom evaluation methodologies lead to an immense heterogeneity within the IIDS research landscape, where most works, despite common goals, lack comparability. Consequently, technological and scientific progress is inhibited.

In this regard, meta-analyses of IIDS research already unveiled inefficiencies in the detection capabilities of published works [17] or criticized the conclusions drawn from scientific evaluation procedures [8, 23, 43]. Simultaneously, we observe attempts to fix these issues by, e.g., collecting representative benchmarking datasets [14], inventing specialized industrial metrics to accurately assess the "success" of an IIDS [24, 32-35, 38, 40, 44, 69], or providing an abstract format to facilitate a coherent research landscape [74]. However, related work so far still fails to (i) quantify how IIDSs are evaluated within the vast body of literature, (ii) assess the applicability and impact of recent critiques partially known from, e.g., traditional intrusion detection [8, 51, 66], and (iii) deliver overarching recommendations to pave the way towards the shared goal of improving IIDSs to truly protect industrial networks and critical infrastructure against future cyberattacks.

With this paper, we strive to close the outlined gap with a Systematization of Knowledge (SoK) on the evaluation methodologies across IIDS research. To this end, we conduct a Systematic Mapping Study (SMS) to quantify the current state of the research landscape, encompassing 609 papers. From the resulting knowledge basis, we can draw a clear picture w.r.t. positive and negative developments as well as persistent flaws. Ultimately, our work allows us to provide clear recommendations for all involved actors to catalyze their joint efforts to protect the world's most critical networks.

Contributions. To pave the way toward a more coherent IIDS landscape, we make the following contributions:

* We survey 609 papers published until 2021 proposing IIDS designs and extract information about how their respective evaluations were conducted (Sec. 3).

* We systematize the gained knowledge w.r.t. utilized datasets and metrics to identify positive and negative trends as well as their potential for future improvements. We then complement these theoretical results with practical experiments to extend the understanding of the interplay between datasets and metrics (Sec. 4 and Sec. 5).

* Finally, we summarize current flaws in IIDS evaluations and formulate recommendations to improve future IIDS research for all involved actors: IIDS researchers, dataset creators, and industrial operators (Sec. 6).

Artifact Availability. We make the data of our SMS publicly available at https://www.dropbox.com/sh/bvhlrinhv4rn50u/AAAmQxzzGqZmU-7E0yfRvxZXa and will publish the evaluation tools used for our practical experiments upon acceptance (for anonymity purposes).

§ 2 RESEARCH ON INDUSTRIAL INTRUSION DETECTION

To lay the foundation for our work, we provide a brief introduction to the field of industrial intrusion detection (Sec. 2.1) and its challenges (Sec. 2.2) before we discuss related work on the evaluation methodologies of this research field (Sec. 2.3). Based on this, we motivate the need for systematizing the knowledge on evaluating industrial intrusion detection research and formulate basic research questions (Sec. 2.4) to ultimately steer future research in an effective direction.

§ 2.1 INDUSTRIAL INTRUSION DETECTION

The high degree of digitization in industries unleashes an enormous level of automation by integrating sensors, actuators, and control logic into tightly coupled cyber-physical systems. The current trend to build ICSs by adapting once proprietary and local network protocols, e.g., Modbus, to ubiquitous Ethernet networks, e.g., using ModbusTCP, paired with connectivity to the Internet, enables unique applications, e.g., remote monitoring or Supervisory Control and Data Acquisition (SCADA). Yet, these technologies simultaneously open new attack vectors, as prominent attacks demonstrate [5, 52].

To counter these security issues, various preventive measures have been proposed, e.g., secure variants of industrial communication protocols [14, 15]. But retrospectively integrating these measures into existing ICSs, which have been operating for decades, is costly, if possible at all, due to their strict requirements toward, e.g., availability and latency. In this context, intrusion detection is proposed as a promising alternative or complementary technology to passively retrofit security into ICSs [74] by monitoring systems or networks for suspicious activities or violations of security policies. However, established intrusion detection solutions from computer networks serving, e.g., offices or data centers, are not as effective in industries [76], primarily due to ICSs' reliance on unique (real-time) hardware such as Programmable Logic Controllers (PLCs) and sophisticated, custom-tailored attacks targeting the physical process [5, 70]. Consequently, research focuses on specialized Industrial Intrusion Detection Systems (IIDSs), which leverage the repetitive and predictable characteristics occurring in, e.g., Modbus' communication patterns or the physical process.

The IIDS research landscape can be coarsely classified along five dimensions: attacker model, detection technique, benchmarking environment, evaluation metric, and reactions. The attacker model influences which kinds of attacks an intrusion detection system should be able to detect and potentially even differentiate. Note that while some surveys consider fault detection similar to attack detection [49], faults do not occur as a consequence of cyberattacks but rather through, e.g., wear and tear [27] and are thus out of the scope of this work. The attacker model also determines an IIDS's input data, with common ones being network traffic, host data from SCADA systems or PLCs, and physical process data [49].

The main work of researchers then goes into designing the actual detection technique, which can be loosely categorized into knowledge-based, behavior-based, or hybrid approaches [53]. While knowledge-based systems (also referred to as misuse or supervised detection [53]) identify harmful behavior based on (known) patterns, behavior-based IIDSs rather specify how the ICS behaves normally and alert on deviations from usual actions. Moreover, the detection technique is heavily influenced by the attacker model. While attacks on the network layer are best detected on a per-packet basis, e.g., with deep-packet inspection [29], process-based detection can leverage a broader view of the ICS, e.g., by analyzing whether the physical process moves towards a critical state [13].

To validate the design of a detection technique and facilitate comparability of a newly proposed IIDS, its detection performance is evaluated with the help of suitable benchmarking environments and evaluation metrics (potentially in addition to computational performance or w.r.t. explainability [64]). Regardless of the data type, benchmarking environments for all kinds of industrial domains come in different forms, such as datasets, physical testbeds, simulations, or real facilities [14, 36]. Each type has its own trade-offs in terms of, e.g., accessibility, cost, or closeness to real deployments, so their selection needs to be made carefully. Moreover, the IIDS's performance needs to be measured with expressive metrics. In that regard, scientists can refer to a plethora of common metrics [61] expressing, e.g., the amount of false positive alerts or more complex characteristics (cf. Appx. B).

A final dimension is the reaction to IIDS alerts to mitigate an attack. Especially when transferring an IIDS to real-world deployments, operators may conduct (manual) forensic analyses to understand the cause of an alert [4] and ultimately mitigate the threat [67] by, e.g., applying firewall rules. Preventive measures can also be coupled directly to a detection mechanism for more automated reactions, then called intrusion prevention systems. Those do, however, need to be carefully designed, since in an industrial setting, simply blocking suspicious traffic may cause more harm than the attack itself.

§ 2.2 CHALLENGES OF EVALUATING IIDSS

IIDS research takes place in a diverse field encompassing ICS architectures ranging from water supply over power delivery to manufacturing, where cyberattacks are primarily unique to a particular deployment [5, 52]. Even though ICSs rely on researchers to design appropriate countermeasures and test their efficiency in real-world deployments, operators rarely provide such urgently-needed data samples [3, 50, 66]. While these challenges constitute an opportunity to tackle IIDS research from varying angles, transfer insights across industrial domains, and investigate their efficiency in real-world deployments, they likewise segregate the overall research landscape, resulting in isolated silos [74]. Consequently, sound scientific evaluations remain the foundation to facilitate coherence and measure the overall progress of the research field.

However, due to influences from various fields and a generally high interest in IIDSs, no coherent evaluation methodology could be established and subsequently improved so far. In practice, the path taken by most researchers to design and test their IIDSs relies on privately acquired and/or public (synthetic) datasets containing samples of benign traffic and/or physical process data as well as attack scenarios. To evaluate their IIDSs, researchers first train (and configure) their IIDS on samples of benign behavior and/or attacks (depending on the type of IIDS) from a specific industrial scenario. On a second evaluation dataset, they then compare the IIDS output (alerts) to the attack labels contained within the chosen dataset, i.e., they track how well the IIDS detects attacks and to which degree benign traffic or process values are unintentionally classified as suspicious. Finally, various metrics, e.g., the F1 score, quantify the detection performance and serve as the basis for comparisons to related work.
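
As a minimal illustration of this workflow, consider the following Python sketch with a deliberately simplistic toy detector; the class, training values, and labels below are hypothetical and not taken from any surveyed publication.

```python
from sklearn.metrics import f1_score

# Toy stand-in for an IIDS: alert whenever a sensor value leaves the
# range observed during benign training (a simplistic behavior-based detector).
class ToyIIDS:
    def train(self, benign):
        self.lo, self.hi = min(benign), max(benign)

    def classify(self, x):
        return not (self.lo <= x <= self.hi)  # True = alarm

iids = ToyIIDS()
iids.train([20.0, 21.5, 22.3])                     # benign training samples
alerts = [iids.classify(x) for x in [21.0, 35.0]]  # evaluation data points
print(f1_score([False, True], alerts))             # compare to attack labels
```
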

While most works adhere to this loosely outlined evaluation methodology, the devil is in the details [43]. Optimally, a given dataset would be suitable for a large number of IIDS types and thus constitute a reference benchmark. However, widely-used datasets usually cover only specific industrial domains and a small subset of imaginable attacks [14]. Thus, the datasets made available to the research community decisively influence the scenarios within which IIDSs are evaluated and also the types of attacks IIDSs are optimized for. Moreover, utilized evaluation metrics do not draw a complete picture of an IIDS's detection performance without putting them into context [27], which rarely happens adequately within the research field. As a matter of fact, this lack of hardened and proven research methodologies has been exposed to various criticism in recent years, as identified by related work.

§ 2.3 RELATED WORK ON EVALUATING IIDSS

Taking a closer look at recent literature on the challenges of evaluating industrial intrusion detection research (cf. Sec. 2.2), we identify a range of works discussing and criticizing the current state of IIDS research. First, various surveys provide an overview of the utilized detection methods across the research field [16, 27, 49, 53, 63, 67, 74, 75], ranging from learning specific communication patterns to analyzing the physical state of the monitored system. In this context, difficulties reproducing results and generalizing IIDSs to related ICS domains beyond those specifically evaluated were reported [17, 74]. While these surveys repeatedly cover more than 70 publications, showing the huge attention industrial intrusion detection attracts, they simultaneously indicate a lack of coherence and advancement within the research field.

Similar surveys focused on summarizing available datasets and testbeds (from which datasets can be generated) specifically designed for IIDS evaluations [14, 36]. These efforts identify at least 61 testbeds and 23 benchmarking datasets that are publicly available [14]. Since these surveys focus solely on datasets, they lack essential analyses of the actual application of datasets. As a rare exception, Balla et al. [10] analyzed dataset usage for deep learning detection methodologies, observing a strong bias toward non-ICS datasets, such as the KDD dataset family, with a usage of over 50%.

Besides the used dataset or testbed, the choice of metrics plays an important role when evaluating IIDSs. Without a dedicated focus on industrial intrusion detection, Powers [61] provides an overview of different metrics and puts their expressiveness into context. Yet, the considered point-based metrics (cf. Appx. B.1), e.g., accuracy or precision (also used in other domains such as machine learning), must be used carefully so as not to introduce any biases [61]. Moreover, especially for evaluations on (industrial) time-series datasets, further challenges, such as an imbalanced representation of attacks, have to be considered [8, 25]. Consequently, more advanced time series-aware metrics have been proposed [24, 32-35, 38, 44, 69] (cf. Appx. B.2). While this development promises to enhance the expressiveness of evaluations, their soundness and usage remain mostly unexplored so far.

Finally, various meta-surveys focus on machine learning pitfalls for industrial intrusion detection [18, 23, 50, 63] or highlight challenges when transferring IIDSs from research to actual industrial deployments [3, 50, 66]. These problems include, e.g., the inappropriate use of metrics [8], the dominance of lab-based datasets [8, 63], or a predominant focus on only a few of the wide range of industrial domains and protocols [63]. Importantly, empirical data on the evaluation of IIDS research is not yet available.

In summary, evaluations of IIDSs can, in theory, be based on a solid foundation of public datasets and advanced metrics. However, this research branch lacks a decent understanding of the methodologies actually applied within it beyond individual criticism regarding isolated aspects.

§ 2.4 THE NEED FOR SYSTEMATIZATION

The tremendous research interest in industrial intrusion detection, with 130 publications in the year 2021 alone, has led to a huge variety of evaluation methodologies. The resulting fast-paced research runs a huge risk of becoming disjoint [74], eventually slowing down the overall progress in securing ICSs. Most importantly, the heterogeneity across industrial domains [74] and an observed widespread evaluation bias [27, 70, 74] make comparisons between IIDSs difficult. Past surveys on detection methodologies, datasets, metrics, and meta-studies have only studied individual aspects in isolation from each other (cf. Sec. 2.3). Thus, to unveil the root causes hindering coherent and sustainable IIDS research, there is a need to systematically consolidate the current state of evaluations in industrial intrusion detection research to ultimately identify remedies against the status quo.

We argue that only by analyzing how IIDSs are evaluated on a broad scale, as done in a Systematic Mapping Study (SMS) [41], can we comprehensively tackle the question of research coherence and evaluation soundness, i.e., to which extent evaluations are performed on uniform (public) datasets with widespread and suitable metrics to achieve a high level of comparability. More precisely, we aim to answer the following research questions:

* Q1: Which datasets are actually used to evaluate IIDSs?

* Q2: To which extent do IIDSs compare against each other?

* Q3: Which metrics are utilized in evaluations of IIDSs?

Besides providing a comprehensive picture of the traits and characteristics of IIDS evaluations, answering these questions lays the foundation to formulate actionable recommendations for IIDS evaluation, enabling the different actors within the research community to focus their joint effort on the overarching challenge of securing industrial deployments.

§ 3 SYSTEMATIC MAPPING STUDY

The objective of this SoK is to provide a systematic understanding of how (differently) IIDS research is currently evaluated and how the current status quo can be sustainably improved. While related work already hints at prevalent issues that might prevent objective comparisons (cf. Sec. 2.3), a holistic analysis is missing so far. Therefore, we strive to ascertain the state of IIDS evaluation methodologies by conducting a Systematic Mapping Study (SMS), a variation of a classical Systematic Literature Review (SLR) [41], to obtain a large, qualitative, and unbiased collection of relevant publications in a verifiable process, oriented along established best practices and guidelines [41]. First, we search relevant papers for a broad subject (IIDS proposals) from the scientific literature with a systematic process. Afterward, publications are analyzed and classified based on the subjects of our analysis (Q1-Q3), i.e., their evaluation methodology.

Figure 1: To conduct the SMS, we follow a two-staged approach which results in extracting a total of 609 relevant publications proposing novel IIDSs as of December 2022. We list the corresponding search string in Appx. A.

To holistically answer the outlined research questions for a large and heterogeneous research field, we perform a comprehensive SMS as depicted in Fig. 1. According to the research questions, the SMS focuses on publications that propose IIDSs for ICSs, as researchers naturally have to evaluate their performance in a scientific manner. In contrast to Balla et al. [10], we only consider publications that leverage at least one industry-specific dataset, i.e., a dataset obtained from an ICS that, e.g., includes specific protocols such as Modbus, physical process data, or ICS-specific cyberattacks.

To conduct our SMS, we leverage Parsifal [21] to organize and comprehensibly document our screening process. First, we transformed the research questions into a search string ① (cf. Appx. A), which we successively optimized through validation with an initial set of known and representative literature ②. We then queried four search engines (IEEE Xplore, ACM DL, Scopus, and Web of Science) in December 2022 and found a total of 3046 hits ③. From this initial set of publications, we discarded duplicates (1484 publications) and performed a first screening of all remaining publications' titles and abstracts ④. In this initial screening, we mostly focused on removing publications from other research domains that still matched our search string and such publications that clearly do not propose (and thus evaluate) an IIDS approach. After this first screening phase, 953 unique publications remained for further consideration. Note that we did not filter for any specific detection techniques. Still, most publications covered by the survey (and thus the research field) rely on machine learning.

In a final step, we conducted a detailed screening of the remaining publications to extract those that build the foundation for our further analysis ⑤. When accessing the full texts of all papers, only 13 publications were not accessible to us and thus omitted. We performed a detailed second screening of all remaining publications, resulting in 331 further rejections of publications that do not match our requirements for proposing IIDSs, e.g., because they belong to fault detection (cf. Sec. 2.1). From the resulting set of 609 accepted publications, we extracted the relevant data to answer our research questions, such as the datasets and metrics they utilize for their evaluations. To ensure consistency, one author performed the detailed screening and data extraction, while the workload for the initial title/abstract screening was shared across multiple persons.

Through our systematic approach, we are, to the best of our knowledge, the first to analyze the entire IIDS landscape. With 609 analyzed publications, our work is based on a significantly larger knowledge base than any of the previous surveys of related work (cf. Sec. 2.3). This basis enables us to analyze the evaluation methodologies of the broad IIDS research landscape. Beyond presenting our findings, releasing our SMS as a public artifact (cf. Artifact Availability) may help future researchers to find appropriate candidates for comparisons, facilitate further analyses, or enable tracking of the progress within the ICS domain in the future.

§ 4 IIDS EVALUATION IN RESEARCH

With a systematic basis of 609 publications proposing IIDSs gathered in our SMS (cf. Sec. 3), we now assess how the overall research landscape on evaluation methodologies for IIDSs has evolved over time. As a systematic representation has been missing so far (cf. Sec. 2.3), we augment the field with a high-level overview in Sec. 4.1. Afterward, we unveil common trends in evaluation methodologies, especially w.r.t. the utilized datasets (cf. Sec. 4.2). Finally, we study the degree of comparability between IIDS publications in terms of the utilized datasets and evaluation metrics (cf. Sec. 4.3).

§ 4.1 OVERVIEW OF THE IIDS RESEARCH LANDSCAPE

We begin our analysis with a high-level overview of the evolution and composition of the IIDS research landscape.

§ 4.1.1 EVOLUTION

To understand the evolution of the IIDS research domain, we focus on the number of published papers over time (cf. Fig. 2), which we enrich with timestamps of notable cyber incidents and the releases of commonly used evaluation datasets. While the first publications within the IIDS domain date back to 2003, the domain initially received little attention, with only 28 publications until 2012. From 2013 onward, research took off exponentially, with an average increase of 40.9% in yearly publications. In 2021, the last year considered in our SMS, we identified 130 new publications, more than in any previous year. In comparison, the top 10 cyber security conferences experienced a lower average yearly increase in accepted publications during the same timespan, ranging from 7.2% for Crypto up to only 25.5% for USENIX Security [77].

Figure 2: Publications on IIDSs took off around 2013 and kept increasing as more cyberattacks occurred. Simultaneously, a trend toward evaluating on public datasets emerged.

We presume that the key driver for this development and the interest in this research domain is the raised public awareness following the Stuxnet cyberattack and subsequent ones like the two major incidents with the Ukrainian power grid [5]. Apart from such targeted attacks, industries were equally affected by more widespread malware, such as NotPetya or WannaCry [5], due to their increasing digitalization and Internet-facing deployments (cf. Sec. 2.1). With attacks still continuing [52], endangering human safety, expensive equipment, as well as the environment, the peak in 2021 with 130 proposals comes as no surprise, underlining the growing importance of IIDS research.

A first look at the (publicly) utilized datasets in Fig. 2 also allows us to deduce the existence of a growing number of public datasets. These datasets stem from various industrial domains, such as water purification, gas distribution, and electrical power generation, among many others. This conclusion aligns with recent results identifying a growing number of public datasets emerging across many industrial domains [14].

From this initial assessment, we conclude that IIDS research tackles the diverseness of industrial domains based on variously utilized datasets and experiences steady growth that does not seem to have reached its peak yet.

§ 4.1.2 COHERENCE

For such a rapidly growing research landscape in a diverse industrial environment, we further want to understand how coherently research is performed, i.e., whether directions exist that receive more attention and whether recent results build on previous findings. Therefore, we visualize the connections among publications by their citation relationships in Fig. 3. Citation data was retrieved and aggregated from OpenAlex and Semantic Scholar for all 609 publications, and we draw a connection between two publications if one cites the other. In Fig. 3, publications are arranged by the force-directed Fruchterman-Reingold placement algorithm [22], i.e., connected vertices are pulled closer together. Moreover, for publications utilizing publicly accessible datasets, we colored their vertices according to whether they belong to process data datasets, network traffic, or both. Note, however, that our analysis omits 125 publications for which no connection to other publications could be found, either because the citation data for the respective publications was incomplete or because the IIDSs were indeed presented without relating to the vast body of existing works.

Figure 3: Publications arranged in a citation graph reveal two directions, roughly disjoint into approaches considering network traffic datasets and ones evaluating process data.

On average, a publication is cited by 2.9 other IIDS publications, while the top 5 cited publications [13, 29, 30, 42, 70] (not in order) are cited by 46.6 papers on average as of March 1, 2023. These numbers provide a first glance at the connectivity in IIDS research.

Yet, upon an initial inspection of the citation structure, we observe that the IIDS research domain is divided into two basic directions based on the evaluated dataset types: A first group of 102 papers (blue) resembles the larger class that focuses on process data datasets. In addition, we discovered a slightly smaller class of 81 publications (red) that corresponds to intrusion detection methodologies detecting attacks in network data. Only rarely (19 times) do IIDSs fall into both classes (green). Interestingly, both research fields show little connectivity, indicating a limited exchange of knowledge across these fields. This is backed by the fact that the clustering coefficient for the sub-domains (process data 0.15 and network traffic 0.13) is slightly higher than for the entire IIDS research landscape (0.11).
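
A minimal sketch of this graph analysis, assuming the networkx library and toy citation pairs in place of the full OpenAlex/Semantic Scholar data:

```python
import networkx as nx

# Toy citation pairs (paper IDs stand in for the 609 surveyed publications).
citations = [("A", "B"), ("A", "C"), ("B", "C"), ("D", "B")]

G = nx.Graph()  # undirected: a citation in either direction connects two papers
G.add_edges_from(citations)

pos = nx.spring_layout(G)        # force-directed Fruchterman-Reingold placement
print(nx.average_clustering(G))  # clustering coefficient as a coherence measure
```
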

| Origin | Name | Type | Domain | Protocol | Usage |
|---|---|---|---|---|---|
| iTRUST (a) | SWaT [28] | P* | Water | - | 9.0% |
| iTRUST (a) | BATADAL [68] | P | Water | - | 1.6% |
| iTRUST (a) | WADI [2] | P | Water | - | 1.0% |
| Morris et al. (b) | Morris-Gas [55] | N | Gas | Modbus | 11.8% |
| Morris et al. (b) | Morris-Power [1] | P | Electricity | - | 5.6% |
| Morris et al. (b) | Morris-Water [55] | N | Water | Modbus | 2.8% |
| Misc | UCI-Water [60] | P | Water | - | 2.0% |
| Misc | HAI [65] | P | Diverse | - | 1.1% |
| Misc | Lemay [45] | N | Electricity | Modbus | 1.0% |

N: Network captures, P: Process data. * Network captures for SWaT exist, but are rarely used in research. (a) https://itrust.sutd.edu.sg/itrust-labs_datasets (b) https://sites.google.com/a/uah.edu/tommy-morris-uah/ics-data-sets

Table 1: Across the top nine public datasets, two account for the majority of uses. Despite ICSs' diversity, the top datasets focus on a few domain and protocol combinations.

Consequently, publications are more likely to cite each other if they stem from the same type, which promises a high number of comparisons among them. Still, the low clustering indicates incoherence in the overall research domain.

§ 4.2 BENCHMARKING DATASETS

With a basic understanding of the IIDS research domain, we now assess how evaluations are conducted in more detail. In this context, the chosen benchmarking datasets are a crucial building block, as they serve as the basis for nearly all subsequent performance calculations. While related work has assessed which datasets are readily available [14], their exact usage and distribution remain unknown as of now (cf. Sec. 2.3). Consequently, this section answers our first research question Q1 regarding the datasets IIDSs are evaluated on. For a description of the existing datasets and testbeds, please refer to the survey conducted by Conti et al. [14].

§ 4.2.1 OVERVIEW

As can be derived from Fig. 2, over the entire timespan, the majority of used datasets are private, and only 33.3% of the publications evaluate at least one public dataset. Note that we counted datasets as private if there existed no obvious procedure to retrieve the dataset. While private datasets may represent unique use cases, e.g., real-world data of industrial facilities, they significantly hinder reproducibility and comparisons to related works since they usually deny access to outsiders. In our SMS, we refrained from investigating private datasets in more depth because of the varying degrees of descriptions throughout the publications; hence, needed details cannot be fully captured or verified. Nonetheless, we observe a trend starting around 2013 toward increased utilization of public datasets, which account for 54.7% of the evaluated datasets in 2021. A recently published IIDS is therefore more likely to use public datasets.

Figure 4: Publications usually utilize a single dataset, and only 16.4% of the papers leverage multiple datasets at all.

This trend follows the publication of high-quality datasets that are still widely used today. Looking at peak usage of public datasets, the SWaT [28] and Morris-Gas Pipeline [55] datasets jointly occur in 20.4% of the publications, which constitutes the majority of the publications utilizing a public dataset at all (33.3%); other public datasets are thus used much less frequently. As a consequence, a significant portion of research activities seems to be biased toward these two datasets.

Regarding dataset diversity, across our entire SMS, we identified 35 unique public datasets, which exceeds previous reports of 23 datasets by Conti et al. [14]. In contrast to Balla et al. [10] (cf. Sec. 2.3), and by the design of our SMS (cf. Sec. 3), we predominantly encounter specialized industrial datasets, contradicting their observed research bias toward non-industrial datasets. However, of the many public datasets, 16 are only used once, and only 14 occur at least three times (the top nine public datasets are depicted in Fig. 2). Thus, availability alone is not decisive for widespread use; other factors, such as the covered domain and attacks as well as the overall quality of the data, seem to play an essential role as well.

§ 4.2.2 DATASET TYPES

In the next step, we examine the top nine datasets more closely and highlight their different directions (cf. Tab. 1).

First, a dataset's type can be either a network capture, mostly required for network-based IIDSs, or a (preprocessed) sample of physical system data, e.g., a time series of temperature values. For each type, we observe one major origin that accounts for most of the utilization across research, with iTRUST for process-based datasets and Morris et al. primarily for network-based ones. Considering the types of the top nine utilized datasets, we observe a stronger focus on process-based datasets with 20.3% compared to 15.6% for network-based ones, which is in line with the observations from Sec. 4.1.2.

| Combination | Count | Origins |
|---|---|---|
| Morris-Gas & Morris-Water | 12 | 1 |
| Morris-Gas & Morris-Power | 8 | 1 |
| Morris-Power & Morris-Water | 7 | 1 |
| SWaT & WADI | 4 | 1 |
| Morris-Gas & UCI-Water | 4 | 2 |
| Morris-Gas & SWaT | 3 | 2 |
| Electra Modbus & S7Comm | 3 | 1 |
| Morris-Gas, Power & Water | 5 | 1 |

No private datasets were considered.

Table 2: If multiple datasets are used, they mostly stem from the same class or origin, attributing little to richer evaluations.

Since industrial domains are diverse, we would expect a large coverage of them across utilized datasets as well. However, the commonly covered industrial domains are mainly driven by water and gas facilities, indicating an underrepresentation of all other domains, such as power generation, electricity distribution, or manufacturing. Yet, considering the large number of domains covered by private datasets, for which (high-quality) public alternatives do not exist, we can conclude neither that other domains receive little attention nor that those industries show no interest in IIDS research.

Lastly, industries are well known for their diverse and incompatible pooling of network protocols, mostly for legacy reasons [15]. Despite market-share studies identifying 11 dominant network technologies [31], research either focuses on Modbus (having a 10% market share [31]) or on no communication protocol at all. While we discovered IIDSs for further industrial protocols such as IEC 60870-5-104 [46], S7 [47], or DNP3 [62], their representation is marginal and mostly confined to private datasets. Therefore, the distributions of utilized datasets w.r.t. their type, industrial domain, and network protocol reveal a significant drift between peer-reviewed literature and actual production systems.

§ 4.2.3 RESEARCH EMBEDDING

In the last step, we assess how the different datasets are embedded into research. We therefore begin with the number of different datasets that are used within a single publication, as shown in Fig. 4. A large class of publications (509) evaluates a single dataset, and only a minority (100) evaluates more than one; a publication uses 1.3 datasets on average. This observation is in line with the clustering observed previously in Sec. 4.1.2, which is more coherent w.r.t. the top-used datasets, suggesting that researchers often primarily focus on a single dataset. Given that we found at least 35 publicly available datasets, researchers most likely could consider additional, compatible datasets, especially when claiming that proposed IIDSs are applicable to a large range of industrial domains [74]. This claim is backed by the fact that two publications have already evaluated as many as six datasets [11, 26]. However, our results also suggest a discrepancy between datasets w.r.t. ease of use, documentation, and completeness, explaining the limited use of the available datasets.

Looking into the preferred datasets, Tab. 2 enumerates the top dataset combinations. While we observe prominent combinations, the corresponding datasets usually originate from the same source and thus represent similar domains and protocols. Only seven publications evaluate datasets that stem from two origins. Thus, potentially widely applicable IIDSs are evaluated for specific (research) deployments from a single industrial domain, most likely not representative of the entire domain. Consequently, research fails to effectively widen the scope of available evaluations and rather introduces biases by focusing on a few specific niches.

Overall, IIDS research is still governed by private datasets, with a steadily increasing trend toward public datasets. However, we observe potential for improvement in the number of datasets used during evaluation as well as in their diversity w.r.t. type, industrial domain, and network protocol.

§ 4.3 REPRODUCIBILITY AND COMPARABILITY

Next, we address our second research question Q2, asking to which extent IIDSs compare against each other. We assess this question from two directions, first by examining the conditions for reproducibility and second by measuring the degree of comparability, both of which are perceived as good scientific standards [56], even though reproducibility lags far behind expectations in the entire research community (beyond intrusion detection research) [9]. While reproducibility enables researchers to comprehend, build upon, or even enhance existing work, comparability allows them to determine how well an approach performs, i.e., to highlight the impact of newly proposed contributions over previous work or which approaches might be suitable for real-world deployments.

§ 4.3.1 REPRODUCIBILITY

Within IIDS research, reproducing existing work is not uncommon, e.g., to concisely analyze the prospects and limitations of individual approaches [17, 43], prove the feasibility of new ideas upon reproduced implementations [74], or solely for scientific profoundness [56]. Yet, successfully reproducing approaches is not guaranteed [17]. To even enable the cumbersome process of reproducing IIDS research, the availability of artifacts, such as datasets or code, is needed.

In our survey, we observe that 33.3% of the publications already utilize public datasets, with an improving trend (54.7% of the datasets utilized in 2021 are public; cf. Fig. 2). However, successfully reproducing older publications is less likely. While the availability of code is not strictly required, as the relevant details should be part of the publication, it greatly eases the reproducibility process. Unfortunately, it is difficult to ascertain the availability of source code in a systematic way, as it is not always clear where to find availability statements or corresponding pointers in publications. Still, we only encountered 21 publications with obvious references, e.g., clearly highlighted repositories. We subjectively deduce an overall low availability of source code across IIDS research.

Figure 5: On average, authors compare IIDSs to 0.5 approaches from the related work (black), while theoretically, they could compare to at least 6.0. This gap increases for papers evaluating the SWaT or Morris-Gas datasets. (* A paper is comparable to all papers that share one dataset and metric and were published at least one year earlier.)

Thus, researchers often have to rely solely on the descriptions and evaluation results provided by a paper to verify their reimplementations. Overall, reproducibility is thus challenging, as optimally both criteria (public dataset and source code) have to be met. The increasing use of public datasets promises improvements in at least one direction, while publicly available artifacts accompanying publications remain the exception.

§ 4.3.2 COMPARABILITY

Fortunately, cumbersome reproduction is often not needed when, for example, it suffices to compare results to related work, e.g., to prove a novel attack detection approach superior. This requires that both works have been evaluated on at least one common dataset. Likewise, to objectively judge their detection performance, both publications must employ at minimum one identical evaluation metric. Common metrics include, but are not limited to, accuracy, precision, recall, or F1 [61]. Appx. B provides descriptions of further metrics.

To judge the degree of comparability across the research landscape, we extracted, for each publication, the actual number of comparisons made by the authors and calculated the number of theoretically possible comparisons. Therefore, while conducting our SMS (cf. Sec. 3), we gathered how many publications each author uses as comparison references and additionally extracted the exact metrics used in each publication's evaluation. We estimate the minimum number of theoretically possible comparisons by counting a publication as comparable if it shares at least one common dataset and metric and was published in an earlier year. Note that while not every two publications assume the same attack model, comparability can still be justifiable in cases where the dataset matches, since authors should select a dataset that best fits their approach. This methodology provides a great opportunity to assess actual and theoretically possible comparability, and Fig. 5 depicts the degree of comparability.
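
A minimal sketch of this estimation, assuming hypothetical paper records with year, dataset, and metric fields:

```python
# Toy paper records; 'datasets' and 'metrics' are sets of names.
papers = [
    {"id": 1, "year": 2019, "datasets": {"SWaT"}, "metrics": {"F1", "recall"}},
    {"id": 2, "year": 2021, "datasets": {"SWaT", "WADI"}, "metrics": {"F1"}},
]

def theoretical_comparisons(papers):
    # A paper could compare against every earlier publication sharing at
    # least one dataset AND at least one metric.
    return {
        p["id"]: sum(
            1 for q in papers
            if q["year"] < p["year"]
            and p["datasets"] & q["datasets"]
            and p["metrics"] & q["metrics"]
        )
        for p in papers
    }

print(theoretical_comparisons(papers))  # {1: 0, 2: 1}
```
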

Overall, the number of actual comparisons performed by researchers is low, with 0.5 compared publications on average. For the two most common datasets, we observe higher values (SWaT 2.4 and Morris-Gas 1.7). Still, there exists the theoretical opportunity for authors to compare a proposed IIDS to an average of 6.0 alternatives. On the one hand, this proves that many works are indeed comparable in terms of datasets and metrics. On the other hand, prominent datasets help in that regard, since their theoretical comparability is higher (SWaT 10.0 and Morris-Gas 16.4). Note that it should not be the ultimate goal to compare against as many publications as possible, since quality is preferable to quantity.

Looking closer into the details of Fig. 5, it is interesting that 10% of the publications evaluating the Morris-Gas dataset (yellow) actually compare only against 7% of other Morris-Gas publications. However, for SWaT (red), 10% of publications actually compare to about 18% of existing works. Meanwhile, the theoretical comparability for Morris-Gas publications is even higher than for SWaT (dotted lines). Regarding all publications (black), a total of 95.4% of publications are not compared to a single IIDS.

The state of comparability in IIDS research is decent but leaves opportunities for improvement in the future, as many publications already share common datasets and metrics.

§ 5 SURVEY ON EVALUATION METRICS

Previously, we analyzed comparability as a combination of utilizing overlapping datasets and evaluation metrics and observed that more publications could compare against each other in theory (cf. Sec. 4.3.2). However, our analysis still lacks a more detailed look at the evaluation metrics used in IIDS research. Moreover, and most importantly, it is still unclear how expressive a given (combination of) metric(s) is in judging the detection performance of an IIDS.

To this end, we provide an overview of common and newly proposed metrics and categorize them into a taxonomy (cf. Sec. 5.1). Next, we assess their utilization across IIDS research along our SMS (cf. Sec. 5.2). Finally, since metrics have known flaws (cf. Sec. 2.3), we examine how susceptible the research domain is in that regard by analyzing their expressiveness in practical experiments (cf. Sec. 5.3).

| Metric | Class | Appendix | Synonyms |
|---|---|---|---|
| TPR | Point-based | B.1.2 | Recall, Sensitivity, Hit-Rate |
| FNR | Point-based | B.1.3 | Miss-Rate |
| TNR | Point-based | B.1.4 | Specificity, Selectivity |
| FPR | Point-based | B.1.5 | Fall-out |
| PPV | Point-based | B.1.6 | Precision, Confidence |
| NPV | Point-based | B.1.7 | - |
| Accuracy | Point-based | B.1.8 | Rand Index |
| F1 | Point-based | B.1.9 | - |
| ROC | Point-based | B.1.10 | - |
| AUC | Point-based | B.1.11 | - |
| Detected Scenarios | Time series-aware | B.2.1 | - |
| Detection Delay | Time series-aware | B.2.2 | - |
| (e)TaPR [33, 34] | Time series-aware | B.2.3 | eTaP |
| Affiliation [32] | Time series-aware | B.2.4 | - |

Table 3: Our taxonomy distinguishes between point-based and time series-aware metrics. Metrics may occur under different synonyms. For details, refer to Appx. B.

§ 5.1 A TAXONOMY OF IIDS EVALUATION METRICS

Evaluating the performance of an IIDS is of utmost importance to prove its effectiveness and compare it quantitatively against related works, either in terms of attack detection performance or computational resources.

Since computational resources are stated only occasionally throughout the SMS, we only shortly introduce which aspects were evaluated. The most prominent aspect, covered by 136 publications, is the time to train a model or classify a given data point or dataset. Less frequent are statistics on CPU/GPU usage (13), RAM utilization (12), or model size (16). However, a sound comparison without equivalent hardware or implementations is challenging, and these metrics are therefore beyond the scope of this SoK in the following.

Regarding detection performance, during the conduction of our SMS, we extracted a total of 167 distinct metrics that were used in the evaluations. To provide an initial holistic overview, we present the most used metrics found in the SMS, together with relevant (newer) ones observed in related work, in a taxonomy (cf. Tab. 3). The metrics are discussed in a more general fashion in the following, while short explanations for all 14 introduced metrics can be found in Appx. B.

§ 5.1.1 CONFUSION MATRIX

Scientific evaluations of IIDSs are based on labeled benchmarking datasets (cf. Sec. 4.2) that include samples of cyberattacks (malicious) and benign behavior. After a training phase, for each data point in the dataset, the known labels are compared to the output of the IIDS (alarm or no alarm). The high-level goal of an IIDS is to detect as many attack instances as possible while emitting few (false) alarms for benign behavior. Note that especially in ICSs, where cyberattacks are rare compared to benign behavior, false alarms should be minimal [18].

As the first performance indicators, one can count the occurrences of all four possible combinations between dataset labels and IIDS outcomes, called true-positives (TP), true-negatives (TN), false-negatives (FN), and false-positives (FP), making up the confusion matrix that captures an IIDS's behavior.
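
A minimal sketch of this counting, assuming per-data-point boolean sequences `labels` (True = attack) and `alerts` (True = alarm):

```python
def confusion_matrix(labels, alerts):
    tp = tn = fp = fn = 0
    for label, alert in zip(labels, alerts):
        if label and alert:
            tp += 1  # attack point correctly alarmed
        elif not label and not alert:
            tn += 1  # benign point without alarm
        elif alert:
            fp += 1  # false alarm on benign behavior
        else:
            fn += 1  # missed attack point
    return tp, tn, fp, fn
```
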

§ 5.1.2 POINT-BASED METRICS

Since there is a desire to express performance with a single value irrespective of the dataset, a large variety of point-based metrics derived from the confusion matrix exists [61] (cf. Tab. 3). These express properties such as the overall correctness (accuracy), the fraction of correct alarms (precision), or the fraction of identified attacks (recall). Point-based metrics find wide application beyond IIDS research, e.g., in machine learning, and are thus a natural choice for comparisons.
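
Building on the counts from the sketch above, the four most common point-based metrics follow directly (guards against division by zero are omitted for brevity):

```python
def point_based_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # overall correctness
    precision = tp / (tp + fp)                  # fraction of correct alarms
    recall = tp / (tp + fn)                     # fraction of identified attacks
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```
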

§ 5.1.3 TIME SERIES-AWARE METRICS

Point-based metrics are suitable when the benchmarking dataset's entries are independent. However, ICSs are inherently time-dependent, i.e., the current state of an ICS is always a result of the system's previous state. Consequently, IIDS datasets extracted from these systems also need to be considered under the aspect of time, i.e., an alarm extending beyond an attack, while the system has not yet reached its normal operational state, should be interpreted differently from a false alarm in the middle of normal behavior. In such or similar scenarios, point-based metrics are skewed, as has been known in the literature since Gensler et al. [25] in 2014.

Consequently, many novel time-aware metrics tackle such flaws [25, 32-34, 44, 69]. They, e.g., simply count the number of detected continuous attack scenarios (detected scenarios) [48], aggregate the time it takes until the IIDS emits an alert after an attack began (detection delay), or define new time series-aware versions of precision and recall that favor early detection of an attack instance ((e)TaPR [34]). Yet, Huet et al. [32] already found that (e)TaPR is not free of flaws and responded with their own Affiliation metric. Note that while further time series-aware metrics like Numenta [44] or the ones proposed by Tatbul et al. [69] and Gensler et al. [25] exist, they were observed only seldom in our SMS, if at all.
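
The two simplest time series-aware metrics can be sketched as follows, assuming `scenarios` is a hypothetical list of (start, end) index ranges of the labeled attack scenarios and `alerts` the per-point boolean IIDS output:

```python
def detected_scenarios(scenarios, alerts):
    # A scenario counts as detected if at least one alert falls within it.
    return sum(any(alerts[start:end + 1]) for start, end in scenarios)

def detection_delays(scenarios, alerts):
    # Per detected scenario: data points from attack start to the first alert.
    delays = []
    for start, end in scenarios:
        first = next((i for i in range(start, end + 1) if alerts[i]), None)
        if first is not None:
            delays.append(first - start)
    return delays
```
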

§ 5.2 METRICS UTILIZED IN IIDS RESEARCH

Given that a wide variety of metrics exists to express IIDS performance, in our final research question Q3, we ask how often and when these metrics are used. Overall, in our SMS, we found 167 different metrics and flavors, including, e.g., subtle deviations such as multi-class or weighted variants. To handle this amount of metrics, we aggregated them into similar classes, e.g., binary-class and multi-class accuracy are considered the same metric type. Since the majority is used infrequently, i.e., only 12 occur at least ten times, we bundle rarer metrics into a single class (others) in the following.

Figure 6: Point-based metrics dominate IIDS evaluations, with accuracy, precision, recall, and F1 being the most used metrics. Over time, the number of metrics in a publication increased to currently 3.2 on average in 2021.

§ 5.2.1 METRICS OVER TIME

To obtain a first overview of the utilization of frequent metrics, we depict their use over time in Fig. 6. First of all, the average number of different metrics used in a single publication (2.3 overall) has kept increasing since 2013, and nowadays publications use 3.2 metrics on average. This greatly coincides with the previous observation in Fig. 2, where the year 2013 marked the turning point when IIDS research took off. This trend toward more metrics contributes to higher comparability in the research domain and hints at in-depth evaluations. However, there also exist 157 publications that evaluate without any quantitative metrics and instead rely only on textual descriptions, e.g., elaborating on which attack scenarios were detected or discussing results visually along graphs. Note that textual descriptions cannot be aggregated into a unified class as they differ significantly, i.e., two publications using textual descriptions hardly describe the same feature.

In contrast to dataset utilization (cf. Fig. 2), metric utilization fluctuates less over time. One notable trend, again starting around 2013, is that accuracy, precision, recall, and F1, i.e., the classical point-based metrics, have established themselves as metrics with high usage, representing 63.1% of all used metrics. At the same time, out of the 348 publications utilizing one of these four metrics, only 81 state all four. Thus, their usage is inconsistent, and most publications only focus on certain aspects of their expressiveness.

Concerning all point-based metrics, which account for 93.3% of all metrics, the confusion matrix constitutes an important foundation, as all point-based metrics can be calculated from it (cf. Sec. 5.1.1). However, out of the 57 papers that publish the confusion matrix, just 19 fully state or discuss all four common metrics (accuracy, precision, recall, and F1), even though this would be easily doable. In the 9.4% of publications where the confusion matrix is published, missing metrics can at least be calculated; this is not possible the other way round, i.e., the confusion matrix cannot be computed if, e.g., only F1 scores are indicated. It thus remains questionable why publications omit frequently used metrics when all data to compute them has to be available anyway.
Figure 7: Papers utilizing SWaT and especially the Morris-Gas dataset are dominated by point-based metrics. Time series-aware metrics are slightly more frequent for SWaT.
Even though it has been known since 2014 that point-based metrics may be flawed for industrial IDSs [25], they make up 93.3% of all metrics. As a time series-aware metric, detection delay receives constant but infrequent use, with 48 publications overall. Still, detection delay alone does not quantify the portion of detected attacks and thus likely serves to complement point-based metrics. Newer promising time series-aware metrics have yet to gain traction (only 13 publications use them), despite their added value in interpreting IIDS results.
Evaluations in IIDS research predominantly build upon point-based metrics, which are known to have flaws, especially on time-series datasets as used in IIDS research [25, 33].
§ 5.2.2 METRIC DISTRIBUTION ON DATASETS
In Sec. 4.1.2, we observed the formation of obvious clusters in research around publications using the same dataset. Consequently, Fig. 7 depicts the datasets' influence on the chosen metrics. To this end, we pick the two most commonly used datasets, SWaT and Morris-Gas (cf. Sec. 4.2), and compare their metric distributions against all publications.
The top four metrics (accuracy, precision, recall, and F1) play a major role for the SWaT and Morris-Gas datasets too, even more so than across all publications. Recall, for example, is used in 43.0% of all publications but indicated for 79.2% of IIDSs evaluated on the Morris-Gas dataset. The order of usage between them is also similar, i.e., precision is used the most and F1 the least. The only exception is accuracy, which is indicated less often for the SWaT dataset. This difference might be caused by SWaT featuring far fewer attack instances. Another exception is that other point-based metrics (confusion matrix, FPR, TNR, FNR, and NPV) receive greater attention for the Morris-Gas dataset. In contrast, time series-aware metrics are slightly more common for SWaT.
Figure 8: Metrics show strong correlations w.r.t. the combinations in which they occur in publications. E.g., F1 is used in 152 publications, and among them, 82.2% publish precision. Vice versa, 77.2% of the 162 papers with precision also state F1.
Our analysis highlights once again the dominance of point-based metrics, especially for the top two datasets by usage.
§ 5.2.3 METRIC COMBINATIONS
Even though a variety of metrics exists (cf. Sec. 5.1), a single metric usually has to be considered in relation to others. E.g., precision and recall have to be discussed jointly since an IIDS that detects all attacks (high recall score) might do so simply by emitting alerts continuously, which would become visible in a low precision score. Fused metrics like F1 try to remedy this situation but preclude in-depth reasoning afterward as they do not retain the precise original information. According to our SMS, publications state 2.3 metrics on average to shed light on IIDS performance from different perspectives. Consequently, as the last step, we evaluate which metrics are used together.
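A toy calculation (our own numbers, assuming SWaT-like 12% attack samples) illustrates why precision and recall belong together: an "IIDS" that alarms on every sample achieves perfect recall, while precision and F1 reveal the deception.

```python
# An "IIDS" that raises an alarm on every sample of a dataset with 12% attacks.
n, attack_ratio = 100_000, 0.12
tp = int(n * attack_ratio)  # all attack samples are flagged...
fp = n - tp                 # ...but so is every benign sample
fn = 0

recall = tp / (tp + fn)                            # 1.0 -- seemingly perfect
precision = tp / (tp + fp)                         # 0.12 -- exposes the trick
f1 = 2 * precision * recall / (precision + recall)
print(recall, precision, round(f1, 3))             # 1.0 0.12 0.214
```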
To this end, Fig. 8 depicts the occurrence of combinations between the considered metrics. On the diagonal, we enumerate how often each metric is utilized globally, e.g., recall is used 262 times. The remaining cells indicate how often the indication of one metric leads to the usage of another metric.
In total, 152 publications used F1, and 91.4% of these papers also published recall values. This is not surprising since knowledge of the recall is required to calculate F1. Vice versa, however, 262 papers used recall, and only 53.1% of those also published F1 scores. Looking at precision and recall as two complementary metrics, recall is used in 93.8% of the publications that state precision. If recall is stated, only 58.0% also publish precision. While the number of detected attacks (recall) is valuable information, for the 42% of IIDSs not indicating precision, it is unknown whether the IIDS indeed performs better than an IIDS that simply outputs one continuous alarm.
For the popular point-based metrics (within the black rectangle), we observe a strong dependence between them, which is not surprising as these are heavily used (cf. Fig. 6 and Fig. 7). Since many point-based metrics are derived from the confusion matrix (cf. Tab. 3), the confusion matrix likewise has a high correlation with these four. However, it is not guaranteed that they are published reliably, as F1 is contained in only 57.9% of the cases where the confusion matrix is presented. This is in line with our previous observation that of the 57 publications with a confusion matrix, only 19 state all four most often used point-based metrics (cf. Sec. 5.2.1).
Except for the dependency between FNR and FPR, the remaining metrics exhibit few apparent correlations, i.e., publications using them often omit the classical point-based metrics completely. Especially publications taking advantage of newer, time series-aware metrics lack other metrics. While this development makes sense (why indicate flawed metrics when better ones are available?), it makes comparisons to prior work harder.
§ 5.3 EXPRESSIVENESS OF METRICS
Until now, our SoK on evaluations of IIDSs has been based on theoretical observations from the literature, e.g., which datasets and metrics are used. In the following, we extend our analysis beyond a literature mapping study with practical experiments to understand the quantitative impact of metric choices on evaluation outcomes and to derive metrics that offer high expressiveness. To this end, we conduct a comparison study across ten IIDSs from research on two datasets and utilize our evaluation tool (cf. Availability Statement) to compare various metrics. Especially for newer time series-aware metrics, which are more difficult to compute [32, 34], no common library exists thus far. Besides the metrics discussed in the following, the tool provides a total of 18 point-based and 14 time-aware metrics, for which few implementations exist.
§ 5.3.1 EXPERIMENT DESIGN
As we observed in Sec. 4.2, the IIDS research community is governed by two major directions of datasets: network-based datasets such as Morris-Gas [55] and process data datasets such as SWaT [28] containing physical time series data. We aim to cover both types in our evaluation and thereby also cover two important IIDS types from research, namely knowledge- and behavior-based IIDSs (cf. Sec. 2.1). For knowledge-based IIDSs, we examine five supervised machine learning approaches [59, 72] originally evaluated on the Morris-Gas dataset. For behavior-based IIDSs training on process data, we leverage five anomaly detection approaches, with TABOR building on timed automata [48], Seq2SeqNN utilizing neural networks [39], PASAD leveraging singular spectrum analysis [6], SIMPLE implementing minimalistic boundary checks [73], and Invariant mining logical invariant formulas [20]. Contrary to the supervised machine learning approaches on the Morris-Gas dataset, these IIDSs are evaluated on the temporally ordered SWaT dataset, which provides dedicated attack-free training data and testing data including anomalies. As an interesting case for the SWaT dataset, we added an IIDS that randomly emits alerts with a 50% chance.
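The random baseline is straightforward to construct. A sketch under the assumption that one alert decision is made per sample of the test series (the paper's exact implementation may differ):

```python
import numpy as np

num_test_samples = 100_000            # placeholder length of the SWaT test series
rng = np.random.default_rng(seed=42)  # a fixed seed keeps the baseline reproducible
random_alerts = rng.random(num_test_samples) < 0.5  # alarm with 50% probability per sample
```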
§ 5.3.2 METRICS UNDER STUDY
In this study, we focus on the four common point-based metrics accuracy, precision, recall, and F1 (cf. Sec. 5.2) and modern time series-aware variants of them, called enhanced time series-aware precision and recall (eTaPR) [34] (cf. Appx. B.2.3): more precisely, eTaP for precision, eTaR for recall, and eTaF1 for F1 (there is no time series-aware accuracy equivalent). Additionally, we consider the time-aware Affiliation metrics (again expressed as variations of precision, recall, and F1) proposed by Huet et al. [32], which claim to be robust against randomly generated alerts. These metrics, like their point-based counterparts, favor high detection rates but diminish the expressiveness of consecutive alarms if they start too early or overhang beyond the duration of an attack. Furthermore, we examine a variant of F1 that allows weighting precision and recall differently. This may be crucial in industries since cyberattacks are rare compared to normal behavior, preferring high precision over false alarms. The datasets in our study already incorporate this class imbalance, with Morris-Gas containing 22% malicious data, SWaT just 12%, and real deployments likely observing even fewer attacks. We thus additionally examine $F_{0.1}$, weighting precision ten times more than recall. As the last metric, and since there is only one repetition for each attack type in SWaT, we discuss the percentage of detected scenarios (unique attack types).
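This weighting corresponds to the standard $F_\beta$ score, where $\beta < 1$ emphasizes precision. A small sketch (with made-up precision/recall pairs, not values from the study) shows how $F_{0.1}$ separates two IIDSs that F1 considers equal:

```python
def f_beta(precision, recall, beta):
    """F-beta score: beta < 1 weights precision higher, beta > 1 recall."""
    b2 = beta ** 2
    denom = b2 * precision + recall
    return (1 + b2) * precision * recall / denom if denom else 0.0

precise_iids = (0.95, 0.70)    # (precision, recall): few false alarms
sensitive_iids = (0.70, 0.95)  # high coverage, many false alarms
for p, r in (precise_iids, sensitive_iids):
    print(round(f_beta(p, r, beta=1.0), 3), round(f_beta(p, r, beta=0.1), 3))
# F1 is identical (0.806) for both, while F0.1 (0.947 vs. 0.702)
# clearly prefers the precise IIDS.
```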
§ 5.3.3 RESULTS
Point-based. We begin by analyzing the knowledge-based IIDSs on the Morris-Gas dataset in Fig. 9(a). Here, the point-based metrics (accuracy, precision, recall, and F1) coherently judge the IIDSs' performance, i.e., one IIDS is strictly better than another, and only in recall does the ordering between ExtraTrees and DecisionTrees flip. The $F_{0.1}$ variant's judgment is in line with the other metrics, likely due to the high amount of malicious samples (22%) in this dataset. The time series-aware variants also draw a nearly identical picture here. Note that the attack instances of the Morris-Gas dataset correspond to manipulations of individual network packets, and thus temporal effects are minimal. While the IIDSs are good at detecting these attacks, it is unclear whether the attacks themselves are actually comparable to ones observed in real deployments. Overall, the considered metrics coherently judge the IIDSs' performance on the Morris-Gas dataset.
Figure 9: While point-based metrics rate IIDSs' performance consistently on the Morris-Gas dataset (a), they fail to provide a coherent picture on the time-series dataset SWaT (b) and judge IIDSs better than time-aware metrics.
The picture changes for the SWaT dataset comprising time series of physical states (cf. Tab. 1). We additionally depict the raw alerts emitted by the IIDSs over time in Fig. 10. First of all, as depicted in Fig. 9(b), all IIDSs perform well according to accuracy (more than 0.75). Yet, in comparison to all other metrics, accuracy seems to overestimate their capabilities. We attribute this to SWaT's composition comprising 12% attacks (which is more realistic than Morris-Gas with 22%, as attacks are rare in practice): an IIDS that emits no alarms at all would already score an accuracy of 0.88.
Regarding precision and recall, we observe ambiguity. Seq2SeqNN falls far behind the other approaches in recall, which we attribute to a single long attack in SWaT (accounting for 63% of all attack samples) being missed by the approach (cf. Fig. 10). Besides this attack, Seq2SeqNN achieves decent scores as it correctly detects most of the other attacks (cf. detected scenarios). Therefore, point-based metrics overvalue this attack, i.e., no obvious relation exists between the attack's duration and its severity that would justify this effect. In contrast to the Morris-Gas dataset, the $F_{0.1}$ score clearly favors IIDSs with higher precision, and thus TABOR is preferred over the SIMPLE IIDS (even though they are nearly equivalent in F1).
Time series-aware. In general, time series-aware metrics promise to solve these inaccuracies of point-based metrics. In our practical study, all IIDSs perform much worse on the eTa time series-aware variants [34], which might be the case since they have not been designed for this kind of (potentially more valuable) evaluation. Here, the SIMPLE IIDS is now the best-performing approach according to eTaP, eTaF1, and eTaF0.1, as its emitted alerts are precise, i.e., no overshooting as with PASAD or occasional short false alarms as with TABOR and the Invariant IIDS. Yet, contradicting the traditional recall score, Seq2SeqNN now belongs to the best IIDSs in the time-series recall pendant (eTaR), probably because the time-aware metric analyzes alarms consecutively, and thus false negatives of overshooting alarms are not weighted as negatively.
The time-aware Affiliation metrics [32] draw a completely different picture since all IIDSs perform much better. The ratings for precision, F1, and $F_{0.1}$ are mostly consistent, and the IIDSs only differ significantly in terms of affiliation recall. However, a random IIDS, which the metric should consider as the minimum baseline [32], is counterintuitively perceived as a better approach than PASAD and TABOR in the affiliation F1 score. In all other point-based and time-aware metrics, this random IIDS is perceived as the worst approach (except for detected scenarios and recall).
False-positive resistance. For practical deployment, IIDSs with many false positives are unsuitable [18], and thus identifying those in evaluations is crucial. In that regard, while the Invariant IIDS outperforms all other approaches in many metrics, it visually exhibits the least usable behavior (cf. Fig. 10) due to its plentiful but short-lived false alarms. Only in the eTa metrics does it perform badly.
§ 5.3.4 CONCLUSION
Point-based metrics draw a coherent picture for the Morris-Gas dataset, which contains a significant amount of attacks with few temporal effects since single network packets were manipulated. In contrast, authors have to carefully examine their results on the SWaT dataset since, depending on the chosen metric, their IIDS may perform excellently or poorly. These results are in line with Fung et al. [23], who find that time-series metrics are preferable for reconstruction-based IIDSs and point-based scores may be misleading. For the Affiliation metrics by Huet et al. [32], our experiment challenges their results, especially for an IIDS that emits alerts randomly. Thus, a better understanding of how such newer time series-aware metrics have to be interpreted is crucial.
Overall, it is unlikely that a single metric exists that captures all industrial operators' different goals, e.g., preferring few false alarms over detected attacks. IIDSs should be evaluated with different metrics to truly highlight their capabilities, as cherry-picking metrics may lead to misleading results. The $F_{0.1}$ score provides an interesting alternative for more realistic scenarios. Furthermore, visual comparisons exhibit a non-negligible added value to evaluations, too. Lastly, knowledge- and behavior-based IIDSs are hardly comparable today since they are divided by dataset type.
Figure 10: Visualizing alerts side-by-side provides an in-depth view of their distinct alerting behavior. E.g., the underwhelming performance of the Seq2SeqNN IIDS in point-based recall (cf. Fig. 9) can easily be attributed to a single prolonged attack of the SWaT dataset. Note that alerts have been extended to a minimum width of 1 minute for visibility (except for Random).
§ 6 COMMON ISSUES & RECOMMENDATIONS
The huge potential of IIDSs to combat rising threats from cyberattacks against industrial networks is indisputable. Unsurprisingly, our systematic analysis (Sec. 4 and Sec. 5) shows an unbroken and increasing interest in this research field (40.9% average yearly increase between 2013 and 2021), with at least 609 publications investing great efforts in proposing IIDSs, complemented by further work on creating datasets, designing evaluation metrics, as well as surveys and meta-analyses. However, our SMS also unveils and quantifies flaws in this field that hamper scientific progress. Thus, in the following, we synthesize common issues persisting in IIDS evaluations and distill recommendations to move toward more thorough IIDS evaluations.
§ 6.1 COMMON ISSUES IN IIDS EVALUATIONS
Our systematic analysis of the IIDS research field reveals that the current state-of-the-art w.r.t. evaluation methodologies has serious inefficiencies, eventually slowing down the overall progress in securing industrial deployments. Our SMS, covering the body of literature until 2021, enables quantifying these inefficiencies and makes (promising) trends visible, in contrast to previous meta-surveys and experiments on a usually narrower scale (cf. Sec. 2.3). More precisely, we identify three issues (I1-I3) prevalent in evaluations of IIDSs and present them along the results from our SMS in the following.

* I1: Dataset Diversity. We identify a lack of diversity in the datasets used for evaluations. Regarding the utilization of datasets, we find that IIDSs are evaluated on 1.3 datasets on average (cf. Fig. 4), which aligns with the 1.32 datasets on average reported by related work [74]. Notably, the majority (501 publications) considers only a single dataset, despite a significant selection of datasets being publicly available (we identified 35 public datasets in our SMS, and other work lists 23 datasets or 61 industrial testbeds [14]). This large gap between available and utilized datasets raises the question of why many datasets are used only rarely. Possible reasons include datasets being too narrow in scope (e.g., focusing on single attack types), too small (providing only few training or testing samples), difficult to use (e.g., requiring in-depth knowledge of a specific industrial protocol), or simply not widely known among researchers. Lua et al. [49] also find that high-quality datasets are rare. Moreover, for the few publications that evaluate multiple datasets (16.4%), these datasets mostly stem from the same origin (cf. Tab. 2). Thus, IIDS evaluations are mostly confined to a single scenario (dataset) and cover neither the diversity of industrial domains nor communication protocols (cf. Sec. 4.2.2). Consequently, it remains unclear whether IIDSs are applicable outside the narrow scenario they have been evaluated in, making real-world deployments risky and requiring repeated efforts for different scenarios.
* I2: Metrics Ambiguity. The metrics used in evaluations and comparisons pose ambiguity regarding the actual detection performance of IIDSs. Due to the unclear and biased choice of metrics, the actual detection performance of proposed IIDSs often remains unclear, as also claimed by Giraldo et al. [27]. Seemingly promising, we observed an increase in the number of utilized metrics (3.2 per publication on average in 2021) while simultaneously moving away from mere textual descriptions (cf. Sec. 5.2) toward established point-based metrics (cf. Fig. 6). Accuracy, precision, recall, and F1 make up the majority of utilized metrics, as also observed by related work [49]. However, we also encountered a total of 167 flavors of metrics, e.g., subtle variations such as multi-class or weighted scores, which further aggravates metric ambiguity. At the same time, essential metrics, expected to be provided in combination, are often omitted or incomplete in publications, i.e., of the 57 publications providing the confusion matrix, only 19 state accuracy, precision, recall, and F1 in combination (cf. Sec. 5.2.1). Even more severe, precisely these four point-based metrics, making up 63.1% of the metric usage, do not accurately capture the detection performance in time-series scenarios [23] and are skewed toward the detection of long-lasting attacks (cf. Sec. 5.3). While plenty of new metrics [25, 32-34, 44, 69] have been designed that supposedly address these issues, they are rarely used in evaluations (only 13 publications), likely because a broad understanding of their expressiveness is missing. Lastly, as our practical experiments show, no single metric can describe all aspects of an IIDS, and visual comparisons can disprove, e.g., seemingly promising IIDSs.
* I3: Underutilized Comparability. Evaluations of IIDSs do not capitalize on the large potential for comparisons among the vast body of existing research. The low number of comparisons to related work performed by new IIDSs has received criticism before [27]. On average, an IIDS is compared with only 0.5 other proposals, slightly more than observed in previous work (0.38) [74]. Yet, in theory, authors could compare an IIDS to an average of 6.0 other approaches sharing at least one common dataset and metric (cf. Fig. 5). Simultaneously, the current state of the research field leaves researchers large freedom to choose any of the theoretically suitable publications for their comparisons. This situation is further aggravated by the sparse commitment to publishing artifacts (cf. Sec. 4.3.1), which leaves researchers no choice other than to reproduce others' work, e.g., to ultimately conduct comparability studies, a non-trivial task that is prone to failure [17]. Meanwhile, researchers have to rely on public datasets and the expressiveness of metrics, both of which exhibit flaws themselves (cf. I1 and I2). However, proper comparisons are essential to better understand if and how a novel IIDS improves upon existing work and thus to collectively move the research field forward.
§ 6.2 RECOMMENDATIONS FOR IIDS EVALUATIONS
To address these prevalent issues and thus enhance evaluations as well as the applicability of future IIDSs research, we extract key aspects from our systematic analysis and turn them into six actionable and practical recommendations (R1-R6).
Since our recommendations target different parties involved in IIDS research, we address them to (i) researchers designing new detection approaches, evaluating them, and comparing them to the state-of-the-art; (ii) dataset creators recording qualitative datasets or providing simulations and testbeds; and (iii) industrial operators with precise knowledge of the individual needs of ICSs striving to roll out IIDSs in practice.

* R1: Evaluate More and Diverse Datasets. Researchers should use the many readily available datasets to comprehensively evaluate their IIDSs across different industrial domains, communication protocols, and attack types. Using multiple, especially diverse, datasets avoids overfitting [73], boosts generalizability across ICSs, enables insights across multiple domains, and allows assessing the potential efforts required to facilitate (widespread) deployability across industries. For a concise dataset selection, we recommend focusing on publicly available datasets such as those listed in Conti et al.'s [14] comprehensive overview of datasets and testbeds. For evaluations requiring process data, datasets of multiple origins and industrial domains should be used. Likewise, for IIDSs operating on network traffic, generalizing the approach to different industrial protocols should be considered. Moreover, specialized datasets that, e.g., model a single attack type, cover a niche industrial domain, or deploy a rarely used protocol still provide substantial added value when used in combination with other, more general datasets to better understand the capabilities and limitations of an IIDS. Additionally, researchers can consider datasets containing attacks and faults (e.g., the IEC61850SecurityDataset [12]) to evaluate whether their proposed IIDSs can differentiate these kinds of unwanted behavior to facilitate swift and correct reactions by operators to alerts. Lastly, to ease evaluations on a multitude of datasets with potentially varying formats, agreeing on unified data formats, such as IPAL [74], may help lower the burden for researchers.
* R2: Provide High-Quality Datasets. Dataset creators should provide the research community with high-quality and diverse datasets to counteract the current bias toward two major datasets (cf. Tab. 1). To ensure the practical relevance of datasets, they should ideally be generated in close cooperation with industrial partners [57] since otherwise, IIDSs designed upon them risk not being of practical use to industrial operators. Such collaborations, even though costly [7], also allow enriching datasets with properties and demands of actual industrial deployments, e.g., the criticality of an attack, an acceptable delay until which a detection is expected, or documentation of how long the ICS behaves abnormally after an attack until it stabilizes again. Furthermore, research lacks datasets that tackle the needs of all IIDS flavors (cf. Sec. 4.2.2), inhibiting a consolidation of the overall research landscape. For one, only few datasets (Faramondi et al. providing a rare exception [19]) combine network traffic and process data, which is necessary to compare IIDSs that work on these different data types. Moreover, datasets should be designed and created such that they are applicable to both supervised and anomaly-based IIDS training (currently, no corresponding dataset is known to us), e.g., by including repetitions and variations of the same attack, providing sufficiently long samples of benign behavior, and including novel attacks that were not previously trained on to avoid drawing false conclusions [43]. For more concrete advice on how scientific IIDS evaluation datasets should be designed, please refer to the works by Gómez et al. [58] and Mitseva et al. [54].
* R3: Use Standardized and Accessible Metrics. Researchers should carefully consider their use of metrics and rely on both common (flawed) metrics for comparability as well as recent time series-aware metrics (cf. Sec. 5.2) that attempt to mitigate known flaws. In that regard, meta-studies on how metrics fare against each other, as done in Sec. 5.3 and by Huet et al. [32], help understand the expressiveness of evaluations. Ideally, a wide variety of different metrics is used to communicate the performance of newly proposed IIDSs, which would also facilitate comparisons in the future. Especially with the rise of new metrics, and to standardize the evaluation process, researchers should be equipped with adequate tooling to calculate these metrics easily. Our evaluation tool used in Sec. 5.3 and published alongside this paper will greatly help in that regard. To facilitate a sensible choice of metrics and ensure comparability of related IIDS approaches, dataset creators should explicitly define standard evaluation metrics for their datasets, as has been done, e.g., for the HAI dataset [65]. First, fixing metrics a priori ensures the neutrality of evaluations and reduces potential biases in their selection by researchers. More importantly, however, dataset developers know the underlying ICS best, e.g., w.r.t. the impact of false positives or the likelihood of attacks. Often, they are the only people with the necessary expertise to identify the demands of a cybersecurity solution and, thus, the most valuable metrics to benchmark an IIDS in their scenario.
* R4: Facilitate Comparability With Public Artifacts. Researchers should make the artifacts underlying their work publicly available [18], especially IIDS implementations, to facilitate the comparability of IIDS research. If artifacts cannot be provided, e.g., due to licensing issues or private datasets, we recommend that researchers at least release the precise IIDS outputs, e.g., a list of all packets classified as malicious by an IIDS. These outputs, together with the (anonymized) labels of the dataset, suffice to calculate new metrics retrospectively, thus gaining new insights into the IIDS's performance even after publication. Furthermore, publishing an IIDS's alerts when evaluated on a public dataset is also valuable if published alongside its implementation, as getting research code to run and produce the same results independently is often hard work (e.g., due to lacking documentation), especially some years down the road. Such published labels directly avoid the current lock-in to metrics at the time of publication, thus greatly enhancing the comparability of IIDS research. This freedom is especially crucial in the early stages of IIDS research since it is unknown which metrics and evaluation methodologies will eventually gain acceptance. A minimal sketch of this practice follows this list (hypothetical file name and layout): storing the raw per-sample verdicts next to the implementation suffices to recompute any current or future metric.
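```python
import json

# At publication time: persist the IIDS's raw verdicts on a public dataset.
alerts = [0, 0, 1, 1, 0, 1]  # 1 = alarm raised for that sample/packet
with open("iids_alerts.json", "w") as f:
    json.dump({"dataset": "SWaT", "alerts": alerts}, f)

# Years later: anyone can reload the verdicts and, together with the
# dataset's labels, apply a newly proposed metric retrospectively.
with open("iids_alerts.json") as f:
    alerts = json.load(f)["alerts"]
```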
* R5: Strive for Continuous Feedback Loops. All stakeholders should strive for coherence and applicability of IIDS research. Researchers should avoid proposing isolated IIDSs without proving their necessity and should bridge the gaps between related branches for greater coherence [74]. At the same time, meta-surveys that critically review the state-of-the-art have to provide directions regarding which approaches work well in given settings, which datasets and metrics are suitable, and which approaches IIDSs should be compared to. Lastly, a continuous exchange between all stakeholders should be established [57], e.g., in the form of public talks, workshops, or the dissemination of scientific publications. Only then can industrial operators stay informed about recent advancements and likewise keep dataset creators updated to ensure overall research strives for practical applicability. As an initial step in that direction, we provide the artifacts of our broad SMS, which can serve as the foundation for future surveys on more specific topics, such as in-depth analyses of the proposed detection methodologies or the benefits and drawbacks of the wide variety of (newly proposed) evaluation metrics.
* R6: Think Beyond Alerting. Researchers should extend their focus beyond optimal attack detection coverage toward the required actions after IIDS alerts. Such actions may include steps to understand the alert [18, 64], localize the attacker [4], mitigate an attack's damage potential [67], recover the system to a safe state [71], and lastly, perform forensics to learn for the future [37]. Given this chain of tasks operators have to execute, which may include temporal interruptions of the process, it may also be crucial for researchers to consider the costs of (false) alarms emitted by their solutions. While research on the follow-up procedures of IIDS alerts is currently critically underrepresented in the literature, this is partially caused by the secrecy of industrial operators. Sharing detailed information about the operation of real-world ICSs allows researchers to propose valuable and actionable improvements to current processes. Moreover, this information also allows researchers to design suitable methodologies to evaluate the performance of the processes following an alarm. Overall, an IIDS should thus no longer be considered an isolated system; rather, the step from detection to (incident) response should be considered a tightly interlocked process.
§ 7 CONCLUSION
The ongoing digitization of industries and the increasing exposure of ICSs to the Internet are accompanied by a rise in cyberattacks. Consequently, the new research field of industrial intrusion detection, promising to provide an easily deployable solution to uncover even sophisticated attacks, gained traction. In 2021 alone, 130 new detection approaches were proposed.
This SoK presents the first systematic attempt to shed light on this fast-growing research field and how different approaches are evaluated. Our thorough analysis of 609 publications reveals the tremendous efforts invested by the community to protect industrial systems. However, when it comes to evaluating detection approaches, we uncover widespread issues w.r.t. dataset diversity, the ambiguity of metrics, and missed opportunities for comparability, hampering the overall progress of this quickly growing research field. Based on our systematic analysis, we formulate actionable recommendations to overcome these issues and thus bring the entire research domain forward to sustainably and significantly improve the security of (real-world) industrial deployments.