{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:14:03.190888Z"
},
"title": "LightSeq: A High Performance Inference Library for Transformers",
"authors": [
{
"first": "Xiaohui",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "wangxiaohui.neo@bytedance.com"
},
{
"first": "Ying",
"middle": [],
"last": "Xiong",
"suffix": "",
"affiliation": {},
"email": "xiongying.taka@bytedance.com"
},
{
"first": "Yang",
"middle": [],
"last": "Wei",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Transformer, BERT and their variants have achieved great success in natural language processing. Since Transformer models are huge in size, serving these models is a challenge for real industrial applications. In this paper, we propose LightSeq, a highly efficient inference library for models in the Transformer family. LightSeq includes a series of GPU optimization techniques to to streamline the computation of neural layers and to reduce memory footprint. LightSeq can easily import models trained using PyTorch and Tensorflow. Experimental results on machine translation benchmarks show that LightSeq achieves up to 14x speedup compared with TensorFlow and 1.4x compared with FasterTransformer, a concurrent CUDA implementation. The code is available at https://github.com/bytedance/ lightseq.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "Transformer, BERT and their variants have achieved great success in natural language processing. Since Transformer models are huge in size, serving these models is a challenge for real industrial applications. In this paper, we propose LightSeq, a highly efficient inference library for models in the Transformer family. LightSeq includes a series of GPU optimization techniques to to streamline the computation of neural layers and to reduce memory footprint. LightSeq can easily import models trained using PyTorch and Tensorflow. Experimental results on machine translation benchmarks show that LightSeq achieves up to 14x speedup compared with TensorFlow and 1.4x compared with FasterTransformer, a concurrent CUDA implementation. The code is available at https://github.com/bytedance/ lightseq.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sequence processing and generation have been fundamental capabilities for many natural language processing tasks, including machine translation, summarization, language modeling, etc (Luong et al., 2015; Qi et al., 2020; Dai et al., 2019) . In recent years, with the introduction of Transformer model (Vaswani et al., 2017b) , many pre-trained language models such as BERT, GPT, and mRASP have also been widely used in these tasks (Devlin et al., 2019; Radford et al., 2019; Yang et al., 2020; Lin et al., 2020) .",
"cite_spans": [
{
"start": 183,
"end": 203,
"text": "(Luong et al., 2015;",
"ref_id": "BIBREF11"
},
{
"start": 204,
"end": 220,
"text": "Qi et al., 2020;",
"ref_id": "BIBREF12"
},
{
"start": 221,
"end": 238,
"text": "Dai et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 301,
"end": 324,
"text": "(Vaswani et al., 2017b)",
"ref_id": "BIBREF16"
},
{
"start": 431,
"end": 452,
"text": "(Devlin et al., 2019;",
"ref_id": "BIBREF4"
},
{
"start": 453,
"end": 474,
"text": "Radford et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 475,
"end": 493,
"text": "Yang et al., 2020;",
"ref_id": "BIBREF18"
},
{
"start": 494,
"end": 511,
"text": "Lin et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, the parameters of these models become increasingly large, which causes the high latency of inference and brings great challenges to the deployment (Kim and Hassan, 2020) . The current popular inference systems are not necessarily the best choice for the online service of sequence processing problems. First, training frameworks, such as TensorFlow and PyTorch, require accommodating flexible model architectures and backward propagation, which introduce additional memory allocation and extra overhead of using fine-grain kernel functions. Therefore, the direct deployment of the training framework is not able to make full use of the hardware resource. Taking an example of machine translation, the Transformer big model currently takes roughly 2 seconds to translate a sentence, which is unacceptable in both academia and industry (Edunov et al., 2018; Hsu et al., 2020) . Second, current optimizing compilers for deep learning such as TensorFlow XLA (Abadi et al., 2017) , TVM (Chen et al., 2018) and Tensor RT (Vanholder, 2016) are mainly designed for fixed-size inputs. However, most NLP problems enjoy variable-length inputs, which are much more complex and require dynamic memory allocation. Therefore, a high-performance sequence inference library for variable-length inputs is required. There are several concurrent CUDA libraries which share a similar idea with our project, such as Faster-Transformer 1 and TurboTransformers (Fang et al., 2021) .",
"cite_spans": [
{
"start": 156,
"end": 178,
"text": "(Kim and Hassan, 2020)",
"ref_id": "BIBREF9"
},
{
"start": 843,
"end": 864,
"text": "(Edunov et al., 2018;",
"ref_id": "BIBREF5"
},
{
"start": 865,
"end": 882,
"text": "Hsu et al., 2020)",
"ref_id": "BIBREF8"
},
{
"start": 963,
"end": 983,
"text": "(Abadi et al., 2017)",
"ref_id": "BIBREF0"
},
{
"start": 990,
"end": 1009,
"text": "(Chen et al., 2018)",
"ref_id": "BIBREF2"
},
{
"start": 1024,
"end": 1041,
"text": "(Vanholder, 2016)",
"ref_id": "BIBREF14"
},
{
"start": 1446,
"end": 1465,
"text": "(Fang et al., 2021)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We will highlight three innovative features that make LightSeq outperforms similar projects. First, we replace a straightforward combination of finegrained GPU kernel functions in TensorFlow or PyTorch implementations with coarse-grain fused ones, which avoid high time cost introduced by a mass of kernel function launches and GPU memory I/O for intermediate results. As a result, Light-Seq reduces the atomic kernel functions by four times compared with Tensorflow approaches. Second, we specially design a hierarchical auto regressive search method to speed up the auto-regressive search. Third, we propose a dynamic GPU memory reuse strategy. Different from fixed-length inputs, sequence processing tackles the variable-length inputs, which bring difficulty for memory allocation. LightSeq proposes to pre-define the maximal memory for each kernel function and shares the GPU Convenient LightSeq is easy to use, which contains a serving system and efficient CUDA implementations. The popular models, such as BERT, Roberta, GPT, VAEs, MT Transformer, and Speech Transformer can be directly deployed online without code modification. For user-specific architectures, LightSeq supports multiple model reuse, which can be easily adapted with only a few lines of code modification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Transformer-based NLP models mainly consist of two components during inference: the feature calculation layer and the output layer, as shown in Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "LightSeq Approach",
"sec_num": "2"
},
{
"text": "The feature calculation layer is mainly based on self-attention mechanism and feature transformation, which is actually implemented by matrix multiplication and a series of I/O-intensive operations such as element-wise (e.g., reshape) and reduce (e.g., layer normalization).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LightSeq Approach",
"sec_num": "2"
},
{
"text": "The output layer slightly changes in different tasks, such as classification in NLU tasks or search (e.g., beam search) in NLG tasks. This layer is usually composed of the Softmax over vocabulary, probability sorting, cache refreshing, etc., which are essentially I/O-intensive.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LightSeq Approach",
"sec_num": "2"
},
{
"text": "These two components pose challenges for efficient inference:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LightSeq Approach",
"sec_num": "2"
},
{
"text": "\u2022 The fine-grained call of I/O-intensive GPU kernel function brings a huge amount of GPU memory I/O, which becomes the performance bottleneck of feature calculation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LightSeq Approach",
"sec_num": "2"
},
{
"text": "\u2022 Redundant calculations exist due to the fact that we only need a few tokens/labels with the highest probability instead of all in classification or search for the output layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LightSeq Approach",
"sec_num": "2"
},
{
"text": "\u2022 Dynamic shape in variable sequence length and auto-regressive search makes it difficult to achieve memory reuse within or between requests, which leads to a large number of GPU memory allocation during model service.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LightSeq Approach",
"sec_num": "2"
},
{
"text": "LightSeq employs a series of innovative methods to address these challenges to accelerate model development, such as fusion of multiple kernel functions to reduce I/O overhead, hierarchical optimization of search algorithms to erase redundant calculations, and reuse of dynamic GPU memory to avoid run-time allocation. The following is a detailed introduction to these methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LightSeq Approach",
"sec_num": "2"
},
{
"text": "Transformer feature calculation layer needs to be highly optimized since it is ubiquitous in various Figure 1 : The process of sequence to sequence generation using Transformer model with beam search.",
"cite_spans": [],
"ref_spans": [
{
"start": 101,
"end": 109,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Operation Fusion",
"sec_num": "2.1"
},
{
"text": "NLP tasks today. In most deep learning frameworks, such as TensorFlow and PyTorch, it is implemented by a straightforward combination of finegrained kernel functions from standard libraries provided by hardware manufacturers, which introduces high time cost due to a mass of kernel function launches and GPU memory I/O for intermediate results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operation Fusion",
"sec_num": "2.1"
},
{
"text": "Taking layer normalization implemented by Ten-sorFlow as an example, there are still three kernel launches 4 and two intermediate results (mean and variance) even with the help of optimizing compilers like TensorFlow XLA (Abadi et al., 2017) . As a comparison, we can write a custom kernel function dedicated to layer normalization based on the CUDA toolkit, which produces only one kernel launch without intermediate results.",
"cite_spans": [
{
"start": 221,
"end": 241,
"text": "(Abadi et al., 2017)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Operation Fusion",
"sec_num": "2.1"
},
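{
"text": "To make the fusion concrete, the following NumPy sketch shows what a single-pass layer normalization computes. It is only an illustration of the fused computation under simple assumptions; the actual LightSeq implementation is a hand-written CUDA kernel, and the function name and shapes here are chosen for the example.

import numpy as np

def fused_layer_norm(x, gamma, beta, eps=1e-6):
    # An unfused implementation launches separate kernels for the mean,
    # the variance and the final normalization, writing mean and variance
    # to GPU memory in between. A fused kernel performs the whole row-wise
    # computation in one pass, keeping the statistics in registers or
    # shared memory instead of global memory.
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps) * gamma + beta

# Hypothetical usage: hidden states of shape (batch * seq_len, hidden_dim).
hidden = np.random.randn(8 * 32, 512).astype(np.float32)
out = fused_layer_norm(hidden, np.ones(512, np.float32), np.zeros(512, np.float32))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operation Fusion",
"sec_num": "2.1"
},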
{
"text": "LightSeq implements the Transformer feature calculation layer with general matrix multiply (GEMM) provided by cuBLAS 5 and custom kernel functions. The detailed structure is shown in Figure 2 . Combination of fine-grained operations between GEMM operations is fused into one custom kernel function. In consequence, there are only six custom kernel functions and six GEMM in a Transformer encoder layer, which is usually more than four times less than its corresponding implementation in common deep learning frameworks like TensorFlow or PyTorch.",
"cite_spans": [],
"ref_spans": [
{
"start": 183,
"end": 191,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Operation Fusion",
"sec_num": "2.1"
},
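{
"text": "As a rough illustration of this grouping, the sketch below walks through a single-head encoder layer and marks which steps would map to cuBLAS GEMM calls and which fine-grained operations would be folded into custom kernels. The exact grouping in LightSeq may differ; treat the comments as assumptions for the example, not the library's actual kernel layout.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def layer_norm(x, eps=1e-6):
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def encoder_layer(x, w_qkv, w_o, w_1, w_2):
    # x: (seq_len, d). Single head, no biases, for brevity.
    qkv = x @ w_qkv                        # GEMM 1: fused QKV projection
    q, k, v = np.split(qkv, 3, axis=-1)    # fused kernel: reshape/split
    s = q @ k.T                            # GEMM 2: attention scores
    p = softmax(s / np.sqrt(q.shape[-1]))  # fused kernel: scale + softmax
    y = p @ v                              # GEMM 3: attention context
    x = layer_norm(x + y @ w_o)            # GEMM 4, then fused residual + layer norm
    h = np.maximum(x @ w_1, 0.0)           # GEMM 5, then fused activation
    return layer_norm(x + h @ w_2)         # GEMM 6, then fused residual + layer norm

d = 64
x = np.random.randn(16, d).astype(np.float32)
out = encoder_layer(x,
                    np.random.randn(d, 3 * d).astype(np.float32),
                    np.random.randn(d, d).astype(np.float32),
                    np.random.randn(d, 4 * d).astype(np.float32),
                    np.random.randn(4 * d, d).astype(np.float32))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operation Fusion",
"sec_num": "2.1"
},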
{
"text": "LightSeq supports a comprehensive set of output layers, such as sentence-level and token-level classification, perplexity calculation for language mod- 4 Two for reduce mean operations and one for calculation of the final result.",
"cite_spans": [
{
"start": 152,
"end": 153,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "5 https://developer.nvidia.com/cublas els, and auto-regressive search like beam search, diverse beam search and top-k/top-p sampling (Holtzman et al., 2020) . Redundant calculations often exist in these output layers since we only need a few labels/tokens with the highest probability instead of all of them. Auto-regressive search is relatively complicated, and we will discuss it in the next paragraph. For the other types of output layers, we can simply replace Softmax with the probability calculation of token/label with the highest logits, which brings more obvious benefit when the size of vocabulary or labels is large.",
"cite_spans": [
{
"start": 133,
"end": 156,
"text": "(Holtzman et al., 2020)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
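{
"text": "A small sketch of this shortcut is shown below, assuming the usual softmax formulation; the function and variable names are illustrative. The point is that the top label and its probability can be obtained without materializing the full probability distribution.

import numpy as np

def top_label_with_prob(logits):
    # Instead of a full softmax over the vocabulary/label set, pick the
    # highest logit and compute only its probability. The result equals
    # softmax(logits)[best], but no full probability vector is written out.
    best = int(np.argmax(logits))
    shifted = logits - logits[best]       # max-shift for numerical stability
    prob = 1.0 / np.exp(shifted).sum()    # exp(0) / sum(exp(shifted))
    return best, prob

logits = np.random.randn(50000).astype(np.float32)  # e.g. a 50K vocabulary
label, p = top_label_with_prob(logits)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},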
{
"text": "(Figure 2 content: within a Transformer encoder layer, the self-attention block computes Q, K, V = X \u22c5 (W_Q, W_K, W_V), S = Q \u22c5 K, Softmax, Y = S \u22c5 V with reshapes, and Y = Y \u22c5 W_O + b; the feed-forward block ends with Y = Y \u22c5 W_2 + b_2. Operations are grouped into cuBLAS GEMM calls and custom kernels.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Operation Fusion",
"sec_num": "2.1"
},
{
"text": "Auto-regressive search is widely used in machine translation and text generation. LightSeq proposes Hierarchical Auto Regressive Search (HARS) method to erase redundant calculations and parallel computing. Here we take the most used beam search method as an example to intro-duce the proposed HARS method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "In one step of the beam search process, given the logits, we need to perform two calculations over the whole vocabulary:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "1. Compute the conditional probability using Softmax and write the intermediate result into GPU memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "2. Read the intermediate result from GPU memory and select the top-k beams and tokens by sequential probability.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "These two calculations are highly timeconsuming since the vocabulary size is usually in tens of thousands of scales. For example, they account for a latency proportion of 30% in Transformer base models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "In order to reduce the input size of these two calculations, LightSeq introduces a two-stage strategy that is widely employed in the recommended system: retrieve and re-rank.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "Before the probability computation and top-k selection, the retrieve is carried out first. For each beam, we calculate as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "1. Randomly divide logits into k groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "2. Calculate the maximum of group i, denoted as m i 3. Calculate the minimum of m i , denoted as R, which can be regarded as a rough top-k value of logits.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "4. Select logits larger than R and write them into GPU memory.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "The retrieve is co-designed based on GPU characteristics and logits distribution. Hence it is efficient and effective:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "\u2022 Efficient. The retrieve is implemented by one kernel function and can be executed within a dozen instruction cycles.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "\u2022 Effective. After the retrieve, only dozens of candidates were selected.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "After the retrieve, the original two calculations of beam search will be carried out on the small set of candidates, named as Hierarchical Auto Regressive Search. Figure 3 is a detailed illustration of the proposed hierarchical strategy. In the original beam search 1 1 1 2 2 2 3 3 4 4 4 5 5 6 7 8",
"cite_spans": [],
"ref_spans": [
{
"start": 163,
"end": 171,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "Directly sorting Figure 3 : An illustration of the proposed hierarchical strategy. In this case, beam size is 2 and vocabulary size is 8. Each row represents logits in a beam.",
"cite_spans": [],
"ref_spans": [
{
"start": 17,
"end": 25,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
{
"text": "method, we need to compute the probability and select the top-k over the whole vocabulary. However, by hierarchical method, we only need to pick a small set of candidates from each beam and then perform probability computation and top-k selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},
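{
"text": "A minimal NumPy sketch of the retrieve-and-re-rank idea for one decoding step is given below. It follows the four retrieval steps above and then restricts the top-k selection to the retrieved candidates; the grouping, variable names and use of NumPy are assumptions made for illustration, not the actual CUDA kernels.

import numpy as np

def retrieve_and_rerank(logits, k):
    # logits: (beam_size, vocab_size) for one decoding step.
    results = []
    for beam_logits in logits:
        # Retrieve (steps 1-3): split the logits into k groups, take the max
        # of each group, and use the minimum of those maxima as a rough
        # top-k threshold R.
        groups = beam_logits.reshape(k, -1)   # assumes vocab_size % k == 0
        r = groups.max(axis=1).min()
        # Step 4: keep only logits that reach the threshold; in practice only
        # dozens of candidates survive out of tens of thousands.
        keep = np.nonzero(beam_logits >= r)[0]
        # Re-rank: top-k selection now runs on the small candidate set only.
        # Within one beam, ranking by raw logits is equivalent to ranking by
        # softmax probability, since softmax is monotonic.
        cand = beam_logits[keep]
        order = np.argsort(-cand)[:k]
        results.append(list(zip(keep[order].tolist(), cand[order].tolist())))
    return results

beam_logits = np.random.randn(4, 32000).astype(np.float32)  # beam size 4, 32K vocab
print(retrieve_and_rerank(beam_logits, k=4))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hierarchical Auto Regressive Search",
"sec_num": "2.2"
},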
{
"text": "In order to save GPU memory occupancy and avoid allocation of GPU memory during the model serving, LightSeq pre-defines the maximum of dynamic shapes, such as the maximal sequence length. At the start of the service, each intermediate result in the calculation process is allocated GPU memory to its maximum. Besides, GPU memory is shared for non-dependent intermediate results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic GPU Memory Reuse",
"sec_num": "2.3"
},
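{
"text": "The sketch below illustrates the idea of allocating every intermediate buffer at its maximum shape once and then letting two buffers that are never alive at the same time alias the same allocation. It uses NumPy arrays as stand-ins for GPU buffers; the buffer names and sizes are assumptions for the example, not LightSeq's actual memory planner.

import numpy as np

MAX_BATCH, MAX_SEQ, HIDDEN = 64, 256, 512

# Allocated once at service start, sized for the worst case.
# attn_scratch and ffn_scratch are never needed at the same time,
# so they alias the same underlying allocation.
shared_pool = np.empty(MAX_BATCH * MAX_SEQ * 4 * HIDDEN, dtype=np.float16)

def attn_scratch(batch, seq):
    # View into the shared pool shaped for the attention intermediates.
    return shared_pool[: batch * seq * HIDDEN].reshape(batch, seq, HIDDEN)

def ffn_scratch(batch, seq):
    # The same bytes, reused for the feed-forward intermediates.
    return shared_pool[: batch * seq * 4 * HIDDEN].reshape(batch, seq, 4 * HIDDEN)

# Per request, no new allocation happens; only views of the pool are taken.
batch, seq = 16, 64
a = attn_scratch(batch, seq)   # used while computing attention
f = ffn_scratch(batch, seq)    # reuses the same memory afterwards",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic GPU Memory Reuse",
"sec_num": "2.3"
},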
{
"text": "Through this memory reuse strategy, on a T4 graphics card, we can deploy up to 8 Transformer big models 6 at the same time, so as to improve graphics card utilization in low frequency or peakshifting scenarios.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Dynamic GPU Memory Reuse",
"sec_num": "2.3"
},
{
"text": "In this section, we will show the improvements of LightSeq with different GPU hardware and precisions. We first analyze the GPU occupation of LightSeq during inference to investigate if Light-Seq can make full use of GPU resources. Then, we make a fair comparison with TensorFlow, PyTorch, FasterTransformer, and TurboTransformers on machine translation and text generation to show the efficiency of LightSeq. (c) LightSeq with Float32. Figure 4 : Proportion of computation occupation. GEMM is the main indicator and the larger number indicates the higher computation efficiency.",
"cite_spans": [],
"ref_spans": [
{
"start": 437,
"end": 445,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "3"
},
{
"text": "We test the generation performance of LightSeq on two latest NVIDIA inference GPU Tesla P4 and T4, choosing TensorFlow, PyTorch, and Faster-Transformer implementations as a comparison. Another related library, TurboTransformers, mainly focuses on the Transformer encoder and is not powerful enough for text generation. Its speedup for sequence generation compared to TensorFlow is only about 15%, and it only supports Float32 on GPU. Therefore we do not compare with it.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "3.1"
},
{
"text": "The experiments on machine translation are conducted on the popular WMT14 English to German translation tasks. The hyper-parameters setting resembles transformer base model (Vaswani et al., 2017a) . Specifically, we reduce the vocabulary size of both the source language and target language to 50K symbols using the sub-word technique (Bojanowski et al., 2017) .",
"cite_spans": [
{
"start": 173,
"end": 196,
"text": "(Vaswani et al., 2017a)",
"ref_id": "BIBREF15"
},
{
"start": 335,
"end": 360,
"text": "(Bojanowski et al., 2017)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "3.1"
},
{
"text": "The experiments on text generation are conducted on a randomly initialized Transformer model and test dataset. Results of Tensorflow and FasterTransformer are obtained from the scripts in the source code of FasterTransformer. The sequence length is used for limiting the total size in the generation test, and the values for top-k and top-p are the most selected settings in our deployments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Settings",
"sec_num": "3.1"
},
{
"text": "We first analyze the GPU occupation to verify the efficiency of LightSeq. The experiments are conducted on Tesla T4 card with the GPU profiling toolkit. The latency of each module is shown in Figure 4 with both Float16 and Float32 precision. We classify the operation into three categories: GEMM, cache refreshing, and others. GEMM latency is the most important indicator, which shows the proportion of matrix calculations occupying the GPU calculation.",
"cite_spans": [],
"ref_spans": [
{
"start": 192,
"end": 200,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "GPU Occupation of LightSeq",
"sec_num": "3.2"
},
{
"text": "After optimization, we can find that: (a) Top-p = 0.75.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPU Occupation of LightSeq",
"sec_num": "3.2"
},
{
"text": "\u2022 GEMM",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPU Occupation of LightSeq",
"sec_num": "3.2"
},
{
"text": "( 1 , 32 ) (1 , 64 ) (3 2, 32 ) (3 2, 64 ) (1 28 , 32 )( 1 28 87% and 82% respectively for Float16 and Float32, accounting for most of the inference time. However, in the original TensorFlow model, GEMM operations account for only 25%. This shows that beam search optimization has achieved good results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPU Occupation of LightSeq",
"sec_num": "3.2"
},
{
"text": "\u2022 Cast and other operations in TensorFlow are expensive, which launches over 80 different GPU kernels. In LightSeq, we fuse cast operations into weight loading, and other operations into more efficient implementations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPU Occupation of LightSeq",
"sec_num": "3.2"
},
{
"text": "\u2022 The latency of cache refreshing in LightSeq accounts for 6% and 10% respectively, which are not negligible but hard to be optimized further. Possible solutions include reducing the amount of cache, such as reducing the number of decoder layers, reducing cache precision, etc.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPU Occupation of LightSeq",
"sec_num": "3.2"
},
{
"text": "The results demonstrate that LightSeq has been optimized to a disabling extent and greatly increases the speed of inference. Another interesting finding is that Float16 is more efficient than Float32. A possible explanation is that Float16 occupies less memory. Therefore the cache refreshing and memory I/O operations potentially take less time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "GPU Occupation of LightSeq",
"sec_num": "3.2"
},
{
"text": "The comparison between LightSeq, TensorFlow, PyTorch and FasterTransformer are shown in Figure 5 . We group the test set into different buckets according to the sequence length and batch size. For example, the x-axis (a, b) indicates that the batch size is a and the sequence length is b. The y-axis is the speedup compared with TensorFlow baseline. The results provide several interesting findings:",
"cite_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 5",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison on Machine Translation",
"sec_num": "3.3"
},
{
"text": "\u2022 For both LightSeq and FasterTransformer, the speedup gap for smaller batch size or shorter sequence length is much larger.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison on Machine Translation",
"sec_num": "3.3"
},
{
"text": "\u2022 The speedup for T4 is larger than P4. The main reason is that T4 is more powerful than P4 and has much room for improvement.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison on Machine Translation",
"sec_num": "3.3"
},
{
"text": "\u2022 In most cases, LightSeq performs better than FasterTransformer. For larger batch size and longer sequences, the gap increases. While for smaller batch size, FasterTransformer performs better.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison on Machine Translation",
"sec_num": "3.3"
},
{
"text": "\u2022 PyTorch is slightly slower than TensorFlow in P4 and faster in T4, which indicates that LightSeq also greatly outperforms PyTorch in all cases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison on Machine Translation",
"sec_num": "3.3"
},
{
"text": "The findings provide some guidance for optimization work in the future. There is almost no space to accelerate the inference by fusion of noncomputationally intensive operators, especially for small batch size. Future work is recommended to focus on optimizing GEMM operations which account for 80% to 90% of the total computation time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison on Machine Translation",
"sec_num": "3.3"
},
{
"text": "Finally, we compare TurboTransformers with Py-Torch by the translation demo 7 . As of this writing, only decoder layers of MT Transformer in float32 precision is supported, so we only compare the latencies of decoder layers without beam search and cache refreshing. In the final results, TurboTransformers only achieves about 2x speedup for different batch sizes and sequence lengths. So Turbo-Transformers has no comparability with LightSeq in machine translation tasks (As TurboTransformer repo says, \"TurboTransformer will bring 15.9% performance improvements on RTX 2060 GPU. We are still working on decoder model optimization.\").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison on Machine Translation",
"sec_num": "3.3"
},
{
"text": "In the text generation scenario, the sampling strategy is applied to improve the diversity of generation. Among which, top-k and top-p sampling strategies are more popular. Figure 6 shows the performance comparison of Transformer base with top-k/top-p sampling. The values of top-k and top-p are added in the x-axis. The results provide following findings:",
"cite_spans": [],
"ref_spans": [
{
"start": 173,
"end": 181,
"text": "Figure 6",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Comparison on Text Generation",
"sec_num": "3.4"
},
{
"text": "\u2022 In most cases, LightSeq achieves greater speedup than FasterTransformer. Unlike results in machine translation, LightSeq performs better for smaller batch size and shorter sequence, while FasterTransformer performs better for larger batch size and longer sequence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison on Text Generation",
"sec_num": "3.4"
},
{
"text": "\u2022 The speedup in generation tasks are not as large as machine translation. It is mainly because of the lower complexity of sampling methods than beam search, reducing the benefits obtained from operation fusion and HARS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison on Text Generation",
"sec_num": "3.4"
},
{
"text": "In this paper, we address the deployment problem of expensive sequence models and present an efficient inference library LightSeq for sequence processing and generation, reducing the gap between the performance of big models and the requirement of online services. Comparisons with Faster-Transformer show that we perform better in both machine translation and text generation. In future work, we will focus on exploring more techniques to achieve a more significant speedup, including efficient integer-arithmetic-only inference and sparse GEMM computations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "4"
},
{
"text": "As of this writing, we use FasterTransformer v2.1 for comparison.3 we use TurboTransformers for comparison at commit 0eae02ebadc8b816cd9bb71f8955a7e620861cd8",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Under the configuration of 8 batch size, 256 sequence length, 4 beam size and 30000 vocabulary size.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/ TurboNLP/Translate-Demo/tree/ 443e6a46fefbdf64282842b6233a8bd0a22d6aeb",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the colleagues in machine translation service and advertisement service to support our experiments in online environments and apply LightSeq into real-time systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A computational model for tensorflow: an introduction",
"authors": [
{
"first": "Mart\u00edn",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
},
{
"first": "Derek",
"middle": [
"Gordon"
],
"last": "Murray",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 1st ACM SIG-PLAN International Workshop on Machine Learning and Programming Languages",
"volume": "",
"issue": "",
"pages": "1--7",
"other_ids": {
"DOI": [
"10.1145/3088525.3088527"
]
},
"num": null,
"urls": [],
"raw_text": "Mart\u00edn Abadi, Michael Isard, and Derek Gordon Mur- ray. 2017. A computational model for tensorflow: an introduction. In Proceedings of the 1st ACM SIG- PLAN International Workshop on Machine Learning and Programming Languages, MAPL@PLDI 2017, Barcelona, Spain, June 18, 2017, pages 1-7. ACM.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Enriching word vectors with subword information",
"authors": [
{
"first": "Piotr",
"middle": [],
"last": "Bojanowski",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Armand",
"middle": [],
"last": "Joulin",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "5",
"issue": "",
"pages": "135--146",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00051"
]
},
"num": null,
"urls": [],
"raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "TVM: An automated end-to-end optimizing compiler for deep learning",
"authors": [
{
"first": "Tianqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Thierry",
"middle": [],
"last": "Moreau",
"suffix": ""
},
{
"first": "Ziheng",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Lianmin",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Eddie",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Haichen",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Meghan",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Leyuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yuwei",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Luis",
"middle": [],
"last": "Ceze",
"suffix": ""
},
{
"first": "Carlos",
"middle": [],
"last": "Guestrin",
"suffix": ""
},
{
"first": "Arvind",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
}
],
"year": 2018,
"venue": "13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18)",
"volume": "",
"issue": "",
"pages": "578--594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lian- min Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze, Carlos Guestrin, and Arvind Krishnamurthy. 2018. TVM: An automated end-to-end optimizing compiler for deep learning. In 13th USENIX Symposium on Op- erating Systems Design and Implementation (OSDI 18), pages 578-594, Carlsbad, CA. USENIX Asso- ciation.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Transformer-xl: Attentive language models beyond a fixed-length context",
"authors": [
{
"first": "Zihang",
"middle": [],
"last": "Dai",
"suffix": ""
},
{
"first": "Zhilin",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yiming",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jaime",
"middle": [
"G"
],
"last": "Carbonell",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Viet Le",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019",
"volume": "1",
"issue": "",
"pages": "2978--2988",
"other_ids": {
"DOI": [
"10.18653/v1/p19-1285"
]
},
"num": null,
"urls": [],
"raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Car- bonell, Quoc Viet Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the 57th Conference of the Association for Compu- tational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 2978-2988. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/n19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Pa- pers), pages 4171-4186. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Understanding back-translation at scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "489--500",
"other_ids": {
"DOI": [
"10.18653/v1/d18-1045"
]
},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31 -November 4, 2018, pages 489-500. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Turbotransformers: an efficient GPU serving system for transformer models",
"authors": [
{
"first": "Jiarui",
"middle": [],
"last": "Fang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Chengduo",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jie",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2021,
"venue": "PPoPP '21: 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming",
"volume": "",
"issue": "",
"pages": "389--402",
"other_ids": {
"DOI": [
"10.1145/3437801.3441578"
]
},
"num": null,
"urls": [],
"raw_text": "Jiarui Fang, Yang Yu, Chengduo Zhao, and Jie Zhou. 2021. Turbotransformers: an efficient GPU serv- ing system for transformer models. In PPoPP '21: 26th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, Virtual Event, Re- public of Korea, February 27-March 3, 2021, pages 389-402. ACM.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "The curious case of neural text degeneration",
"authors": [
{
"first": "Ari",
"middle": [],
"last": "Holtzman",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Buys",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Du",
"suffix": ""
},
{
"first": "Maxwell",
"middle": [],
"last": "Forbes",
"suffix": ""
},
{
"first": "Yejin",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2020,
"venue": "8th International Conference on Learning Representations",
"volume": "2020",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Efficient inference for neural machine translation",
"authors": [
{
"first": "Yi-Te",
"middle": [],
"last": "Hsu",
"suffix": ""
},
{
"first": "Sarthak",
"middle": [],
"last": "Garg",
"suffix": ""
},
{
"first": "Yi-Hsiu",
"middle": [],
"last": "Liao",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Chatsviorkin",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing",
"volume": "",
"issue": "",
"pages": "48--53",
"other_ids": {
"DOI": [
"10.18653/v1/2020.sustainlp-1.7"
]
},
"num": null,
"urls": [],
"raw_text": "Yi-Te Hsu, Sarthak Garg, Yi-Hsiu Liao, and Ilya Chatsviorkin. 2020. Efficient inference for neural machine translation. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 48-53, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "FastFormers: Highly efficient transformer models for natural language understanding",
"authors": [
{
"first": "Young Jin",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing",
"volume": "",
"issue": "",
"pages": "149--158",
"other_ids": {
"DOI": [
"10.18653/v1/2020.sustainlp-1.20"
]
},
"num": null,
"urls": [],
"raw_text": "Young Jin Kim and Hany Hassan. 2020. FastFormers: Highly efficient transformer models for natural lan- guage understanding. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing, pages 149-158, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Pretraining multilingual neural machine translation by leveraging alignment information",
"authors": [
{
"first": "Zehui",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Jiangtao",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "2020",
"issue": "",
"pages": "2649--2663",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.210"
]
},
"num": null,
"urls": [],
"raw_text": "Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, and Lei Li. 2020. Pre- training multilingual neural machine translation by leveraging alignment information. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, On- line, November 16-20, 2020, pages 2649-2663. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Effective approaches to attention-based neural machine translation",
"authors": [
{
"first": "Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1412--1421",
"other_ids": {
"DOI": [
"10.18653/v1/d15-1166"
]
},
"num": null,
"urls": [],
"raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portu- gal, September 17-21, 2015, pages 1412-1421. The Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training",
"authors": [
{
"first": "Weizhen",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Yan",
"suffix": ""
},
{
"first": "Yeyun",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Dayiheng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Nan",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Jiusheng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Ruofei",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020",
"volume": "",
"issue": "",
"pages": "2401--2410",
"other_ids": {
"DOI": [
"10.18653/v1/2020.findings-emnlp.217"
]
},
"num": null,
"urls": [],
"raw_text": "Weizhen Qi, Yu Yan, Yeyun Gong, Dayiheng Liu, Nan Duan, Jiusheng Chen, Ruofei Zhang, and Ming Zhou. 2020. Prophetnet: Predicting future n-gram for sequence-to-sequence pre-training. In Proceed- ings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, EMNLP 2020, Online Event, 16-20 November 2020, pages 2401-2410. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Language models are unsupervised multitask learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2019,
"venue": "OpenAI Blog",
"volume": "1",
"issue": "8",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Efficient inference with tensorrt",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Vanholder",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han Vanholder. 2016. Efficient inference with tensorrt.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017a. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008. Curran Asso- ciates, Inc.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017b. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 Decem- ber 2017, Long Beach, CA, USA, pages 6000-6010.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Diverse beam search for improved description of complex scenes",
"authors": [
{
"first": "Ashwin",
"middle": [
"K"
],
"last": "Vijayakumar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Cogswell",
"suffix": ""
},
{
"first": "Ramprasaath",
"middle": [
"R"
],
"last": "Selvaraju",
"suffix": ""
},
{
"first": "Qing",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "David",
"middle": [
"J"
],
"last": "Crandall",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)",
"volume": "",
"issue": "",
"pages": "7371--7379",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashwin K. Vijayakumar, Michael Cogswell, Ram- prasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In Proceedings of the Thirty-Second AAAI Confer- ence on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educa- tional Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 7371-7379. AAAI Press.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Towards making the most of BERT in neural machine translation",
"authors": [
{
"first": "Jiacheng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Mingxuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Chengqi",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Weinan",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2020,
"venue": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference",
"volume": "2020",
"issue": "",
"pages": "9378--9385",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiacheng Yang, Mingxuan Wang, Hao Zhou, Chengqi Zhao, Weinan Zhang, Yong Yu, and Lei Li. 2020. Towards making the most of BERT in neural ma- chine translation. In The Thirty-Fourth AAAI Con- ference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial In- telligence, EAAI 2020, New York, NY, USA, Febru- ary 7-12, 2020, pages 9378-9385. AAAI Press.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "The structure of optimized Transformer encoder layers in LightSeq."
},
"FIGREF4": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "Speedup on Transformer with beam search compared with FasterTransformer, TurboTransformers and PyTorch implementation. The baseline is TensorFlow implementation."
},
"FIGREF6": {
"num": null,
"uris": null,
"type_str": "figure",
"text": "T4 speedup on Transformer with sampling compared with FasterTransformer in Float16. Light-Seq outperforms FasterTransformer in most cases."
},
"TABREF0": {
"num": null,
"html": null,
"content": "<table><tr><td>FasterTransformer</td></tr><tr><td>TurboTransformers</td></tr><tr><td>LightSeq</td></tr></table>",
"type_str": "table",
"text": "ModelsDecoding Methods Inference Libraries Transformer GPT VAE BERT Multilingual Beam Search Diverse Beam Search Sampling"
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table><tr><td>memory across non-dependent ones. As a result,</td></tr><tr><td>LightSeq reduces eight times memory allocation</td></tr><tr><td>without loss of inference speed. As a benefit, Light-</td></tr><tr><td>Seq enjoys several advantages:</td></tr><tr><td>Efficient LightSeq shows better inference perfor-</td></tr><tr><td>mance for generation tasks. For example, in</td></tr><tr><td>machine translation benchmarks, LightSeq</td></tr><tr><td>achieves up to 14 times speedup compared</td></tr><tr><td>with TensorFlow and 1.4 times speedup com-</td></tr><tr><td>pared with FasterTransformer.</td></tr><tr><td>Functional LightSeq supports more architecture</td></tr><tr><td>variants, such as BERT, GPT, Transformer,</td></tr><tr><td>and Variational Autoencoders (VAEs). Fur-</td></tr><tr><td>ther, LightSeq provides different search algo-</td></tr><tr><td>rithms, such as beam search, diverse beam</td></tr><tr><td>search and probabilistic sampling (Vijayaku-</td></tr><tr><td>mar et al., 2018). Table 1 shows the functional</td></tr><tr><td>comparison between FasterTransformer 2 , Tur-</td></tr><tr><td>boTransformers 3 , and LightSeq in text gener-</td></tr><tr><td>ation tasks.</td></tr></table>",
"type_str": "table",
"text": "Features for FasterTransformer, TurboTransformers and our proposed LightSeq. LightSeq supports the most features for a comprehensive set of Transformer models."
}
}
}
}