diff --git "a/20240318/2308.13137v3.json" "b/20240318/2308.13137v3.json" new file mode 100644--- /dev/null +++ "b/20240318/2308.13137v3.json" @@ -0,0 +1,692 @@ +{ + "title": "OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models", + "abstract": "Large language models (LLMs) have revolutionized natural language processing tasks. However, their practical deployment is hindered by their immense memory and computation requirements.\nAlthough recent post-training quantization (PTQ) methods are effective in reducing memory footprint and improving the computational efficiency of LLM, they hand-craft quantization parameters, leading to low performance, especially in extremely low-bit quantization.\nTo tackle this issue, we introduce an Omnidirectionally calibrated Quantization (OmniQuant) technique for LLMs, which achieves good performance in diverse quantization settings while maintaining the computational efficiency of PTQ by efficiently optimizing various quantization parameters.\nOmniQuant comprises two innovative components including Learnable Weight Clipping (LWC) and Learnable Equivalent Transformation (LET). LWC modulates the extreme values of weights by optimizing the clipping threshold. Meanwhile, LET tackles activation outliers by shifting the challenge of quantization from activations to weights.\nOperating within a differentiable framework using block-wise error minimization, OmniQuant can optimize the quantization process efficiently for both weight-only and weight-activation quantization.\nFor instance, the LLaMA-2 model family size 7-70B can be processed with OmniQuant on a single A100-40G GPU within 1-16 hours using 128 samples.\nExtensive experiments validate OmniQuant\u2019s superior performance across diverse quantization configurations such as W4A4 (4-bit weight, 4-bit activation), W6A6, W4A16, W3A16, and W2A16. Additionally, OmniQuant demonstrates effectiveness in instruction-tuned models and delivers notable improvements in inference speed and memory reduction on real devices.\nCodes are available at\nhttps://github.com/OpenGVLab/OmniQuant.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Large language models (LLMs) such as GPT-4 (Bubeck et al., 2023 ###reference_b3###) and LLaMA (Touvron et al., 2023a ###reference_b36###), have demonstrated impressive performance across various natural language benchmarks (Hendrycks et al., 2020 ###reference_b17###; Zellers et al., 2019 ###reference_b43###).\nFurthermore, the language understanding capabilities inherent in LLMs can be successfully transferred into multimodal models (Mu et al., 2023 ###reference_b32###; Xu et al., 2023 ###reference_b41###; Zhang et al., 2023a ###reference_b45###; Huang et al., 2024 ###reference_b20###; 2023 ###reference_b19###). Thereby, LLMs can be regarded as precursors to artificial general intelligence (Bubeck et al., 2023 ###reference_b3###).\nHowever, the considerable computational and memory requirements of LLMs pose substantial challenges (Zhang et al., 2023b ###reference_b46###; Hu et al., 2023 ###reference_b18###). 
For instance, the GPT-3 model (Brown et al., 2020 ###reference_b2###) requires 350G of memory to load its parameters in FP16 format, which corresponds to the requirement of at least five A100-80G GPUs for inference.\nThis significant demand for computational resources and associated communication overheads impedes the practical deployment of LLMs in real-world applications.\nQuantization has shown to be promising to mitigate both computational and memory overhead in LLMs. In general, it comes in two types including post-training quantization (PTQ) and quantization-aware training (QAT).\nAlthough QAT can lead to more competitive accuracy than PTQ, it is not practical due to the high training cost because the whole model is trained with the awareness of the quantization process.\nAs a result, PTQ is commonly utilized in existing quantization methods on LLMs.\nFor example, lots of PTQ methods (Frantar et al., 2022 ###reference_b14###; Lin et al., 2023 ###reference_b25###; Dettmers et al., 2023b ###reference_b12###) reduce memory consumption by weight-only quantization which quantizes the weights while maintaining full-precision activation. To further reduce the computational overhead, another line of work (Xiao et al., 2023 ###reference_b40###; Wei et al., 2022 ###reference_b38###; Yuan et al., 2023 ###reference_b42###; Wei et al., 2023 ###reference_b39###; Liu et al., 2023a ###reference_b26###) employs weight-activation quantization which quantizes both weight and activation into low-bit values for the execution of low-bit matrix multiplication.\n###figure_1### Existing quantization methods have demonstrated significant achievements in various scenarios, including W4A16 (i.e. 4-bit weight and 16-bit activation) weight-only quantization such as (Lin et al., 2023 ###reference_b25###; Dettmers et al., 2023b ###reference_b12###; Lee et al., 2023 ###reference_b22###), as well as W8A8 weight-activation quantization (Wei et al., 2023 ###reference_b39###).\nHowever, they usually exhibit significant performance degradation when confronted with low-bit quantization, such as W2A16 and W4A4, as illustrated in Figure 1 ###reference_### (b & c).\nThis performance shortfall in low-bit quantization can be attributed to the fact that these methods (Frantar et al., 2022 ###reference_b14###; Lin et al., 2023 ###reference_b25###; Wei et al., 2023 ###reference_b39###) primarily rely on handcrafted quantization parameters such as migration strength (Xiao et al., 2023 ###reference_b40###) and scaling parameters (Wei et al., 2023 ###reference_b39###), which often leads to lower performance.\nAlthough Quantization-Aware Training (QAT) (Liu et al., 2023b ###reference_b28###) is effective in determining the optimal quantization configurations, it introduces substantial training overhead in both training and data efficiency. It is thus hard to quantize LLMs with QAT-based techniques efficiently such as LLMQAT (Liu et al., 2023b ###reference_b28###). For instance, GPTQ (Frantar et al., 2022 ###reference_b14###), a PTQ approach, can complete the quantization of LLaMA-13B in an hour using 128 samples on a single A100 GPU, while LLM-QAT (Liu et al., 2023b ###reference_b28###) requires 100k samples and hundreds of GPU hours.\nThis leads us to a central question: can we attain the performance of QAT, while maintaining the time and data efficiency of PTQ?\nThis paper introduces a novel quantization technique, OmniQuant, which effectively addresses the above question. 
OmniQuant achieves state-of-the-art performance across various quantization scenarios, particularly in low-bit settings, while preserving the time and data efficiency of PTQ, as illustrated in Figure 1 ###reference_###.\nUnlike Quantization-Aware Training (QAT) (Liu et al., 2023b ###reference_b28###) which involves cumbersome weight optimization, OmniQuant freezes the original full-precision weight and only incorporates a few learnable quantization parameters.\nAs shown in Figure 2 ###reference_###,\nOmniQuant consists of two key components that incorporate different types of learnable quantization parameters, including Learnable Weight Clipping (LWC) and Learnable Equivalent Transformation (LET). Specifically, LWC modulates the extreme values of weights by optimizing the clipping threshold. In the meanwhile, LET tackles activation outliers by learning mathematically equivalent transformations in a transformer encoder.\nInstead of jointly optimizing all parameters across the LLM, OmniQuant sequentially quantizes the parameters of one layer before moving on to the next under a block-wise quantization error minimization framework.\nIn this way,\nOminiQuant can be optimized efficiently using a simple Stochastic Gradient Descent (SGD) algorithm.\nThanks to the differentiable optimization, LWC and LET can be seamlessly integrated into the quantization.\nWe find that LWC can mitigate the difficulty in quantizing weights and LET further shifts the challenge of quantization from activations to weights, facilitating OmniQuant a versatile quantization framework for both weight-only and weight-activation quantization.\nNotably, OmniQuant introduces no extra computation or parameters for the quantized model because the clipping threshold in LWC and equivalent factors in LET can be fused into quantized weights.\n###figure_2### As depicted in Figure 2 ###reference_###, OmniQuant is easy to implement even with limited resources. Especially, taking the LLaMA-2 model family (7B-70B) as an example, all models can be quantized on a single A100-40G GPU utilizing only 128 training samples. The training time ranges from 1 to 16 hours, depending on the size of the quantized model, which ranges from 7B to 70B.\nOwing to the seamless integration of LWC and LET achieved by differentiable optimization, OmniQuant exhibits superior performance compared to prior PTQ-based methods in various quantization settings. For example, when LLaMA-13B is quantized into W2A16, OmniQuant achieves a perplexity of , while GPTQ incurs a significant increase in perplexity to , as demonstrated in Figure 1 ###reference_###. A similar performance advancement is also observed in the W4A4 quantization.\nThe contributions of OmniQuant are summarized as follows. 1) We formulate a novel quantization pipeline for LLM, OmniQuant, which freezes original full-precision weights while incorporating a restrained set of learnable parameters. OmniQuant imbues quantization with gradient updates while preserving the time and data efficiency of PTQ methods. 2) OmniQuant consists of Learnable Weight Clipping (LWC) and Learnable Equivalent Transformation (LET). These strategies make full-precision weights and activations more amenable to quantization.\n3) Through extensive experiments, we demonstrate that OmniQuant outperforms previous methods across a spectrum of quantization settings (W416, W3A16, W2A16, W6A6, W4A4), various model families (OPT, LLaMA, LLaMA-2, LLaMA-2-chat, Falcon), and a range of model sizes (125M-180B). 
The computation speedup and memory reduction of OmniQuant are also demonstrated on real devices." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Quantization Methods.", + "text": "Quantization reduces neural network bit-precision, leading to smaller models and faster inference. Current methods are largely divided into Quantization-Aware Training (QAT) (Liu et al., 2023b ###reference_b28###) and Post-training Quantization (PTQ) (Xiao et al., 2023 ###reference_b40###; Frantar et al., 2022 ###reference_b14###). While QAT maintains performance by simulating quantization during training, its training cost makes it unsuitable for LLMs. PTQ techniques like AdaRound (Nagel et al., 2020 ###reference_b33###) and BRECQ (Li et al., 2021 ###reference_b23###) use gradient optimization to determine optimal rounding, but tuning all weights is time-intensive for larger models. Thus, most LLM quantization methods (Xiao et al., 2023 ###reference_b40###; Frantar et al., 2022 ###reference_b14###; Dettmers et al., 2023b ###reference_b12###; Lee et al., 2023 ###reference_b22###; Wei et al., 2023 ###reference_b39###) prioritize training-free PTQ, which limits performance in lower-bit settings. Our goal is to integrate gradient updates in LLM quantization, mirroring QAT\u2019s approach, while retaining PTQ\u2019s efficiency." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Quantization of LLM.", + "text": "Based on the quantized objects, existing LLM quantization methods can be classified into two fields: weight-only quantization and weight-activation quantization.\nWeight-only quantization. Weight-only quantization focuses on converting weights to low-bit values. For instance, GPTQ (Frantar et al., 2022 ###reference_b14###) uses block-wise reconstruction for 3/4-bit quantization. SpQR (Dettmers et al., 2023b ###reference_b12###), OWQ (Lee et al., 2023 ###reference_b22###), and AWQ (Lin et al., 2023 ###reference_b25###) emphasize the significance of weights tied to higher-magnitude activations. Therefore, SpQR and OWQ employ mixed-precision quantization to safeguard vital weights, while AWQ opts for channel-wise scaling to avoid mixed-precision\u2019s hardware inefficiency.\nQlora (Dettmers et al., 2023a ###reference_b11###) and INT2.1 (Chee et al., 2023 ###reference_b4###) restore the capabilities of the quantized model through parameter-efficient fine-tuning. Our method, in contrast, enhances the quantization process directly, making OmniQuant complementary to Qlora and INT2.1.\nWeight-activation quantization. Weight-activation quantization compresses both weights and activations. SmoothQuant (Xiao et al., 2023 ###reference_b40###), LLM.int8() (Dettmers et al., 2022 ###reference_b10###), and Outlier Suppression (Wei et al., 2022 ###reference_b38###) achieve W8A8 quantization by managing activation outliers. LLM.int8() uses mixed-precision decomposition, while the other two employ channel-wise scaling. Furthermore, Outlier Suppression+ (Wei et al., 2023 ###reference_b39###) adds channel-wise shifting to drive W6A6 quantization. Unlike these heuristic designs, we use gradient optimization and extend equivalent transformations to the attention mechanism, further improving K/V cache quantization.\nRecently, RPTQ (Yuan et al., 2023 ###reference_b42###) and LLM-QAT (Liu et al., 2023b ###reference_b28###) have achieved W4A4 quantization.
However, RPTQ adopts deployment-unfriendly group-wise activation quantization, and LLM-QAT employs time-consuming QAT. In contrast to RPTQ and LLM-QAT, we achieve W4A4 quantization through deployment-friendly per-token quantization while maintaining PTQ efficiency.\n###figure_3###" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "OmniQuant", + "text": "Challenge of LLM quantization. Two main difficulties lie in quantizing an LLM. First, the activations are hard to quantize due to the existence of outlier channels. Considering that the weight distribution is flat and uniform, SmoothQuant (Xiao et al., 2023 ###reference_b40###) and Outlier Suppression+ (Wei et al., 2023 ###reference_b39###) tackle this issue by migrating the quantization difficulty from activations to weights with a pre-defined migration strength or grid-search-based optimization. Second, the quantization error of weights also plays a pivotal role in the final performance due to the importance of weights corresponding to activations. SpQR (Dettmers et al., 2023b ###reference_b12###) and OWQ (Lee et al., 2023 ###reference_b22###) propose to retain crucial weights in full precision, while AWQ (Lin et al., 2023 ###reference_b25###) safeguards these weights using grid-searched channel-wise scaling. Although these methods have achieved certain success in compressing various LLMs, they often lead to suboptimal performance and fail to deal with extremely low-bit quantization due to the crude design of hand-crafted quantization parameters such as migration strength and scaling factors.\nIn this section, we introduce a differentiable quantization technique for LLMs called OmniQuant, in which the quantization parameters are learned with better flexibility. Towards this goal, OmniQuant is implemented with a block-wise quantization error minimization framework as presented in Sec. 3.1 ###reference_###. To tackle the aforementioned challenges of LLM quantization, we devise two novel strategies for additional learnable quantization parameters, including a learnable weight clipping (LWC) to mitigate the difficulty in quantizing weights and a learnable equivalent transformation (LET) to further shift the challenge of quantization from activations to weights. We introduce LWC and LET in Sec. 3.2 ###reference_### and Sec. 3.3 ###reference_###, respectively." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Block-wise Quantization Error Minimization", + "text": "Previous PTQ methods with gradient optimization, such as AdaRound (Nagel et al., 2020 ###reference_b33###) and BRECQ (Li et al., 2021 ###reference_b23###), cannot be applied to models with billions of parameters because they are hard to optimize due to the huge solution space.\nInstead of tuning the whole model, we propose a new optimization pipeline with block-wise quantization error minimization where the additional quantization parameters can be optimized in a differentiable manner. We formulate the optimization goal as follows:\n$\arg\min_{\Theta_1,\Theta_2} \left\lVert \mathcal{F}(\mathbf{W}, \mathbf{X}) - \mathcal{F}\big(Q_w(\mathbf{W}; \Theta_1, \Theta_2),\, Q_a(\mathbf{X}; \Theta_2)\big) \right\rVert^2, \qquad (1)$\nwhere $\mathcal{F}$ represents the mapping function for a transformer block in the LLM, $\mathbf{W}$ and $\mathbf{X}$ are the full-precision weight and activation, $Q_w(\cdot)$ and $Q_a(\cdot)$ represent the weight and activation quantizer, respectively, and $\Theta_1$ and $\Theta_2$ are quantization parameters in learnable weight clipping (LWC) and learnable equivalent transformation (LET), respectively.
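For concreteness, a minimal sketch of how the objective in Eqn.(1) can be optimized for one transformer block is given below. This is an illustrative PyTorch-style outline, not the released implementation: the names `block_fp`, `block_quant`, `lwc_params`, and `let_params` are placeholders; the AdamW optimizer with zero weight decay, the 20 calibration epochs, and the 128 cached 2048-token segments follow Sec. 4.1, while the learning rate shown is an assumed value.

```python
import torch
import torch.nn.functional as F

def calibrate_block(block_fp, block_quant, calib_inputs, lwc_params, let_params,
                    epochs=20, lr=1e-3):
    """Block-wise error minimization for Eqn. (1).

    block_fp:     frozen full-precision transformer block, computes F(W, X)
    block_quant:  the same block wrapped with fake-quantizers, computes
                  F(Q_w(W; Theta_1, Theta_2), Q_a(X; Theta_2))
    calib_inputs: cached inputs to this block (e.g., 128 segments of 2048 tokens)
    lwc_params / let_params: the only tensors that receive gradients (Theta_1, Theta_2)
    """
    opt = torch.optim.AdamW(list(lwc_params) + list(let_params), lr=lr, weight_decay=0.0)
    for _ in range(epochs):
        for x in calib_inputs:
            with torch.no_grad():
                target = block_fp(x)          # full-precision block output
            out = block_quant(x)              # output under fake quantization
            loss = F.mse_loss(out, target)    # block-wise quantization error
            opt.zero_grad()
            loss.backward()
            opt.step()
    return lwc_params, let_params

# The model is processed one block at a time: once a block is calibrated, its learned
# clipping and transformation parameters are folded into the weights, the block is
# quantized, and calibration proceeds to the next block.
```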
Block-wise quantization in Eqn.(1 ###reference_###) sequentially quantizes the parameters of one transformer block before moving on to the next.\nBlock-wise minimization in Eqn.(1 ###reference_###) has two advantages. First, equipped with block-wise minimization, OmniQuant can optimize the quantization parameters in LWC and LET jointly, making it capable of handling both weight-only and weight-activation quantization. Second, block-wise minimization is easy to optimize with minimal resource requirements. OmniQuant only needs to determine a small number of quantization parameters, which is easier than optimizing all the weights as in previous PTQ-based methods (Nagel et al., 2020 ###reference_b33###; Li et al., 2021 ###reference_b23###). Empirically, we find that all models from the LLaMA-2 family (Touvron et al., 2023b ###reference_b37###) can be quantized on a single A100-40G GPU utilizing only 128 training samples." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Learnable Weight Clipping", + "text": "OmniQuant employs a module of learnable weight clipping (LWC) to reduce the difficulty of quantizing the weights in an LLM. Similar to previous methods with learnable clipping thresholds (Esser et al., 2019 ###reference_b13###; Liu et al., 2022 ###reference_b27###; Choi et al., 2018 ###reference_b6###), LWC also determines the optimal dynamic range of the weights by optimizing a clipping threshold. However, we find that directly employing prior arts such as PACT (Choi et al., 2018 ###reference_b6###) and LSQ (Esser et al., 2019 ###reference_b13###) in quantization would produce unsatisfactory performance, as demonstrated in Table\u2009A14 ###reference_### in the Appendix.\nInstead of directly learning a clipping threshold as in previous methods (Esser et al., 2019 ###reference_b13###; Choi et al., 2018 ###reference_b6###), LWC optimizes a clipping strength as formulated by\n$\mathbf{W}_q = \mathrm{clamp}\left(\left\lfloor \frac{\mathbf{W}}{h} \right\rceil + z,\, 0,\, 2^N-1\right), \quad h = \frac{\gamma \max(\mathbf{W}) - \beta \min(\mathbf{W})}{2^N - 1}, \quad z = -\left\lfloor \frac{\beta \min(\mathbf{W})}{h} \right\rceil, \qquad (2)$\nwhere $\lfloor\cdot\rceil$ indicates the round operation, $N$ is the target bit number, $\mathbf{W}_q$ and $\mathbf{W}$ denote the quantized and full-precision weights, respectively, $h$ is the normalization factor for weights, and $z$ is the zero-point value. The clamp operation constrains the value within the range of an $N$-bit integer, specifically $[0, 2^N-1]$. In Eqn.(2 ###reference_###), $\gamma$ and $\beta$ are learnable clipping strengths for the upper and the lower bound of weights, respectively. We instantiate $\gamma$ and $\beta$ with the sigmoid function. Hence, $\Theta_1=\{\gamma, \beta\}$ in Eqn.(1 ###reference_###).\nNote that LWC degrades into the vanilla MinMax quantization scheme used in existing works (Xiao et al., 2023 ###reference_b40###; Frantar et al., 2022 ###reference_b14###) when $\gamma=1$ and $\beta=1$.\nBy inheriting the benefits of MinMax quantization, LWC only needs to adjust the clipping strengths to determine an optimal clipping threshold, which reduces the optimization difficulty.\nClipped by an optimal threshold, the original weights become easy to quantize.\nAs indicated by the experiments in Table\u20091 ###reference_###, our proposed learnable weight clipping method significantly outperforms previous weight-only quantization techniques (Frantar et al., 2022 ###reference_b14###; Lin et al., 2023 ###reference_b25###)."
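To make Eqn.(2) concrete, below is a hedged PyTorch-style sketch of an LWC weight quantizer. It is illustrative only: the class name, the initialization of the clipping logits, and the straight-through-estimator details are assumptions rather than the released code.

```python
import torch
import torch.nn as nn

class LWCQuantizer(nn.Module):
    """Fake-quantizes a weight matrix with learnable clipping strengths (Eqn. (2))."""

    def __init__(self, out_features: int, n_bits: int = 3):
        super().__init__()
        self.n_bits = n_bits
        # One clipping strength per output channel (per-channel weight quantization).
        # Initialized so that sigmoid(4.0) ~ 0.98, i.e. close to plain MinMax at the start.
        self.gamma_logit = nn.Parameter(torch.full((out_features, 1), 4.0))
        self.beta_logit = nn.Parameter(torch.full((out_features, 1), 4.0))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        # w: (out_features, in_features); returns de-quantized weights so gradients
        # reach gamma/beta through a straight-through estimator on the rounding.
        gamma = torch.sigmoid(self.gamma_logit)           # clipping strength for max(W)
        beta = torch.sigmoid(self.beta_logit)             # clipping strength for min(W)
        qmax = 2 ** self.n_bits - 1
        w_max = gamma * w.amax(dim=1, keepdim=True)
        w_min = beta * w.amin(dim=1, keepdim=True)
        h = (w_max - w_min).clamp(min=1e-8) / qmax        # normalization factor
        z = (-w_min / h).round()                          # zero-point
        w_div = w / h + z
        w_int = w_div + (w_div.round() - w_div).detach()  # STE: round in fwd, identity in bwd
        return (w_int.clamp(0, qmax) - z) * h             # weights used in the block-wise loss
```

In this sketch, pushing both logits toward large values (so that the clipping strengths approach 1) recovers vanilla MinMax quantization, mirroring the degenerate case noted above.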
+ }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Learnable Equivalent Transformation", + "text": "Other than LWC, which enables quantization-friendly weights by optimizing the clipping threshold, we further reduce the difficulty of weight-activation quantization with a learnable equivalent transformation (LET).\nConsidering that outliers in the activation map are systematic and unique to specific channels, previous methods such as SmoothQuant (Xiao et al., 2023 ###reference_b40###) migrate the difficulty of quantization from activations to weights with a mathematically equivalent transformation. However, they hand-craft the equivalent parameters, leading to suboptimal results.\nThanks to the inclusion of block-wise quantization error minimization, our LET can determine the optimal equivalent parameters in a differentiable way.\nInspired by SmoothQuant (Xiao et al., 2023 ###reference_b40###) and Outlier Suppression+ (Wei et al., 2023 ###reference_b39###), we adopt channel-wise scaling and channel-wise shifting to manipulate the activation distribution, providing an effective solution for the outlier issue.\nSpecifically, we investigate the equivalent transformation across both the linear layer and the attention operation, as illustrated in Figure 3 ###reference_###.\nLinear layer. The linear layer takes an input token sequence $\mathbf{X} \in \mathbb{R}^{T \times C_{in}}$, where $T$ is the token length, and is the multiplication of the weight matrix $\mathbf{W} \in \mathbb{R}^{C_{in} \times C_{out}}$ and bias vector $\mathbf{B} \in \mathbb{R}^{1 \times C_{out}}$. A mathematically equivalent linear layer is expressed as:\n$\mathbf{Y} = \mathbf{X}\mathbf{W} + \mathbf{B} = \underbrace{\left[(\mathbf{X}-\boldsymbol{\delta}) \oslash \mathbf{s}\right]}_{\tilde{\mathbf{X}}} \cdot \underbrace{\left[\mathbf{s} \odot \mathbf{W}\right]}_{\tilde{\mathbf{W}}} + \underbrace{\left[\mathbf{B} + \boldsymbol{\delta}\mathbf{W}\right]}_{\tilde{\mathbf{B}}}, \qquad (3)$\nwhere $\mathbf{Y}$ represents the output, $\mathbf{s}$ and $\boldsymbol{\delta}$ are channel-wise scaling and shifting parameters, respectively, $\tilde{\mathbf{X}}$, $\tilde{\mathbf{W}}$ and $\tilde{\mathbf{B}}$ are the equivalent activation, weight and bias, respectively, and \u2018$\oslash$\u2019 and \u2018$\odot$\u2019 denote elementwise division and multiplication. By Eqn.(3 ###reference_###), the activations are transformed to be quantization-friendly at the cost of increased quantization difficulty in the weights. In this sense, LWC in Sec. 3.2 ###reference_### can improve the performance of weight-activation quantization achieved by LET because it renders the weights quantization-friendly. Finally, we perform quantization on the transformed activations and weights, as given by\n$\mathbf{Y} = Q_a(\tilde{\mathbf{X}})\, Q_w(\tilde{\mathbf{W}}) + \tilde{\mathbf{B}}, \qquad (4)$\nwhere $Q_a$ is the vanilla MinMax quantizer and $Q_w$ is the MinMax quantizer with learnable weight clipping (i.e., our LWC).\nNote that the scaling and shifting parameters in $\tilde{\mathbf{X}}$ can be absorbed into the previous normalization or linear layer, and the scaling factors in $\tilde{\mathbf{W}}$ can be fused into the original linear weight $\mathbf{W}$. Therefore, the equivalent transformation in Eqn.(3 ###reference_###) can effectively reduce quantization errors without introducing additional parameters or costs. We employ this equivalent transformation in all linear layers of the LLM except for the second linear layer of the FFN, as shown in Figure 3 ###reference_###. This may be because the high sparsity of features after the non-linear layer (Liu et al., 2023c ###reference_b29###) leads to unstable gradients when applying learnable equivalent transformations.\nAttention operation. Beyond the linear layer, the attention operation also accounts for a significant proportion of the computation. Additionally, the auto-regressive pattern of LLMs necessitates storing the key-value (KV) cache for each token, which results in substantial memory demands for long sequences. Therefore, we also quantize the $\mathbf{Q}/\mathbf{K}/\mathbf{V}$ matrices into low-bit values in the weight-activation quantization setting. Specifically, the learnable equivalent transformation of the self-attention affinity matrix can be written as:\n$\mathbf{P} = \mathrm{Softmax}(\mathbf{Q}\mathbf{K}^{\top}) = \mathrm{Softmax}\big((\mathbf{Q} \oslash s_a)(s_a \odot \mathbf{K}^{\top})\big), \qquad (5)$\nwhere $s_a$ is the scaling factor in the affinity matrix.
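A minimal sketch of the equivalent transformation in Eqn.(3) for a single linear layer is shown below. It is an assumption-laden illustration rather than the official code: `let_linear` and its argument layout are made up for exposition, and the weight is kept in the (in, out) math convention rather than nn.Linear's (out, in) layout.

```python
import torch

def let_linear(x, weight, bias, scale, shift):
    """Equivalent rewrite: X @ W + B = [(X - shift) / scale] @ [scale * W] + (B + shift @ W).

    x:      (tokens, c_in) activations
    weight: (c_in, c_out)
    bias:   (c_out,)
    scale, shift: (c_in,) channel-wise LET parameters (part of Theta_2 in Eqn. (1))
    """
    x_t = (x - shift) / scale           # outlier channels are flattened before Q_a
    w_t = weight * scale.unsqueeze(1)   # quantization difficulty migrated to weights (handled by LWC)
    b_t = bias + shift @ weight         # shifting folded into the bias
    return x_t, w_t, b_t

# Sanity check of the mathematical equivalence (in full precision):
x = torch.randn(4, 8); w = torch.randn(8, 16); b = torch.randn(16)
s = torch.rand(8) + 0.5; d = torch.randn(8)
x_t, w_t, b_t = let_linear(x, w, b, s, d)
assert torch.allclose(x @ w + b, x_t @ w_t + b_t, atol=1e-5)

# After calibration, `shift` is absorbed into the preceding normalization/linear layer and
# `scale` is fused into the weights, so the quantized model carries no extra parameters or ops.
```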
Similar to Eqn.(4 ###reference_###), the quantized affinity matrix calculation is expressed as . Here we also use MinMax quantization scheme as to quantize matrixes. From Eqn.(4 ###reference_###) and Eqn.(5 ###reference_###) we know that in Eqn.(1 ###reference_###).\nThe channel-wise scaling factors in and , as seen in Eq.(5 ###reference_###), can be absorbed into linear weights of the query and key projection, respectively.\nIt is worth mentioning that the explicit transformation of is omitted as its distribution has already been channel-wise altered by the inverse transformation associated with the output projection linear layer." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Settings", + "text": "Quantization.\nWe experiment with both weight-only and weight-activation quantization. For the former, default settings are INT4/INT3/INT2 per-channel weight quantization. Group-wise weight quantization is represented by \u2018g\u2019, e.g., W3A16g128 means 3-bit weight-only quantization with a 128-group size. In weight-activation quantization, defaults are INT6/INT4 per-channel weight and per-token activation quantization (Dettmers et al., 2022 ###reference_b10###). All intermediate activations are quantized into low-bit, excluding the SoftMax output, kept at full precision due to its long-tail distribution making it unsuitable for uniform quantization.\nTraining The channel-wise scaling factor is initialized with SmoothQuant (Xiao et al., 2023 ###reference_b40###), and the channel-wise shifting factor is initialized using Outlier Suppression+ (Wei et al., 2023 ###reference_b39###). To optimize the learnable parameters, we utilize the AdamW optimizer with zero weight decay. The learning rate for learnable weight clipping and equivalent transformation is set as and , respectively. We employ a calibration dataset consisting of 128 randomly selected 2048-token segments from WikiText2 (Merity et al., 2016 ###reference_b31###). The entire training process is facilitated on a single Nvidia A100 GPU, using a batch size of 1 over 20 epochs, except for W2A16 quantization that leverages 40 epochs.\nFor weight-activation quantization, both learnable weight clipping and equivalent transformation are activated. For weight-only, both are used for OPT, but only the clipping is for LLaMA, as Table A3 ###reference_### shows negligible benefits from the equivalent transformation for LLaMA.\nModels. We test on OPT(125M-66B)(Zhang et al., 2022 ###reference_b44###)), LLaMA(7B-65B) (Touvron et al., 2023a ###reference_b36###), LLaMA-2(7B-70B) (Touvron et al., 2023b ###reference_b37###), Falcon-180B (Penedo et al., 2023 ###reference_b34###), and instruction-tuned LLaMA-2-chat (Touvron et al., 2023b ###reference_b37###) for generalizability. While the main paper highlights the LLaMA results, comprehensive details for other models are available in Sec. A8 ###reference_### of the Appendix.\nEvaluation. Following the previous work (Lin et al., 2023 ###reference_b25###; Frantar et al., 2022 ###reference_b14###), we evaluate quantized models by reporting the perplexity of language generation experiments, specifically on WikiText2 (Merity et al., 2016 ###reference_b31###), PTB (Marcus et al., 1994 ###reference_b30###)), C4 (Raffel et al., 2020 ###reference_b35###). 
Moreover, accuracy is evaluated in zero-shot tasks including PIQA (Bisk et al., 2020 ###reference_b1###), ARC (Clark et al., 2018 ###reference_b8###), BoolQ (Clark et al., 2019 ###reference_b7###), and HellaSwag (Clark et al., 2018 ###reference_b8###). We adhere to the GPTQ (Frantar et al., 2022 ###reference_b14###) settings for language generation experiments, and implement the lm-eval-harness (Gao et al., 2021 ###reference_b16###) for the execution of all zero-shot tasks.\nBaselines. For weight-only quantization, we compare with vanilla round-to-nearest quantization (RTN), GPTQ (Frantar et al., 2022 ###reference_b14###), and AWQ (Lin et al., 2023 ###reference_b25###). For weight-activation quantization, we compare our method with SmoothQuant (Xiao et al., 2023 ###reference_b40###), Outlier Supression + (Wei et al., 2023 ###reference_b39###), RPTQ (Yuan et al., 2023 ###reference_b42###), and the recent QAT method LLM-QAT (Liu et al., 2023b ###reference_b28###).\nNote that we reproduce SmoothQuant and Outlier Suppression+ with per-channel weight quantization and per-token activation quantization for fair comparisons." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Weight-only Quantization Results", + "text": "The results of the LLaMA family can be found in Table 1 ###reference_###, while the results for OPT are presented in the Sec.\u2009A8 ###reference_### of Appendix.\nAs illustrated by the tables, OmniQuant consistently outperforms the prior LLM weight-only quantization method across various LLM families (OPT, LLaMA-1, LLaMA-2) and diverse quantization configurations, including W2A16, W2A16g128, W2A16g64, W3A16, W3A16g128, W4A16, and W4A16g128.\nThese findings suggest OmniQuant\u2019s versatility, being adaptable to a multitude of quantization configurations. For instance, while AWQ (Lin et al., 2023 ###reference_b25###) is particularly effective with group-wise quantization, OmniQuant demonstrates superior performance across both channel-wise and group-wise quantization.\nFurthermore, the performance benefits of OmniQuant become more pronounced as the quantization bit size decreases." + }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Weight-Activation Quantization Results", + "text": "In weight-activation quantization, our main focus lies on W6A6 and W4A4 quantization. We exclude W8A8 quantization as SmoothQuant can nearly achieve lossless W8A8 quantized models when compared with full-precision counterparts.\nThe results of the LLaMA family can be found in Table 2 ###reference_###, while the results for OPT are presented in Table A25 ###reference_### of Appendix.\nTable 2 ###reference_### illustrates the zero-shot task accuracy of LLaMA weight-activation quantization. Notably, OmniQuant markedly enhances the average accuracy by +4.99% +11.80% across various models at W4A4 quantization.\nRemarkably, in the LLaMA-7B, OmniQuant even surpasses the recent QAT method, LLM-QAT (Liu et al., 2023b ###reference_b28###), by an impressive margin of +6.22%. This improvement demonstrates the efficacy of incorporating additional learnable parameters, which proves to be more beneficial than the global weight tuning utilized by QAT." 
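The W4A4 and W6A6 settings above rely on per-channel weight and per-token activation quantization (Sec. 4.1). A minimal sketch of such a dynamic per-token MinMax activation quantizer is given below; the function name and the numerical guard are assumptions for illustration, not the released kernel.

```python
import torch

def quantize_activation_per_token(x: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Asymmetric MinMax fake-quantization with one (scale, zero-point) pair per token.

    x: (..., hidden_dim) activations; statistics are computed on the fly at runtime,
    which keeps the scheme deployment-friendly (no channel reordering or grouping needed).
    """
    qmax = 2 ** n_bits - 1
    x_min = x.amin(dim=-1, keepdim=True)
    x_max = x.amax(dim=-1, keepdim=True)
    scale = (x_max - x_min).clamp(min=1e-8) / qmax
    zero_point = (-x_min / scale).round()
    x_int = (x / scale + zero_point).round().clamp(0, qmax)
    return (x_int - zero_point) * scale   # de-quantized values used during calibration
```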
+ }, + { + "section_id": "4.4", + "parent_section_id": "4", + "section_name": "Quantization of instruction-tuned models", + "text": "To validate the generalization capability of our method, we test the quantization on LLaMA-2-chat (Touvron et al., 2023b ###reference_b37###), an instruction-tuned model for chatbots. Using the GPT-4 evaluation protocol (Chiang et al., 2023 ###reference_b5###), performance is assessed on the Vicuna benchmark (Chiang et al., 2023 ###reference_b5###) comprising 80 questions. To negate position bias (Zheng et al., 2023 ###reference_b47###), each pair is compared in both sequences, totaling 160 trials per comparison.\nFigure 4 ###reference_### compares RTN, AWQ (Lin et al., 2023 ###reference_b25###), and OmniQuant. In LLaMA-2-7b-chat, OmniQuant matches AWQ with a 50% win rate but surpasses RTN more (80.3% vs. 69.4%). In LLaMA-2-13b-chat, while AWQ lags behind RTN, OmniQuant consistently improves quantization model performance.\n###figure_4###" + }, + { + "section_id": "4.5", + "parent_section_id": "4", + "section_name": "Acceleration on Real Device", + "text": "MLC-LLM\u2020\u2020\u2020https://github.com/mlc-ai/mlc-llm provides a versatile deployment solution for diverse language models across various hardwares. It particularly excels in deploying quantized models on CUDA.\nOne of OmniQuant\u2019s strengths lies in its ability to avoid extra operations for quantized models, allowing MLC-LLM to seamlessly run models created with OmniQuant.\nTable,3 ###reference_### shows memory requirements and inference speeds of the LLaMA family on an NVIDIA A100-80G. \u2019Weights Memory (WM)\u2019 represents quantized weight storage, and \u2019Running Memory (RM)\u2019 indicates the memory for inference, with the latter being higher due to certain retained activations. Inference speed is gauged by generating 512 tokens.\nIt is evident that quantized models significantly reduce memory usage compared to 16-bit full-precision models. For instance, models with W4A16g128 and W2A16g128 quantization almost double the inference speed. However, MLC-LLM\u2019s support for INT3/INT2 is currently suboptimal, particularly for INT3. Enhancements to INT3/INT2 quantization speed are in our future roadmap.\nAdditionally, we only explore the deployment of weight-only quantization in this study due to that W4A4 and W6A6 quantization methods lack out-of-the-box hardware support." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We present OmniQuant, a method advancing weight-only and weight-activation quantization to low-bit formats. OmniQuant\u2019s core principle is to retain original full-precision weights while adding learnable parameters. It uses learnable weight clipping and learnable equivalent transformation to optimize weight and activation for quantization. While incorporating gradient updates, OmniQuant maintains training efficiency comparable to existing PTQ methods. It outperforms current methods in language generation and zero-shot tasks and is suited for instruction-tuned LLMs. In addition, OmniQuant also ensures hardware compatibility as its added parameters can be absorbed." 
+ } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A1 Overall algorithm", + "text": "The comprehensive training algorithm of OmniQuant is illustrated in Algorithm\u20091 ###reference_###.\nWe employ a block-wise calibration strategy comprising three steps: initialization of learnable parameters (Lines 4-5), training these learnable parameters (Lines 6-15), transforming the model with learned parameters, and then quantization(Lines 16-18).\nThe OmniQuant algorithm finds the optimal transformation to enhance the quantization compatibility of the LLM model. Additionally, due to the elegant design, OmniQuant can achieve rapid convergence using a small calibration dataset.\nInput: calibration dataset , pre-trained LLM model \nOutput: quantized model." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix A2 Distinction of existing equivalent transformation methods", + "text": "Equivalent transformation is popular in the quantization of large language models. In this section, we summarize the distinction of proposed OmniQuant with existing equivalent transformation works, including SmoothQuant (Xiao et al., 2023 ###reference_b40###), AWQ Lin et al. (2023 ###reference_b25###), Outlier Supression (OP+)+ Wei et al. (2023 ###reference_b39###). As shown in Table A1 ###reference_###:\nFor the equivalent transformation operation, both SmoothQuant and AWQ only consider channel-wise scaling operation, while OP+ and OmniQuant consider both channel-wise scaling and shifting operation.\nFor the execution position, previous methods only carry equivalent transformation on linear layers (Eq.(4 ###reference_###)), while OmniQuant also considers the matrix multiplication within attention (Eq.(5 ###reference_###)). This point enlarges the solution space of equivalent transformation and facilitates the quantization of and .\nFor the manners to obtain parameters of equivalent transformation, SmoothQuant leverage pre-defined migration strength. Then, AWQ and OP+ introduce grid searching based on some heuristic proxy. However, OmniQuant optimized all equivalent transformation parameters through end-to-end gradient descent, which significantly improve the performance.\nFor the application scenario, previous methods are designed for weight-only quantization or weight-activation quantization. However, because of the elegant design and cooperation of the proposed LWC and LET, OmniQuant can achieve excel in both weight-only quantization and weight-activation quantization." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix A3 Ablation studies", + "text": "Combination of equivalent transformation and weight clipping. The synergy between LET and LWC is achieved through a sophisticated differentiable framework as demonstrated in Algorithm 1 ###reference_###, not a simple additive combination. LET performs activation-to-weight migration, and LWC further facilitates the quantization of weights, resulting in a seamless integration of the two techniques. In Table A2 ###reference_###, we also test other combination variants, including replacing LET with SmoothQuant or replacing LWC with grid-searched weight clipping. The results show that training LET and LWC simultaneously achieves the best performance.\nEfficacy of each component. Table A3 ###reference_### reveals that the baseline model incorporates both LWC and LET, labeled as \u2019LWC+LET\u2019. 
We further investigate their contributions by removing each component.\nBoth components positively influence performance, but LET proves essential for weight-activation quantization. Disabling it for W4A4 results in a marked increase in perplexity to , mainly due to challenges with activation quantization outliers.\nFor weight-only quantization, LET significantly boosts OPT\u2019s performance but offers a slight enhancement for LLaMA, explained by LLaMA\u2019s few weight outliers. For example, in naive W3A16 quantization (-LWC-LET), LLaMA reaches a perplexity of 10.68, while OPT\u2019s spikes to .\nConsequently, LET is turned off for LLaMA in weight-only quantization given its limited advantage for faster training.\nDesign choices of learnable equivalent transformation.\nIn comparison to the equivalent transformation incorporated in SmoothQuant (Xiao et al. (2023 ###reference_b40###)), our approach additionally implements channel-wise shifting and attention transformation. The effects of these innovations are evaluated in Table A4 ###reference_###. We can observe that both modifications enhance the performance of weight-activation quantization. However, the incremental benefit of the equivalent transformation in the attention operation is comparatively minor. This discrepancy is primarily due to the majority of outliers existing in the output of the normalization layer while being less prevalent in the matrix.\nImpact of LET on each position. We exclude the LET of the second linear layer due to the high sparsity of features after the non-linear layer leads to unstable gradients. Therefore, we have four LET pairs, represented as [ln1, (q_proj, k_proj, v_proj)], [v_proj, out_proj], [Q, K], and [ln2, fc1]. As shown in Table A5 ###reference_###, we can find that all four LETs can improve the performance, specially for the [ln1, (q_proj, k_proj, v_proj)] pair. Such results also demonstrate that the activation outliers are more serious after layer normalization layers.\nImpact of initialization of LET. We initialize the channel-wise scaling factor with SmoothQuant Xiao et al. (2023 ###reference_b40###), and initialize the channel-wise shifting with Outlier Suppression+ Wei et al. (2023 ###reference_b39###). To validate the impact of careful initialization, we try to initial scaling as 1 and initial shifting as 0. As shown in Table A6 ###reference_###, we can find that careful initialization of scaling and shifting can improve the final performance. Specifically, scaling initialization is more important than shifting, since scaling plays the main role in alleviating outliers.\nImpact of Softmax quantization. The output of SoftMax has a long-tailed distribution, making it unsuitable for uniform quantization. We carry out experiments to quantize the Softmax output into different bit numbers. As shown in the following table, we can find that quantizing the output of softmax into 8-bit and 6-bit bring acceptable performance degeneration, which demonstrates that block-wise calibration can compensate for the loss of 8-bit and 6-bit Softmax quantization. However, 4-bit Softmax quantization brings significantly performance loss, which requires further exploration and additional trick such as log2 quantization in RepQViT (Li et al., 2023 ###reference_b24###). Note that we keep the output of SoftMax as 16-bit if no special instruction.\nImpact of iterative training. In our approach, LWC and LET are trained simultaneously, and we have also explored an iterative training approach by iterations or epochs. 
The results, as presented in Table A8 ###reference_###, clearly indicate that training LWC and LET simultaneously yields the best performance. This experiment demonstrates that the synergy between LET and LWC creates a progressive process, where both techniques reinforce each other rather than interfere. To further support this statement, we conducted an additional experiment (last row in Table A8 ###reference_###), training LWC and LET iteratively with double training epochs. The results show that simultaneous training with 20 epochs achieves comparable performance to iterative training with 40 epochs. This demonstrates the effectiveness and efficiency of training LWC and LET simultaneously.\nTraining Time\nAs illustrated in Table A9 ###reference_###, LLaMA-7B was trained across various epochs to determine the optimal convergence time. Most quantization configurations converge within 20 epochs, with the exception of W2A16, which necessitates 80 epochs. Consequently, we establish a training epoch of 20 for all configurations, except for W2A16, for which we set it to 40 in consideration of the training time.\nCalibration Data\nOmniQuant utilizes gradient optimization on constrained calibration datasets, sourced from WikiText2 and comprising 128 segments with 2048 tokens each. This prompts concerns about potential overfitting to the calibration dataset. To explore this, we evaluated the calibration dataset\u2019s influence using two other datasets: Pile (Gao et al. (2020 ###reference_b15###)) and c4 (Raffel et al. (2020 ###reference_b35###)).\nAs depicted in Table A10 ###reference_###, the variance in perplexity across diverse calibration datasets is marginal, fluctuating between 0.0006 and 0.17. This underscores OmniQuant\u2019s robustness concerning calibration set distribution.\nFurthermore, the data efficiency of OmniQuant was gauged by modulating the number of training samples, as presented in Table A11 ###reference_###. Remarkably, OmniQuant converges with as few as 16 samples. Our selection of 128 samples aligns with established practices in prior works (Frantar et al. (2022 ###reference_b14###); Lin et al. (2023 ###reference_b25###))." + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix A4 Training Time", + "text": "As shown in Table\u2009A12 ###reference_###, we report the training time of the proposed OmniQuant within the LLaMA family. Note that for LLaMA, we only activate learnable weight clipping for weight-only quantization. Therefore, the training time for weight-only quantization is shorter relative to weight-activation quantization, given the fewer learnable parameters involved. While our proposed method necessitates a training time that is approximately 5 greater than GPTQ, it remains markedly faster than QAT methods, which demand hundreds of GPU hours.\n###figure_5###" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix A5 Performance Analysis", + "text": "In this section, we investigate the internal mechanism of learnable weight clipping and learnable equivalent transformation respectively. Further, we show that with OmniQuant, 3-bit and 4-bit achieve similar trade-off between model bits and perplexity.\nLearnable weight clipping.\nIn addition to perplexity and accuracy, the quality of a quantization method can intuitively be evaluated by calculating the distance between quantized models and their full-precision counterparts. 
This is demonstrated in Table A13 ###reference_###, where we detail the distance of weights and activations for LLaMA-7B\u2019s weight-only quantization.\nWe can observe that the proposed Learned Weight Clipping (LWC) substantially decreases the distance for both weights and activations. It\u2019s noteworthy that, in certain instances, the distance for quantized models without LWC is similar to that of those utilizing LWC. However, models incorporating LWC exhibit markedly lower activation distances.\nThis observation underpins the argument that LWC can effectively balance quantization precision between outlier and regular values.\nAdditionally, we illustrate the distribution of the learned clipping scale ( and ) as delineated in Eq. (2 ###reference_###) in Figure A1 ###reference_###.\nIt is apparent that LWC can learn different clippings for diverse quantization configurations. For instance, with per-channel weight quantization W3A16 as depicted in Figure A1 ###reference_###(a), the learned clipping scale showcases a normal distribution. This suggests that approximately half of the outliers are being clipped.\nIn the case of group-wise quantization, the learned clipping scale exhibits a long-tailed distribution, implying that most quantized groups are associated with minimal clipping. Note that lower bits exhibit more pronounced clipping. For example, W2A16g128 possesses a 50% clipping scale larger than 0.95, whereas, in W3A16g128, this percentage rises to 70%.\n###figure_6### Learnable equivalent transformation.\nFigure A2 ###reference_### provides visualizations of the intermediate activation in the linear layer. It is apparent that several outlier channels in the original activation (Figure A2 ###reference_###(a)) possess significantly larger magnitudes compared to the regular channels, thereby creating an incompatibility with activation quantization.\nAlthough SmoothQuant mitigates this issue to some degree, such as reducing the outlier magnitude from 70 to 2, Figure A2 ###reference_###(b) reveals that the magnitude of outlier channels still remains notably larger than that of other regular channels after SmoothQuant. This phenomenon can be attributed to SmoothQuant\u2019s heuristic approach in deriving channel-wise scaling, which inevitably makes it challenging to discover an optimal solution.\nThe impact of the proposed LET is depicted in Figure A2 ###reference_###(c). It is noteworthy that the magnitude disparity between the outlier and regular channels is markedly diminished. This homogenization of the activation distribution, facilitated by the LET, empowers OmniQuant to efficiently steer the weight-activation quantization towards a low-bit scheme.\n###figure_7### Quantization error.\nOmniQuant is the first differentiable post-training quantization algorithm for large language models. To demonstrate the advantage of gradient-based optimization, we also compare the quantization error of each block in Figure A3 ###reference_###. We can find that OmniQuant significantly reduces the quantization loss compared with the grid-searching based method such as AWQ Lin et al. (2023 ###reference_b25###) and Outlier Suppression + (Wei et al., 2023 ###reference_b39###).\n###figure_8### Scaling laws.\nQuantization serves as a potent strategy to curtail the total model bits, thereby facilitating the deployment of LLMs on edge or consumer devices with restricted memory. 
However, the total model bits are contingent on both the number of parameters within the original model and the quantization bits. Therefore, given a model bits constraint, the challenge arises: how does one optimally determine the number of parameters for the full-precision model and the quantization bits?\nTim Dettmers (Dettmers & Zettlemoyer (2023 ###reference_b9###)) demonstrated that 4-bit quantization establishes a universally optimal balance between the total model bits and zero-shot accuracy. Nonetheless, in this study, as shown in Figure\u2009A4 ###reference_###,we would like to claim that OmniQuant can make 3-bit quantization achieve comparable performance like 4-bit quantization in the trade off between model bits and perplexity.\n###figure_9###" + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix A6 Comparisons with clipping-based methods", + "text": "In this paper, we proposed a novel method, learnable weight clipping (LWC), designed to adaptively determine the weight clipping threshold. LWC sets the threshold by scaling the original minimum and maximum values to delineate the solution space. We compare LWC against existing clipping-based methods: PACT and LSQ. While PACT directly determines the clipping threshold, LSQ focuses on the direct derivation of the scaling factor and zero-point. Both PACT and LSQ were initially formulated as QAT methods, accounting for both weight and activation clipping. For an equitable comparison, our examination is restricted to weight clipping. We integrated PACT and LSQ into our optimization pipeline in lieu of LWC.\nTable A14 ###reference_### illustrates that while PACT and LSQ enhance the performance of weight-only quantization compared to MinMax quantization, their efficacy diminishes in the weight-activation quantization setting. This decline can be attributed to the proposed LET during activation quantization, which alters the weight distribution in each training iteration, undermining the convergence of both LSQ and PACT. In contrast, LWC defines relative scaling values instead of absolute metrics, making it proficient in handling changes in weight distribution.\nFor example, Figure A5 ###reference_### shows that LWC can catch the dramatically changing of weights while PACT and LSQ failed." + }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix A7 Comparisons with other weight-only quantization methods", + "text": "OmniQuant is an asymmetrically uniform quantization method. In the main paper, we compare with the same type of quantization methods, such as AWQ and GPTQ. Recently, there are also some other methods exploring for other quantization format. For example, SpQR (Dettmers et al., 2023b ###reference_b12###) and SqueezeLLM (Kim et al., 2023 ###reference_b21###) employ mixed-precision quantization to safeguard vital weights. Furthermore, SqueezeLLM also introduces non-uniform quantization to allocate more bits to sensitive weights. As shown in Table A15 ###reference_###, we can find that OmniQuant can achieve comparable performance to SpQR and SqueezeLLM. While OmniQuant performs slightly worse than SqueezeLLM, our focus on uniform (INT) quantization provides simplicity and flexibility, supporting both weight-only quantization and weight-activation quantization. In contrast, SpQR and SqueezeLLM only support weight-only quantization. We believe this distinction adds valuable context to the comparison." 
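To make the contrast drawn in Appendix A6 concrete, the sketch below compares an absolute learnable clipping range (a simplified PACT/LSQ-style view) with LWC's relative clipping strengths. It is an illustrative simplification, not a faithful reimplementation of either method.

```python
import torch

def absolute_clip_range(alpha):
    # Simplified PACT/LSQ-style view: the clipping range is a free parameter and does not
    # track the weights, so it can lag behind when LET keeps rescaling them each step.
    return -alpha, alpha

def lwc_clip_range(w, gamma_logit, beta_logit):
    # LWC: strengths in (0, 1) rescale the *current* min/max, so the clipping range follows
    # the weight distribution even as the equivalent transformation changes it.
    gamma = torch.sigmoid(gamma_logit)
    beta = torch.sigmoid(beta_logit)
    return beta * w.min(), gamma * w.max()
```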
+ }, + { + "section_id": "Appendix 8", + "parent_section_id": null, + "section_name": "Appendix A8 Full Results", + "text": "In this section, we provide a comprehensive presentation of our results across various datasets to complement the main paper. Specifically, the results include:\nThe perform overview (Figure A6 ###reference_###).\nExperiments results on extreme large model Falcon-180B (Table A18 ###reference_###).\nMMLU results on LLaMa-1-7B (Table A16 ###reference_###).\nAsymmetric bits quantization, including W4A8 on LLaMa-1-7B, W4A6, and W8A4. (Table A17 ###reference_###).\nC4 perplexity with weight-only quantization in the LLaMA families (Table A19 ###reference_###).\nPTB perplexity with weight-only quantization in OPT families (Table A21 ###reference_###).\nC4 perplexity with weight-only quantization in OPT families (Table A22 ###reference_###).\nWikiText2 perplexity for weight-activation quantization in the LLaMA families (Table A23 ###reference_###).\nC4 perplexity for weight-activation quantization in the LLaMA families (Table A24 ###reference_###).\nWikiText2/PTB/C4 perplexity for weight-activation quantization in the LLaMA families (Table A25 ###reference_###).\n###figure_10###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Weight-only quantization results of LLaMA-1 and LLaMA-2 models. We report WikiText2 perplexity in this table; C4 perplexity can be found in Table\u00a0A19 in the Appendix.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nLLaMA1&2 / PPL1-7B1-13B1-30B1-65B2-7B2-13B2-70B
FP16-5.685.094.103.535.474.883.31
\n\nW2A16\nRTN1.1e56.8e42.4e42.2e43.8e45.6e42.0e4
GPTQ2.1e35.5e3499.7555.917.7e32.1e377.95
OmniQuant15.4713.218.717.5837.3717.217.81
\n\nW2A16\ng128\nRTN1.9e3781.2068.0415.084.2e3122.0827.27
GPTQ44.0115.6010.929.5136.7728.14NAN
AWQ2.6e52.8e52.4e57.4e42.2e51.2e5-
OmniQuant9.727.937.125.9511.068.266.55
\n\nW2A16\ng64\nRTN188.32101.8719.209.39431.9726.2210.31
GPTQ22.1010.068.548.3120.8522.44NAN
AWQ2.5e52.7e52.3e57.4e42.1e51.2e5-
OmniQuant8.907.346.595.659.627.566.11
\n\nW3A16\nRTN25.7311.3914.9510.68539.4810.687.52
GPTQ8.066.765.845.068.376.444.82
AWQ11.887.4510.075.2124.0010.45-
OmniQuant6.495.684.744.046.585.583.92
\n\nW3A16\ng128\nRTN7.015.884.874.246.665.513.97
GPTQ6.555.624.804.176.295.423.85
AWQ6.465.514.633.996.245.32-
OmniQuant6.155.444.563.946.035.283.78
\n\nW4A16\nRTN6.435.554.573.876.115.203.67
GPTQ6.135.404.483.835.835.133.58
AWQ6.085.344.393.766.155.12-
OmniQuant5.865.214.253.715.745.023.47
\n\nW4A16\ng128\nRTN5.965.254.233.675.724.983.46
GPTQ5.855.204.233.655.614.983.42
AWQ5.815.204.213.625.624.97-
OmniQuant5.775.174.193.625.584.953.40
\n
", + "capture": "Table 1: Weight-only quantization Results of LLaMA-1 and LLaMA-2 Models. We report WikiText2 perplexity in this table, C4 perplexity can be found in Table\u00a0A19 in Appendix." + }, + "2": { + "table_html": "
\n
Table 2: Weight-activation quantization results of LLaMA models. This table reports the accuracy on 6 zero-shot tasks. Perplexity results can be found in Tables\u00a0A23 &\u00a0A24 in the Appendix.
LLaMA / Acc | #Bits | Method | PIQA | ARC-e | Arc-c | BoolQ | HellaSwag | Winogrande | Avg.
LLaMA-1-7B | FP16 | - | 77.47 | 52.48 | 41.46 | 73.08 | 73.00 | 67.07 | 64.09
LLaMA-1-7B | W6A6 | SmoothQuant | 76.75 | 51.64 | 39.88 | 71.75 | 71.67 | 65.03 | 62.81
LLaMA-1-7B | W6A6 | OS+ | 76.82 | 51.35 | 41.13 | 72.08 | 71.42 | 65.98 | 61.13
LLaMA-1-7B | W6A6 | OmniQuant | 77.09 | 51.89 | 40.87 | 72.53 | 71.61 | 65.03 | 63.17
LLaMA-1-7B | W4A4 | SmoothQuant | 49.80 | 30.40 | 25.80 | 49.10 | 27.40 | 48.00 | 38.41
LLaMA-1-7B | W4A4 | LLM-QAT | 51.50 | 27.90 | 23.90 | 61.30 | 31.10 | 51.90 | 41.27
LLaMA-1-7B | W4A4 | LLM-QAT+SQ | 55.90 | 35.50 | 26.40 | 62.40 | 47.80 | 50.60 | 46.43
LLaMA-1-7B | W4A4 | OS+ | 62.73 | 39.98 | 30.29 | 60.21 | 44.39 | 52.96 | 48.43
LLaMA-1-7B | W4A4 | OmniQuant | 66.15 | 45.20 | 31.14 | 63.51 | 56.44 | 53.43 | 52.65
LLaMA-1-13B | FP16 | - | 79.10 | 59.89 | 44.45 | 68.01 | 76.21 | 70.31 | 66.33
LLaMA-1-13B | W6A6 | SmoothQuant | 77.91 | 56.60 | 42.40 | 64.95 | 75.36 | 69.36 | 64.43
LLaMA-1-13B | W6A6 | OS+ | 78.29 | 56.90 | 43.09 | 66.98 | 75.09 | 69.22 | 64.92
LLaMA-1-13B | W6A6 | OmniQuant | 78.40 | 57.28 | 42.91 | 67.00 | 75.82 | 68.27 | 64.95
LLaMA-1-13B | W4A4 | SmoothQuant | 61.04 | 39.18 | 30.80 | 61.80 | 52.29 | 51.06 | 49.36
LLaMA-1-13B | W4A4 | OS+ | 63.00 | 40.32 | 30.38 | 60.34 | 53.61 | 51.54 | 49.86
LLaMA-1-13B | W4A4 | OmniQuant | 69.69 | 47.39 | 33.10 | 62.84 | 58.96 | 55.80 | 54.37
LLaMA-1-30B | FP16 | - | 80.08 | 58.92 | 45.47 | 68.44 | 79.21 | 72.53 | 67.44
LLaMA-1-30B | W6A6 | SmoothQuant | 77.14 | 57.61 | 42.91 | 65.56 | 78.07 | 69.92 | 65.20
LLaMA-1-30B | W6A6 | OS+ | 80.14 | 58.92 | 45.05 | 68.02 | 77.96 | 71.98 | 67.01
LLaMA-1-30B | W6A6 | OmniQuant | 79.81 | 58.79 | 45.22 | 68.38 | 78.95 | 72.21 | 67.23
LLaMA-1-30B | W4A4 | SmoothQuant | 58.65 | 35.53 | 27.73 | 60.42 | 35.56 | 48.06 | 44.83
LLaMA-1-30B | W4A4 | OS+ | 67.63 | 46.17 | 34.40 | 60.70 | 54.32 | 52.64 | 52.62
LLaMA-1-30B | W4A4 | OmniQuant | 71.21 | 49.45 | 34.47 | 65.33 | 64.65 | 59.19 | 56.63
LLaMA-1-65B | FP16 | - | 80.79 | 58.71 | 46.24 | 82.29 | 80.72 | 77.50 | 71.04
LLaMA-1-65B | W6A6 | SmoothQuant | 80.25 | 57.92 | 45.50 | 80.22 | 80.18 | 74.76 | 69.80
LLaMA-1-65B | W6A6 | OS+ | 79.67 | 55.68 | 45.22 | 80.02 | 78.03 | 73.95 | 68.76
LLaMA-1-65B | W6A6 | OmniQuant | 81.01 | 58.12 | 46.33 | 80.64 | 79.91 | 75.69 | 70.28
LLaMA-1-65B | W4A4 | SmoothQuant | 64.47 | 40.44 | 29.82 | 59.38 | 39.90 | 52.24 | 47.71
LLaMA-1-65B | W4A4 | OS+ | 68.06 | 43.98 | 35.32 | 62.75 | 50.73 | 54.30 | 52.52
LLaMA-1-65B | W4A4 | OmniQuant | 71.81 | 48.02 | 35.92 | 73.27 | 66.81 | 59.51 | 59.22
\n
", + "capture": "Table 2: Weight-activation quantization results of LLaMA Models. This table reports the accuracy of 6 zero-shot tasks. Perplexity results can be found in Table\u00a0A23 &\u00a0A24 at Appendix." + }, + "3": { + "table_html": "
\n
Table 3: Deployment of weight-only quantization through MLC-LLM. We report the memory size of quantized weights (denoted as \u2018WM\u2019), the running memory (denoted as \u2018RM\u2019), and the inference speed on an NVIDIA A100-80G.
LLaMA | 7B (WM / RM / token/s) | 13B (WM / RM / token/s) | 30B (WM / RM / token/s) | 65B (WM / RM / token/s)
FP | 12.6G / 14.4G / 69.2 | 24.3G / 27.1G / 52.5 | 60.6G / 66.1G / 23.9 | OOM / - / -
W4A16g128 | 3.8G / 5.7G / 134.2 | 7.0G / 10.0G / 91.3 | 16.7G / 21.7G / 43.6 | 33.0G / 41.0G / 24.3
W3A16g128 | 3.2G / 5.1G / 83.4 | 5.8G / 8.7G / 57.6 | 13.7G / 18.7G / 29.0 | 27.0G / 35.1G / 15.2
W2A16g128 | 2.2G / 4.1G / 83.9 | 4.0G / 7.5G / 92.6 | 9.2G / 14.1G / 36.7 | 18.0G / 25.6G / 24.8
\n
", + "capture": "Table 3: Deployment of weight-only quantization through MLC-LLM. We report the memory size of quantized weights (denoted as \u2018WM\u2019) and the running memory (denoted as \u2018RM\u2019) and speed in NVIDIA A100-80G. " + }, + "4": { + "table_html": "
\n
Table A1: Distinction of existing equivalent transformation methods.
\n
Method | ET operation | ET position | ET parameters | application
SmoothQuant | scaling | linear layer | pre-defining | weight-activation quantization
AWQ | scaling | linear layer | grid searching | weight-only quantization
OP+ | scaling & shifting | linear layer | grid searching for scaling and pre-defining for shifting | weight-activation quantization
OmniQuant | scaling & shifting | linear layer & attention | gradient-based optimization | weight-only & weight-activation quantization
\n
\n
", + "capture": "Table A1: Distinction of existing equivalent transformation methods." + }, + "5": { + "table_html": "
\n
Table A2: Effect of combination of equivalent transformation and weight clipping. We report the average perplexity of WikiText2 and C4, and the average accuracy on 6 zero-shot tasks like Table 2.
LLaMa-7B W4A4 | Average PPL | Average Acc.
SmoothQuant | 28.78 | 38.41
LET | 16.97 | 48.83
LET + grid-searched WC | 15.82 | 49.59
SmoothQuant + LWC | 15.80 | 50.15
LET + LWC | 12.87 | 52.65
\n
", + "capture": "Table A2: Effect of combination of equivalent transformation and weight clipping. We report the average perplexity of WikiText2 and C4, and the average accuracy on 6 zero-shot tasks like Table 2." + }, + "6": { + "table_html": "
\n
Table A3: Efficacy of each component. WikiText2 perplexity is reported in this table. \u2018-\u2019 indicates removing the corresponding module from the overall proposed method.
PPL | LLaMA-13B (W4A4 / W3A16) | OPT-13B (W4A4 / W3A16)
LWC+LET | 10.87 / 5.65 | 11.65 / 10.87
-LWC | 20.75 / 7.65 | 15.23 / 12.98
-LET | 5.4e3 / 5.68 | 7.8e3 / 11.29
-LWC-LET | 1.8e3 / 10.68 | 7.8e5 / 4.6e3
\n
", + "capture": "Table A3: Efficacy of each component. WikiText2 perplexity1 is reported in this table. \u2018-\u2019 indicats remove the corresponding module from the overall proposed methods." + }, + "7": { + "table_html": "
\n
Table A4: Design choices of learnable equivalent transformation. WikiText2 perplexity is reported in this table.
PPL | LLaMA-13B (W4A4 / W3A16) | OPT-13B (W4A4 / W3A16)
LWC+LET | 10.87 / 5.65 | 11.65 / 10.87
-shifting | 11.47 / 5.65 | 13.64 / 10.87
-attention | 11.34 / 5.65 | 11.79 / 10.87
\n
", + "capture": "Table A4: Design choices of learnable equivalent transformation. WikiText2 perplexity1 is reported in this table. " + }, + "8": { + "table_html": "
\n
Table A5: Impact of LET at each position. \u2018-\u2019 indicates removing the corresponding LET. We remove the LET from each position in turn and report the average perplexity on WikiText2 and C4 and the average accuracy on 6 zero-shot tasks, as in Table 2.
LLaMa-7B | Average PPL | Average Acc.
W4A4 | 12.87 | 52.65
-[ln1, (q_proj, k_proj, v_proj)] | 19.87 | 46.79
-[v_proj, out_proj] | 13.03 | 51.68
-[Q,K] | 13.34 | 51.47
-[ln2, fc1] | 14.47 | 51.04
\n
", + "capture": "Table A5: Impact of LET on each position. \u2018-\u2019 indicates removing corresponding LET. We respectively remove the LET from each layer, and reporting the average perplexity of WikiText2 and C4, and the average accuracy on 6 zero-shot tasks like Table 2." + }, + "9": { + "table_html": "
\n
Table A6: Impact of initialization of LET. We report the average perplexity of WikiText2 and C4, and the average accuracy on 6 zero-shot tasks like Table 2.
LLaMa-7B | Average PPL | Average Acc.
W4A4 | 12.87 | 52.65
initialize scaling as 1 | 13.64 | 51.37
initialize shifting as 0 | 12.95 | 52.22
\n
", + "capture": "Table A6: Impact of initialization of LET. We report the average perplexity of WikiText2 and C4, and the average accuracy on 6 zero-shot tasks like Table 2." + }, + "10": { + "table_html": "
\n
Table A7: Impact of Softmax quantization. We report the average perplexity of WikiText2 and C4, and the average accuracy on 6 zero-shot tasks like Table 2.
LLaMa-7B | Average PPL | Average Acc.
W4A4 + Softmax 16bit | 12.87 | 52.65
W4A4 + Softmax 8bit | 12.91 | 51.93
W4A4 + Softmax 6bit | 13.20 | 51.70
W4A4 + Softmax 4bit | 18.80 | 48.52
\n
", + "capture": "Table A7: Impact of Softmax quantization. We report the average perplexity of WikiText2 and C4, and the average accuracy on 6 zero-shot tasks like Table 2." + }, + "11": { + "table_html": "
\n
Table A8: Impact of iterative training of LWC and LET. We report the average perplexity of WikiText2 and C4, and the average accuracy on 6 zero-shot tasks like Table 2.
LLaMa-7B W4A4 | Average PPL | Average Acc.
simultaneously | 12.87 | 52.65
each iteration | 13.56 | 50.91
each epoch | 13.51 | 52.06
each epoch + double training epochs 4bit | 12.80 | 52.50
\n
", + "capture": "Table A8: Impact of iterative training of LWC and LET. We report the average perplexity of WikiText2 and C4, and the average accuracy on 6 zero-shot tasks like Table 2." + }, + "12": { + "table_html": "
\n
Table A9: Ablation of training time. We train LLaMA-7B with different quantization configurations on 128 2048-token segments from WikiText2 for various numbers of epochs. \u20180\u2019 indicates initialization only, without fine-tuning. WikiText2 perplexity is reported in this table.
Epochs | W4A16 | W3A16 | W2A16 | W6A6 | W4A4
0 | 6.29 | 24.04 | 1.1e5 | 6.16 | 33.93
10 | 5.87 | 6.51 | 27.49 | 5.96 | 12.04
20 | 5.85 | 6.49 | 17.46 | 5.95 | 11.26
40 | 5.86 | 6.47 | 15.47 | 5.95 | 11.23
80 | - | - | 14.77 | - | -
\n
", + "capture": "Table A9: Ablation of training time. We train LLaMA-7B with different quantization configuration on 128 2048-tokens segments from WikiText2 over various epochs. \u20180\u2019 indicates only initialization without fine-tuning. Wikitext perplexity is reported in this table." + }, + "13": { + "table_html": "
\n
Table A10: Ablation of calibration dataset.
Calibration Dataset (LLaMA-7B / PPL) | W3A16 WikiText2 | W3A16 C4 | W4A4 WikiText2 | W4A4 C4
WikiText2 | 6.47 | 8.19 | 11.23 | 14.61
C4 | 6.67 | 8.13 | 12.17 | 14.24
Pile | 6.69 | 8.17 | 12.04 | 14.22
Variance | 0.009 | 0.0006 | 0.17 | 0.03
\n
", + "capture": "Table A10: Ablation of calibration dataset." + }, + "14": { + "table_html": "
\n
Table A11: Ablation of sample number of calibration dataset.
Sample Number (LLaMA-7B / PPL) | W3A16 WikiText2 | W3A16 C4 | W4A4 WikiText2 | W4A4 C4
16 | 6.47 | 8.18 | 11.56 | 14.84
32 | 6.47 | 8.18 | 11.48 | 14.80
64 | 6.48 | 8.19 | 11.40 | 14.57
128 | 6.47 | 8.19 | 11.23 | 14.61
256 | 6.46 | 8.19 | 11.41 | 14.90
\n
", + "capture": "Table A11: Ablation of sample number of calibration dataset." + }, + "15": { + "table_html": "
\n
Table A12: OmniQuant runtime on the LLaMA family. The times correspond to training on 128 2048-token segments for 20 epochs with a batch size of 1 on a single NVIDIA A100-80G.
LLaMA | 7B | 13B | 30B | 65B
weight-only | 1.1h | 2.2h | 4.5h | 8.9h
weight-activation | 1.6h | 3.3h | 7.3h | 14.4h
\n
", + "capture": "Table A12: Omniquant runtime on LLaMA family. The time correspond to training 128 2048-tokes segment over 20 epochs and a batch size of 1 on a single NVIDIA A100-80G." + }, + "16": { + "table_html": "
\n
Table A13: Distance between the quantized model and the full-precision model. The weight distance indicates the average distance between quantized and full-precision weights; the block-output distance denotes the distance between the outputs of the last transformer block.
LLaMA-7B quantization | weight distance w/o LWC | weight distance w/ LWC | block-output distance w/o LWC | block-output distance w/ LWC
W2A16g128 | 0.0089 | 0.0082 | 3.24 | 1.36
W2A16g64 | 0.0098 | 0.0086 | 3.51 | 1.44
W3A16 | 0.0062 | 0.0044 | 2.80 | 1.05
W3A16g128 | 0.0042 | 0.0040 | 1.37 | 0.79
W4A16 | 0.0028 | 0.0024 | 0.98 | 0.61
W4A16g128 | 0.0020 | 0.0019 | 0.68 | 0.47
\n
", + "capture": "Table A13: distance between quantized model and full-precision model. indicates the average distance between quantized weight and full-precision weight. denotes the distance between the output of last transformer block." + }, + "17": { + "table_html": "
\n
Table A14: WikiText2 perplexity of clipping-based quantization methods. For a fair comparison, we reproduce LSQ and PACT by replacing LWC in our pipeline with them.
LLaMA-7B / PPL: Method | W3A16 | W4A4
FP | 5.68
MinMax | 25.73 | 14.49
PACT (Choi et al. (2018)) | 6.95 | 18.25
LSQ (Esser et al. (2019)) | 6.63 | 15.03
LWC (Ours) | 6.47 | 11.26
\n
", + "capture": "Table A14: WikiText2 perplexity of clipping-based quantization methods. For fair comparison, we reproduce LSQ and PACT by replace LWC in our pipeline with them." + }, + "18": { + "table_html": "
\n
Table A15: Comparisons with SpQR and SqueezeLLM.
Size | Method | Avg bits | Wiki2 | C4
LLaMa-1-7B | – | 16.00 | 5.68 | 7.08
LLaMa-1-7B | SpQR | 3.94 | 5.87 | 7.28
LLaMa-1-7B | SqueezeLLM | 4.07 | 5.79 | 7.20
LLaMa-1-7B | SqueezeLLM | 4.27 | 5.77 | 7.18
LLaMa-1-7B | OmniQuant | 4.16 | 5.77 | 7.21
LLaMa-1-7B | SqueezeLLM | 3.05 | 6.20 | 7.67
LLaMa-1-7B | SqueezeLLM | 3.24 | 6.13 | 7.56
LLaMa-1-7B | OmniQuant | 3.15 | 6.15 | 7.75
LLaMa-1-13B | – | 16.00 | 5.09 | 6.61
LLaMa-1-13B | SpQR | 3.96 | 5.22 | 6.72
LLaMa-1-13B | SqueezeLLM | 4.07 | 5.17 | 6.69
LLaMa-1-13B | SqueezeLLM | 4.26 | 5.17 | 6.68
LLaMa-1-13B | OmniQuant | 4.16 | 5.17 | 6.69
LLaMa-1-13B | SqueezeLLM | 3.04 | 5.51 | 7.01
LLaMa-1-13B | SqueezeLLM | 3.24 | 5.45 | 6.92
LLaMa-1-13B | OmniQuant | 3.15 | 5.44 | 7.05
LLaMa-1-30B | – | 16.00 | 4.10 | 5.98
LLaMa-1-30B | SpQR | 3.89 | 4.25 | 6.08
LLaMa-1-30B | SqueezeLLM | 4.06 | 4.20 | 6.05
LLaMa-1-30B | SqueezeLLM | 4.25 | 4.18 | 6.04
LLaMa-1-30B | OmniQuant | 4.16 | 4.19 | 6.06
LLaMa-1-30B | SqueezeLLM | 3.04 | 4.56 | 6.31
LLaMa-1-30B | SqueezeLLM | 3.24 | 4.44 | 6.23
LLaMa-1-30B | OmniQuant | 3.15 | 4.56 | 6.37
\n
", + "capture": "Table A15: Comparisons with SpQR and SqueezeLLM." + }, + "19": { + "table_html": "
\n
Table A16: Average MMLU accuracy of LLaMa-7B.
LLaMa-1-7B (FP: 38.41%) | W4A16g128 | W3A16g128 | W2A16g128 | W4A4
RTN | 37.37% | 33.43% | 22.55% | 23.31
GPTQ | 35.39% | 30.53% | 23.83% | -
AWQ | 37.71% | 35.43% | 22.58% | -
OP+ | - | - | - | 25.72
OmniQuant | 37.50% | 35.60% | 26.03% | 26.93
\n
", + "capture": "Table A16: Average MMLU accuracy of LLaMa-7B." + }, + "20": { + "table_html": "
\n
Table A17: Performance of weight-activation quantization on the LLaMA-1-7B model with asymmetric (unequal) weight and activation bit-widths.
\n
#Bits | Method | WikiText2 PPL | C4 PPL | Avg. PPL | PIQA | ARC-e | ARC-c | BoolQ | HellaSwag | Winogrande | Avg. Acc (%)
FP16 | - | 5.68 | 7.08 | 6.38 | 77.47 | 52.48 | 41.46 | 73.08 | 73.00 | 67.07 | 64.09
W4A8 | OmniQuant | 5.87 | 7.34 | 6.60 | 77.36 | 51.85 | 38.65 | 70.67 | 71.20 | 64.71 | 62.40
W4A6 | OmniQuant | 6.09 | 7.63 | 6.85 | 75.73 | 51.51 | 38.31 | 68.28 | 70.79 | 65.27 | 61.64
W8A4 | OmniQuant | 10.27 | 12.77 | 11.52 | 69.47 | 45.87 | 32.84 | 59.08 | 58.66 | 54.85 | 53.46
\n
\n
", + "capture": "Table A17: Performance of weights and activations quantization on LLaMA-1-7B model with asymmetric bits." + }, + "21": { + "table_html": "
\n
Table A18: Weight-only quantization on Falcon-180B.
\n
Falcon-180b: Method | Bit# | Memory | Devices | Wiki PPL | PTB PPL | C4 PPL | PIQA | ARC-e | Arc-c | BoolQ | HellaSwag | Winogrande
- | BF16/FP16 | 335GB | 5xA100 80GB | 3.29 | 6.64 | 6.31 | 84.82 | 84.20 | 60.83 | 86.85 | 85.91 | 80.58
RTN | W3A16g512 | 65GB | 1xA100 80GB | 5.33 | 8.08 | 8.34 | 83.48 | 80.85 | 55.46 | 78.37 | 81.05 | 77.97
OmniQuant | W3A16g512 | 65GB | 1xA100 80GB | 3.71 | 6.95 | 6.71 | 84.71 | 82.91 | 60.92 | 84.03 | 84.96 | 79.40
\n
\n
", + "capture": "Table A18: Weight-only quantization on Falcon-180B." + }, + "22": { + "table_html": "
\n
Table A19: C4 perplexity of weight-only quantization results on LLaMA-1 and LLaMA-2 models. Continuation of Table\u00a01.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nLLaMA1&2 / PPL\n1-7B1-13B1-30B1-65B2-7B2-13B2-70B
FP16-7.086.615.985.626.976.465.52
\n\nW2A16\nRTN1.3e55.6e42.7e42.2e44.8e47.2e42.4e4
GPTQ689.132.5e3169.8040.58NAN323.1248.82
OmniQuant24.8918.3113.8910.7790.6426.7612.28
\n\nW2A16\ng128\nRTN1.0e3447.6499.4517.154.9e3139.6542.13
GPTQ27.7115.2911.9311.9933.7020.97NAN
AWQ1.9e52.3e52.4e57.5e41.7e59.4e4-
OmniQuant12.9710.369.368.0015.0211.058.52
\n\nW2A16\ng64\nRTN151.4376.0030.0711.34475.3528.6913.43
GPTQ17.7111.709.9210.0719.4012.48NAN
AWQ2.8e52.2e52.3e57.4e41.6e59.5e4-
OmniQuant11.789.758.657.6012.7210.057.88
\n\nW3A16\nRTN28.2613.2228.6612.79402.3512.5110.02
GPTQ9.498.167.296.719.818.026.57
AWQ13.269.1312.677.1123.8513.07-
OmniQuant8.197.326.576.078.657.446.06
\n\nW3A16\ng128\nRTN8.627.496.586.108.407.186.02
GPTQ7.857.106.476.007.897.005.85
AWQ7.927.076.375.947.846.94-
OmniQuant7.757.056.375.937.756.985.85
\n\nW4A16\nRTN7.936.986.345.857.716.835.79
GPTQ7.436.846.205.807.376.705.67
AWQ7.526.866.175.777.686.74-
OmniQuant7.346.766.115.737.356.655.65
\n\nW4A16\ng128\nRTN7.376.696.065.697.246.585.63
GPTQ7.216.696.065.697.126.565.58
AWQ7.216.706.055.687.136.56-
OmniQuant7.216.696.065.687.126.565.58
\n
", + "capture": "Table A19: C4 perplexity of Weight-only quantization results in LLaMA-1 and LLaMA-2 models Continue of Table\u00a01." + }, + "23": { + "table_html": "
\n
Table A20: WikiText2 perplexity of Weight-only quantization results in OPT models.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nOPT / PPL\n125M1.3B2.7B6.7B13B30B66B
FP16-27.6514.6312.4710.8610.129.569.34
\n\nW2A16\ng128\nRTN7.2e31.3e45.7e47.8e37.6e41.3e43.6e5
GPTQ597.66115.1661.5920.1821.3612.7182.10
AWQ251.8447.9728.5016.2014.3212.3114.54
OmniQuant75.4323.9518.1314.4312.9411.3930.84
\n\nW2A16\ng64\nRTN7.0e31.0e419.3e47.6e31.8e48.2e31.1e4
GPTQ204.4049.5829.3716.8116.6511.87356.01
AWQ124.1829.7820.6414.6313.2811.5912.74
OmniQuant62.5621.4016.7613.5712.3311.0010.59
\n\nW3A16\nRTN1.2e31.3e41.6e46.5e34.6e31.5e36.1 e3
GPTQ53.0521.1716.8315.0911.7310.3014.42
AWQ69.4328.01263.1015.1320.0935.744.5e3
OmniQuant35.6616.6813.8011.6510.8710.009.83
\n\nW3A16\ng128\nRTN51.22119.00297.9823.5446.0318.80136.89
GPTQ39.2416.4713.6911.6510.359.7310.96
AWQ36.7416.3213.5811.4110.689.859.60
OmniQuant32.2515.7213.1811.2710.479.799.53
\n\nW4A16\nRTN37.2848.1716.9212.1011.3210.97110
GPTQ31.4315.5612.8211.4110.319.639.55
AWQ32.2815.4912.9311.3010.399.779.61
OmniQuant29.4515.0412.7611.0310.309.659.65
\n\nW4A16\ng128\nRTN30.4715.2913.0211.1510.309.949.65
GPTQ29.8114.8912.5210.9310.179.589.34
AWQ29.1514.9412.7410.9310.219.599.40
OmniQuant28.8614.8812.6510.9610.209.629.37
\n
", + "capture": "Table A20: WikiText2 perplexity of Weight-only quantization results in OPT models." + }, + "24": { + "table_html": "
\n
Table A21: PTB perplexity of Weight-only quantization results in OPT models.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nOPT / PPL\n125M1.3B2.7B6.7B13B30B66B
FP16-32.5416.9615.1113.0812.3311.8411.36
\n\nW2A16\ng128\nRTN4.6e37.1e32.5e45.7e33.0e46.2e31.4e5
GPTQ655.17130.8861.3625.2420.4615.15323.23
AWQ263.8871.8743.1519.4917.6114.9219.33
OmniQuant126.4934.3325.2818.9216.7414.51139.17
\n\nW2A16\ng64\nRTN5.1e39.4e37.7e46.1e38.2e34.1e36.2e3
GPTQ245.2855.6136.1219.4517.0214.0588.92
AWQ143.1841.1925.0818.0015.8314.9215.72
OmniQuant112.1030.3622.6317.5815.7013.9813.51
\n\nW3A16\nRTN1.2e31.1e41.0e45.2e33.6e31.4e33.6e3
GPTQ34.0527.3915.9413.7513.7112.5421.16
AWQ80.7333.20224.1118.4635.4566.683.4e3
OmniQuant45.2920.4217.0814.2313.4912.5412.06
\n\nW3A16\ng128\nRTN64.67222.13337.7539.9065.3334.27309.69
GPTQ45.1719.9017.0614.2412.8412.5413.27
AWQ44.0719.5916.5213.9812.8766.683.4e3
OmniQuant40.7619.0616.2913.7712.9612.1911.71
\n\nW4A16\nRTN44.9833.6322.2316.0515.4014.17274.23
GPTQ37.7518.2315.9413.7512.5811.9811.58
AWQ38.7418.3515.7013.5912.7212.0611.58
OmniQuant34.9417.8015.5213.4112.6211.9511.86
\n\nW4A16\ng128\nRTN36.5033.6322.2316.0515.4014.1711.79
GPTQ35.4817.4115.4213.2112.4211.8911.51
AWQ34.9517.4615.3313.2812.4611.9011.43
OmniQuant34.2817.4015.2813.2512.4611.9411.40
\n
", + "capture": "Table A21: PTB perplexity of Weight-only quantization results in OPT models." + }, + "25": { + "table_html": "
\n
Table A22: C4 perplexity of Weight-only quantization results in OPT models.
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
\nOPT / PPL\n125M1.3B2.7B6.7B13B30B66B
FP16-24.6014.7213.1611.7411.1910.6910.28
\n\nW2A16\ng128\nRTN5.0e37.7e33.8e45.2e32.8e46.5e32.6e5
GPTQ597.6660.8833.8318.5516.3412.89598.81
AWQ168.3538.3826.4116.4814.7312.9815.42
OmniQuant80.1027.3321.1116.6714.9213.1273.83
\n\nW2A16\ng64\nRTN3.9e37.3e31.2e56.3e37.5e34.0e38.4e3
GPTQ133.5131.3123.2316.2414.4812.2458.60
AWQ90.1927.3420.0115.2013.9012.4313.31
OmniQuant64.0123.7119.1615.4414.1612.8012.13
\n\nW3A16\nRTN722.836.1e31.2e45.8e33.3e31.4e33.6e3
GPTQ37.7519.4513.7515.6712.2811.3413.68
AWQ55.7324.56154.4915.8423.7155.013.8e3
OmniQuant32.1717.1014.9312.7812.1311.3710.82
\n\nW3A16\ng128\nRTN40.13126.47372.2332.5644.1225.70286.87
GPTQ30.0816.4714.5412.4811.5810.9111.35
AWQ30.3916.2714.1912.3011.6110.9610.53
OmniQuant29.3416.1114.1512.3111.6310.9810.51
\n\nW4A16\nRTN31.5824.6817.6113.3812.3511.90249.54
GPTQ27.1215.5713.7512.1511.3610.8010.50
AWQ27.6415.6513.7112.0411.4210.8310.41
OmniQuant26.3615.2813.5811.9711.4110.8010.63
\n\nW4A16\ng128\nRTN26.7915.7113.7912.3111.5110.9410.54
GPTQ25.9615.0513.4011.8711.2610.7410.37
AWQ25.9015.0413.3911.8711.2810.7510.34
OmniQuant25.6315.0313.3811.8511.2910.7510.33
\n
", + "capture": "Table A22: C4 perplexity of Weight-only quantization results in OPT models. " + }, + "26": { + "table_html": "
\n
Table A23: WikiText2 perplexity of weight-activation quantization results on LLaMA-1 and LLaMA-2 models. Continuation of Table\u00a02.
LLaMA1&2 / PPL | 1-7B | 1-13B | 1-30B | 1-65B | 2-7B | 2-13B
FP16 | 5.68 | 5.09 | 4.10 | 3.53 | 5.47 | 4.88
W6A6 SmoothQuant | 6.03 | 5.42 | 4.55 | 3.88 | 6.20 | 5.18
W6A6 OmniQuant | 5.96 | 5.28 | 4.38 | 3.75 | 5.87 | 5.14
W4A4 SmoothQuant | 25.25 | 40.05 | 192.40 | 275.53 | 83.12 | 35.88
W4A4 OmniQuant | 11.26 | 10.87 | 10.33 | 9.17 | 14.26 | 12.30
\n
", + "capture": "Table A23: WikiText2 perplexity of weight-activation quantization results in LLaMA-1 and LLaMA-2 models Continue of Table\u00a02." + }, + "27": { + "table_html": "
\n
Table A24: C4 perplexity of weight-activation quantization results on LLaMA-1 and LLaMA-2 models. Continuation of Table\u00a02.
LLaMA1&2 / PPL | 1-7B | 1-13B | 1-30B | 1-65B | 2-7B | 2-13B
FP16 | 7.08 | 6.61 | 5.98 | 5.62 | 6.97 | 6.46
W6A6 SmoothQuant | 7.47 | 6.97 | 6.34 | 5.99 | 7.76 | 6.76
W6A6 OmniQuant | 7.43 | 6.84 | 6.22 | 5.82 | 7.48 | 6.74
W4A4 SmoothQuant | 32.32 | 47.18 | 122.38 | 244.35 | 77.27 | 43.19
W4A4 OmniQuant | 14.51 | 13.78 | 12.49 | 11.28 | 18.02 | 14.55
\n
", + "capture": "Table A24: C4 perplexity of weight-activation quantization results in LLaMA-1 and LLaMA-2 models. Continue of Table\u00a02." + }, + "28": { + "table_html": "
\n
Table A25: Weight-activation quantization results of OPT models. We report perplexity on three datasets: WikiText2 (WIKI), Penn Treebank (PT), and C4. RPTQ indicates the data from the RPTQ\u00a0(Yuan et\u00a0al. (2023)) paper, which keeps the outputs of LN and SoftMax at 8-bit. The reproduced RPTQ rows use our setting, which quantizes all activations into low bits except for keeping the softmax output at full precision.
\n
OPT / PPL | OPT-6.7b (WIKI / PT / C4) | OPT-13b (WIKI / PT / C4) | OPT-30b (WIKI / PT / C4) | OPT-66b (WIKI / PT / C4)
FP16 | 10.86 / 13.09 / 11.74 | 10.13 / 12.34 / 11.20 | 9.56 / 11.84 / 10.69 | 9.34 / 11.36 / 10.28
W6A6 SmoothQuant | 11.34 / 13.82 / 12.14 | 10.56 / 12.76 / 11.40 | 9.67 / 12.01 / 10.81 | 10.72 / 13.25 / 11.60
W6A6 RPTQ (paper) | 11.19 / 13.98 / 12.08 | 11.00 / 15.23 / 11.68 | 10.22 / 14.95 / 11.73 | 9.45 / 13.03 / 10.62
W6A6 RPTQ (reproduced) | 10.96 / 13.24 / 11.86 | 10.25 / 12.60 / 11.31 | 9.60 / 12.23 / 10.83 | 9.48 / 12.61 / 10.39
W6A6 OmniQuant | 10.96 / 13.20 / 11.81 | 10.21 / 12.47 / 11.27 | 9.62 / 11.92 / 10.76 | 9.42 / 11.42 / 10.32
W4A4 SmoothQuant | 1.8e4 / 1.4e4 / 1.5e4 | 7.4e3 / 6.5e3 / 5.6e3 | 1.2e4 / 7.8e3 / 8.3e3 | 2.2e5 / 1.0e5 / 1.8e5
W4A4 RPTQ (paper) | 12.00 / 15.17 / 12.85 | 12.74 / 15.76 / 14.71 | 11.15 / 14.11 / 13.48 | 12.23 / 18.87 / 15.93
W4A4 RPTQ (reproduced) | 17.83 / 25.10 / 19.91 | 16.45 / 23.01 / 16.80 | 11.50 / 14.87 / 12.81 | 11.16 / 13.73 / 11.78
W4A4 OmniQuant | 12.24 / 15.54 / 13.56 | 11.65 / 15.89 / 13.46 | 10.60 / 13.75 / 11.89 | 10.29 / 13.19 / 11.35
\n
\n
", + "capture": "Table A25: Weight-activation quantization results of OPT Models. We report perplexity on three datasets: WikiText2 (WIKI), Pen Treebank (PT), and C4. RPTQ indicates the data from RPTQ\u00a0(Yuan et\u00a0al. (2023)) paper, which keeps the output of LN and SoftMax as 8-bit. RPTQ represents reproducing RPTQ with our setting that quantizes all activation into low-bit except keeping the softmax output at full precision. " + } + }, + "image_paths": { + "1": { + "figure_path": "2308.13137v3_figure_1.png", + "caption": "Figure 1: (a) provides an overview of LLaMA-7B with W4A4 quantization, highlighting OmniQuant\u2019s ability to achieve quantization-aware training (QAT) performance with post-training quantization (PTQ) time and data efficiency. (b) and (c) showcase the perplexity (low is better) of quantized LLaMA-13B across different bit-widths on WikiText2.", + "url": "http://arxiv.org/html/2308.13137v3/x1.png" + }, + "2": { + "figure_path": "2308.13137v3_figure_2.png", + "caption": "Figure 2: Characteristics of OmniQuant on LLaMA family.", + "url": "http://arxiv.org/html/2308.13137v3/x2.png" + }, + "3": { + "figure_path": "2308.13137v3_figure_3.png", + "caption": "Figure 3: Details of OmniQuant in a transformer block. Note that all learnable parameters can be eliminated after quantization.", + "url": "http://arxiv.org/html/2308.13137v3/x3.png" + }, + "4": { + "figure_path": "2308.13137v3_figure_4.png", + "caption": "Figure 4: Comparing W3A16g128 quantization among RTN, AWQ (Lin et al., 2023), and OmniQuant under Vicuna-Bench (Chiang et al., 2023). Win rates are calculated without considering tie samples. A higher win rate indicates the better performance of the former of vs. pairs.", + "url": "http://arxiv.org/html/2308.13137v3/x4.png" + }, + "5": { + "figure_path": "2308.13137v3_figure_5.png", + "caption": "Figure A1: Visualization of learned clipping scale in different quantization settings in LLaMA-7B.", + "url": "http://arxiv.org/html/2308.13137v3/x5.png" + }, + "6": { + "figure_path": "2308.13137v3_figure_6.png", + "caption": "Figure A2: Visualization of activation of a linear layer in OPT-13B. (a) Original activation. (b) Activation after SmoothQuant. (c) Activation after proposed learnable equivalent transformation. Similar phenomena can be observed in different layer and different models.", + "url": "http://arxiv.org/html/2308.13137v3/x6.png" + }, + "7": { + "figure_path": "2308.13137v3_figure_7.png", + "caption": "Figure A3: Block-wise quantization error. Grid-searched methods such as AWQ (Lin et al., 2023) and Outlier Suppression + (Wei et al., 2023) produce a more significant error than our gradient-based optimization method.", + "url": "http://arxiv.org/html/2308.13137v3/x7.png" + }, + "8": { + "figure_path": "2308.13137v3_figure_8.png", + "caption": "Figure A4: Bit-level scaling laws for perplexity.", + "url": "http://arxiv.org/html/2308.13137v3/x8.png" + }, + "9": { + "figure_path": "2308.13137v3_figure_9.png", + "caption": "Figure A5: Weights range changing of different clipping-based methods during training. We plot the changing of weights range (maximum minus minimum) of the 3049-th output channel of the q-proj linear layer in the first LLaMa-1-7B block with W4A4 quantization. MinMax is the baseline which indicate withoud clipping. 
Similar phenomena can also be observed in other channels and other layers.", + "url": "http://arxiv.org/html/2308.13137v3/x9.png" + }, + "10": { + "figure_path": "2308.13137v3_figure_10.png", + "caption": "Figure A6: Performance overview. We display the trade-off curves for three model families. Each model showcases two quantization variants: W4A16g128 and W3A16g128. It is evident that OmniQuant markedly enhances the trade-off between perplexity and model size. Specifically, OmniQuant delivers a reduction of 0.81 in perplexity for an equivalent model size and achieves the same perplexity with only 0.33x of the model size.", + "url": "http://arxiv.org/html/2308.13137v3/x10.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Piqa: Reasoning about physical commonsense in natural language.", + "author": "Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al.", + "venue": "In Proceedings of the AAAI conference on artificial intelligence, volume 34, pp. 7432\u20137439, 2020.", + "url": null + } + }, + { + "2": { + "title": "Language models are few-shot learners.", + "author": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.", + "venue": "Advances in neural information processing systems, 33:1877\u20131901, 2020.", + "url": null + } + }, + { + "3": { + "title": "Sparks of artificial general intelligence: Early experiments with gpt-4.", + "author": "S\u00e9bastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al.", + "venue": "arXiv preprint arXiv:2303.12712, 2023.", + "url": null + } + }, + { + "4": { + "title": "Quip: 2-bit quantization of large language models with guarantees.", + "author": "Jerry Chee, Yaohui Cai, Volodymyr Kuleshov, and Christopher De Sa.", + "venue": "arXiv preprint arXiv:2307.13304, 2023.", + "url": null + } + }, + { + "5": { + "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023.", + "author": "Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing.", + "venue": "URL https://lmsys.org/blog/2023-03-30-vicuna/.", + "url": null + } + }, + { + "6": { + "title": "Pact: Parameterized clipping activation for quantized neural networks.", + "author": "Jungwook Choi, Zhuo Wang, Swagath Venkataramani, Pierce I-Jen Chuang, Vijayalakshmi Srinivasan, and Kailash Gopalakrishnan.", + "venue": "arXiv preprint arXiv:1805.06085, 2018.", + "url": null + } + }, + { + "7": { + "title": "Boolq: Exploring the surprising difficulty of natural yes/no questions.", + "author": "Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova.", + "venue": "arXiv preprint arXiv:1905.10044, 2019.", + "url": null + } + }, + { + "8": { + "title": "Think you have solved question answering? try arc, the ai2 reasoning challenge.", + "author": "Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord.", + "venue": "arXiv preprint arXiv:1803.05457, 2018.", + "url": null + } + }, + { + "9": { + "title": "The case for 4-bit precision: k-bit inference scaling laws.", + "author": "Tim Dettmers and Luke Zettlemoyer.", + "venue": "In International Conference on Machine Learning, pp. 7750\u20137774. 
PMLR, 2023.", + "url": null + } + }, + { + "10": { + "title": "Llm. int8 (): 8-bit matrix multiplication for transformers at scale.", + "author": "Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer.", + "venue": "arXiv preprint arXiv:2208.07339, 2022.", + "url": null + } + }, + { + "11": { + "title": "Qlora: Efficient finetuning of quantized llms.", + "author": "Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer.", + "venue": "arXiv preprint arXiv:2305.14314, 2023a.", + "url": null + } + }, + { + "12": { + "title": "Spqr: A sparse-quantized representation for near-lossless llm weight compression.", + "author": "Tim Dettmers, Ruslan Svirschevski, Vage Egiazarian, Denis Kuznedelev, Elias Frantar, Saleh Ashkboos, Alexander Borzunov, Torsten Hoefler, and Dan Alistarh.", + "venue": "arXiv preprint arXiv:2306.03078, 2023b.", + "url": null + } + }, + { + "13": { + "title": "Learned step size quantization.", + "author": "Steven K Esser, Jeffrey L McKinstry, Deepika Bablani, Rathinakumar Appuswamy, and Dharmendra S Modha.", + "venue": "arXiv preprint arXiv:1902.08153, 2019.", + "url": null + } + }, + { + "14": { + "title": "Gptq: Accurate post-training quantization for generative pre-trained transformers.", + "author": "Elias Frantar, Saleh Ashkboos, Torsten Hoefler, and Dan Alistarh.", + "venue": "arXiv preprint arXiv:2210.17323, 2022.", + "url": null + } + }, + { + "15": { + "title": "The pile: An 800gb dataset of diverse text for language modeling.", + "author": "Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al.", + "venue": "arXiv preprint arXiv:2101.00027, 2020.", + "url": null + } + }, + { + "16": { + "title": "A framework for few-shot language model evaluation.", + "author": "Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Golding, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, et al.", + "venue": "Version v0. 0.1. Sept, 2021.", + "url": null + } + }, + { + "17": { + "title": "Measuring massive multitask language understanding.", + "author": "Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt.", + "venue": "arXiv preprint arXiv:2009.03300, 2020.", + "url": null + } + }, + { + "18": { + "title": "You only segment once: Towards real-time panoptic segmentation.", + "author": "Jie Hu, Linyan Huang, Tianhe Ren, Shengchuan Zhang, Rongrong Ji, and Liujuan Cao.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
17819\u201317829, 2023.", + "url": null + } + }, + { + "19": { + "title": "Geometric-aware pretraining for vision-centric 3d object detection.", + "author": "Linyan Huang, Huijie Wang, Jia Zeng, Shengchuan Zhang, Liujuan Cao, Rongrong Ji, Junchi Yan, and Hongyang Li.", + "venue": "arXiv preprint arXiv:2304.03105, 2023.", + "url": null + } + }, + { + "20": { + "title": "Leveraging vision-centric multi-modal expertise for 3d object detection.", + "author": "Linyan Huang, Zhiqi Li, Chonghao Sima, Wenhai Wang, Jingdong Wang, Yu Qiao, and Hongyang Li.", + "venue": "Advances in Neural Information Processing Systems, 36, 2024.", + "url": null + } + }, + { + "21": { + "title": "Squeezellm: Dense-and-sparse quantization.", + "author": "Sehoon Kim, Coleman Hooper, Amir Gholami, Zhen Dong, Xiuyu Li, Sheng Shen, Michael W Mahoney, and Kurt Keutzer.", + "venue": "arXiv preprint arXiv:2306.07629, 2023.", + "url": null + } + }, + { + "22": { + "title": "Owq: Lessons learned from activation outliers for weight quantization in large language models.", + "author": "Changhun Lee, Jungyu Jin, Taesu Kim, Hyungjun Kim, and Eunhyeok Park.", + "venue": "arXiv preprint arXiv:2306.02272, 2023.", + "url": null + } + }, + { + "23": { + "title": "Brecq: Pushing the limit of post-training quantization by block reconstruction.", + "author": "Yuhang Li, Ruihao Gong, Xu Tan, Yang Yang, Peng Hu, Qi Zhang, Fengwei Yu, Wei Wang, and Shi Gu.", + "venue": "arXiv preprint arXiv:2102.05426, 2021.", + "url": null + } + }, + { + "24": { + "title": "Repq-vit: Scale reparameterization for post-training quantization of vision transformers.", + "author": "Zhikai Li, Junrui Xiao, Lianwei Yang, and Qingyi Gu.", + "venue": "In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 17227\u201317236, 2023.", + "url": null + } + }, + { + "25": { + "title": "Awq: Activation-aware weight quantization for llm compression and acceleration.", + "author": "Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, and Song Han.", + "venue": "arXiv preprint arXiv:2306.00978, 2023.", + "url": null + } + }, + { + "26": { + "title": "Qllm: Accurate and efficient low-bitwidth quantization for large language models.", + "author": "Jing Liu, Ruihao Gong, Xiuying Wei, Zhiwei Dong, Jianfei Cai, and Bohan Zhuang.", + "venue": "arXiv preprint arXiv:2310.08041, 2023a.", + "url": null + } + }, + { + "27": { + "title": "Nonuniform-to-uniform quantization: Towards accurate quantization via generalized straight-through estimation.", + "author": "Zechun Liu, Kwang-Ting Cheng, Dong Huang, Eric P Xing, and Zhiqiang Shen.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 4942\u20134952, 2022.", + "url": null + } + }, + { + "28": { + "title": "Llm-qat: Data-free quantization aware training for large language models.", + "author": "Zechun Liu, Barlas Oguz, Changsheng Zhao, Ernie Chang, Pierre Stock, Yashar Mehdad, Yangyang Shi, Raghuraman Krishnamoorthi, and Vikas Chandra.", + "venue": "arXiv preprint arXiv:2305.17888, 2023b.", + "url": null + } + }, + { + "29": { + "title": "Deja vu: Contextual sparsity for efficient llms at inference time.", + "author": "Zichang Liu, Jue Wang, Tri Dao, Tianyi Zhou, Binhang Yuan, Zhao Song, Anshumali Shrivastava, Ce Zhang, Yuandong Tian, Christopher Re, et al.", + "venue": "In International Conference on Machine Learning, pp. 22137\u201322176. 
PMLR, 2023c.", + "url": null + } + }, + { + "30": { + "title": "The penn treebank: Annotating predicate argument structure.", + "author": "Mitch Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger.", + "venue": "In Human Language Technology: Proceedings of a Workshop held at Plainsboro, New Jersey, March 8-11, 1994, 1994.", + "url": null + } + }, + { + "31": { + "title": "Pointer sentinel mixture models.", + "author": "Stephen Merity, Caiming Xiong, James Bradbury, and Richard Socher.", + "venue": "arXiv preprint arXiv:1609.07843, 2016.", + "url": null + } + }, + { + "32": { + "title": "Embodiedgpt: Vision-language pre-training via embodied chain of thought.", + "author": "Yao Mu, Qinglong Zhang, Mengkang Hu, Wenhai Wang, Mingyu Ding, Jun Jin, Bin Wang, Jifeng Dai, Yu Qiao, and Ping Luo.", + "venue": "arXiv preprint arXiv:2305.15021, 2023.", + "url": null + } + }, + { + "33": { + "title": "Up or down? adaptive rounding for post-training quantization.", + "author": "Markus Nagel, Rana Ali Amjad, Mart Van Baalen, Christos Louizos, and Tijmen Blankevoort.", + "venue": "In International Conference on Machine Learning, pp. 7197\u20137206. PMLR, 2020.", + "url": null + } + }, + { + "34": { + "title": "The refinedweb dataset for falcon llm: outperforming curated corpora with web data, and web data only.", + "author": "Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli, Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay.", + "venue": "arXiv preprint arXiv:2306.01116, 2023.", + "url": null + } + }, + { + "35": { + "title": "Exploring the limits of transfer learning with a unified text-to-text transformer.", + "author": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu.", + "venue": "The Journal of Machine Learning Research, 21(1):5485\u20135551, 2020.", + "url": null + } + }, + { + "36": { + "title": "Llama: Open and efficient foundation language models.", + "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric Hambro, Faisal Azhar, et al.", + "venue": "arXiv preprint arXiv:2302.13971, 2023a.", + "url": null + } + }, + { + "37": { + "title": "Llama 2: Open foundation and fine-tuned chat models.", + "author": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.", + "venue": "arXiv preprint arXiv:2307.09288, 2023b.", + "url": null + } + }, + { + "38": { + "title": "Outlier suppression: Pushing the limit of low-bit transformer language models.", + "author": "Xiuying Wei, Yunchen Zhang, Xiangguo Zhang, Ruihao Gong, Shanghang Zhang, Qi Zhang, Fengwei Yu, and Xianglong Liu.", + "venue": "Advances in Neural Information Processing Systems, 35:17402\u201317414, 2022.", + "url": null + } + }, + { + "39": { + "title": "Outlier suppression+: Accurate quantization of large language models by equivalent and optimal shifting and scaling.", + "author": "Xiuying Wei, Yunchen Zhang, Yuhang Li, Xiangguo Zhang, Ruihao Gong, Jinyang Guo, and Xianglong Liu.", + "venue": "arXiv preprint arXiv:2304.09145, 2023.", + "url": null + } + }, + { + "40": { + "title": "Smoothquant: Accurate and efficient post-training quantization for large language models.", + "author": "Guangxuan Xiao, Ji Lin, Mickael Seznec, Hao 
Wu, Julien Demouth, and Song Han.", + "venue": "In International Conference on Machine Learning, pp. 38087\u201338099. PMLR, 2023.", + "url": null + } + }, + { + "41": { + "title": "Lvlm-ehub: A comprehensive evaluation benchmark for large vision-language models.", + "author": "Peng Xu, Wenqi Shao, Kaipeng Zhang, Peng Gao, Shuo Liu, Meng Lei, Fanqing Meng, Siyuan Huang, Yu Qiao, and Ping Luo.", + "venue": "arXiv preprint arXiv:2306.09265, 2023.", + "url": null + } + }, + { + "42": { + "title": "Rptq: Reorder-based post-training quantization for large language models.", + "author": "Zhihang Yuan, Lin Niu, Jiawei Liu, Wenyu Liu, Xinggang Wang, Yuzhang Shang, Guangyu Sun, Qiang Wu, Jiaxiang Wu, and Bingzhe Wu.", + "venue": "arXiv preprint arXiv:2304.01089, 2023.", + "url": null + } + }, + { + "43": { + "title": "Hellaswag: Can a machine really finish your sentence?", + "author": "Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi.", + "venue": "arXiv preprint arXiv:1905.07830, 2019.", + "url": null + } + }, + { + "44": { + "title": "Opt: Open pre-trained transformer language models.", + "author": "Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al.", + "venue": "arXiv preprint arXiv:2205.01068, 2022.", + "url": null + } + }, + { + "45": { + "title": "Meta-transformer: A unified framework for multimodal learning.", + "author": "Yiyuan Zhang, Kaixiong Gong, Kaipeng Zhang, Hongsheng Li, Yu Qiao, Wanli Ouyang, and Xiangyu Yue.", + "venue": "arXiv preprint arXiv:2307.10802, 2023a.", + "url": null + } + }, + { + "46": { + "title": "Dynamic sparse no training: Training-free fine-tuning for sparse llms.", + "author": "Yuxin Zhang, Lirui Zhao, Mingbao Lin, Yunyun Sun, Yiwu Yao, Xingjia Han, Jared Tanner, Shiwei Liu, and Rongrong Ji.", + "venue": "arXiv preprint arXiv:2310.08915, 2023b.", + "url": null + } + }, + { + "47": { + "title": "Judging llm-as-a-judge with mt-bench and chatbot arena.", + "author": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al.", + "venue": "arXiv preprint arXiv:2306.05685, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2308.13137v3" +} \ No newline at end of file